CN115097827A - Road learning method for unmanned automobile - Google Patents


Info

Publication number
CN115097827A
Authority
CN
China
Prior art keywords
section
real
learning
stepping
road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210703298.7A
Other languages
Chinese (zh)
Other versions
CN115097827B (en)
Inventor
陈洁
王磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intelligent Networked Automobile Shandong Collaborative Innovation Research Institute Co ltd
Original Assignee
Intelligent Networked Automobile Shandong Collaborative Innovation Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intelligent Networked Automobile Shandong Collaborative Innovation Research Institute Co ltd filed Critical Intelligent Networked Automobile Shandong Collaborative Innovation Research Institute Co ltd
Priority to CN202210703298.7A
Publication of CN115097827A
Application granted
Publication of CN115097827B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214: Defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221: Defining a desired trajectory involving a learning process
    • G05D1/0223: Defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246: Using a video camera in combination with image processing means
    • G05D1/0276: Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a road learning method for an unmanned automobile. Road learning is divided into unobstructed road section learning and conventional road section learning. During unobstructed road section learning, brake learning determines a set of stepping-section angles and their corresponding upper-limit speeds, which together form the deceleration information; during real-time driving at different upper-limit speeds, the deceleration section used for deceleration braking is selected according to the relation between the real-time speed and the upper-limit speeds. Conventional road section learning is then performed, mainly car-following learning: a license plate is locked and marked as a feature point, an initial feature photo and a real-time feature photo group are determined, and how to decelerate is determined from the distance between the co-traveling object and the unmanned vehicle and from the change of the real-to-initial difference value Zi of that co-traveling object. The method disclosed in this application provides a safer driving mode for an unmanned automobile running on a fixed path.

Description

Road learning method for unmanned automobile
Technical Field
The invention relates to the technical field of unmanned automobiles, in particular to a road learning method of an unmanned automobile.
Background
Because of technical limitations, unmanned vehicles have not yet become common in daily life, but they are gradually being deployed on certain roads and in closed campuses.
The invention patent with publication number CN111273668A discloses a system for planning the motion trajectory of an unmanned vehicle on a structured road. The system comprises a sensing module, a positioning module, a lane-change decision module, a motion planning module and a trajectory tracking module. The lane-change decision module outputs decision actions according to the data collected by the sensing and positioning modules, and the motion planning module outputs an optimal trajectory to the trajectory tracking module according to the decision action. That invention uses deep reinforcement learning for decision making, while the trajectory planning, sensing and control modules based on the decision action are processed independently, which greatly improves the interpretability and operability of the decision-planning process compared with end-to-end methods and adapts well to existing unmanned-vehicle system architectures. However, for an unmanned automobile used on a specific road or in a closed campus, the driving route is fixed, and learning that route while handling real-time road conditions can make the vehicle run more efficiently.
Existing solutions for unmanned vehicles mainly address how to determine a driving strategy; they lack a link between when to decelerate and how to follow a preceding vehicle when driving over a fixed route.
Disclosure of Invention
The invention aims to provide a road learning method of an unmanned automobile;
the purpose of the invention can be realized by the following technical scheme:
a road learning method for an unmanned vehicle comprises the following steps:
step one, dividing road learning into two stages, namely unobstructed road section learning and conventional road section learning;
step two, learning the unobstructed road section, specifically as follows:
S1: acquiring a map of the corresponding unobstructed road section and constructing a road section model, wherein the road section model comprises the path route, the stop signs and the path speed limit;
S2: then carrying out brake learning according to the stop signs: marking the angle at which the brake pedal is pressed to the floor as the full-brake angle, and learning n real stepping sections Ci, i = 1..n; after determining the deceleration section according to each upper-limit speed, learning the stepping-section angle required to decelerate the unmanned automobile from that upper-limit speed to zero within the deceleration section, thereby obtaining a plurality of stepping-section angles and their corresponding upper-limit speeds, which are marked as the deceleration information;
S3: after brake learning is finished, automatically learning the route once according to the path length, the path speed limit and the stop signs, braking in the braking stage according to the deceleration information obtained from brake learning, automatically recognizing the traffic lights at signalized intersections, and driving according to the intersection rules;
S4: finishing the learning of the unobstructed road section;
step three, performing conventional road section learning, mainly car-following learning: locking a license plate and marking it as a feature point, determining an initial feature photo and a real-time feature photo group, and then, according to the distance between the co-traveling object and the unmanned vehicle, using the real-time pixel point group Pi, i = 1..X2, and the initial pixel points to determine the real-to-initial difference value Zi, i = 1..X2; when Zi does not decrease continuously, judging the object to be a co-traveling object, and then determining how to decelerate from the change of the real-to-initial difference value Zi of that co-traveling object;
determining the allowed road sections according to the route, and generating an observation signal according to the relation between the real-time speed of the unmanned automobile and the upper-limit speed; when an observation signal is generated, determining the target lane-change section from the relation between each allowed road section, the spacing distance, and the distance between the rear vehicle and the current unmanned automobile, and carrying out a targeted lane change;
and step four, driving the unmanned automobile on the set road section according to the learned driving behaviour.
Further, the unobstructed road section learning in step one means learning carried out when the specified road section has no or few people, generally when the pedestrian flow is lower than X1, where X1 is a preset value;
the conventional road section learning refers to learning carried out on the road section under normal driving conditions.
Further, the stop signs in step S1 refer to the various types of traffic lights and other blocking objects, and any marker that causes the vehicle to decelerate to a stop is uniformly marked as a stop sign.
Further, the brake learning in step S2 specifically comprises:
S201: acquiring the maximum speed of the road section and marking it as the upper-limit speed, and then setting a lower-limit speed preset by the administrator, the lower-limit speed usually being 25 km/h or 30 km/h;
S202: subtracting the lower-limit speed from the upper-limit speed to obtain the interval speed value;
S203: marking the pedal angle pressed during deceleration braking as the deceleration angle, and marking the angle at which the brake is pressed to the floor as the full-brake angle;
S204: taking every X1 degrees of the deceleration angle as one real stepping increment, where X1 is a preset value, usually 5 degrees; dividing the full-brake angle by the real stepping increment and rounding down, marking the resulting value as n, and obtaining n real stepping sections Ci, i = 1..n, wherein the angle of the last real stepping section Cn equals the remainder of the full-brake angle divided by the real stepping increment plus one real stepping increment;
S205: then acquiring the stop target and setting a deceleration section, the deceleration section being a specified distance ahead of the stop target and generally determined according to the upper-limit speed;
S206: within the corresponding deceleration section, learning the real stepping sections Ci in real time for each upper-limit speed, specifically as follows:
applying the angles corresponding to the real stepping sections Ci in turn, i.e. selecting sequentially from i = 1 to i = n, until the vehicle can be decelerated from the upper-limit speed to zero 0.5 m before the stop sign;
learning the real stepping section Ci for that upper-limit speed, marking it as the stepping angle, and marking the stepping angle together with its corresponding upper-limit speed as deceleration information;
S207: then learning all the upper-limit speeds in the same way to obtain all the deceleration information.
Further, the deceleration section in step S205 is determined as follows:
when the upper-limit speed is 30 km/h, the deceleration section is L1 m, where L1 is a preset value determined with reference to the braking distance and may be 10 m or another value;
when the upper-limit speed is 40 km/h, the deceleration section is L1 × 1.2 m;
when the upper-limit speed is 60 km/h, the deceleration section is L1 × 2 m;
when the upper-limit speed is 80 km/h, the deceleration section is L1 × 3 m.
Further, the car-following learning in step three is specifically as follows:
SS1: arranging a plurality of cameras on the unmanned automobile to obtain a surround-view video of the vehicle;
SS2: first acquiring the surround-view video, the surround-view video covering the range that can be captured by the cameras arranged on the unmanned automobile;
SS3: then dynamically analysing the surround-view video, specifically as follows:
SS31: while the unmanned automobile is driving, acquiring a photo of the object directly ahead and selecting a feature point; a license plate may be selected and compared through its license-plate characteristics, specifically: a plurality of license plates of different types are stored in a pre-stored library, and if the comparison similarity exceeds a set threshold, a selected signal is generated;
SS32: then marking the photos of the feature point as feature photos, and marking the feature photo acquired for the first time as the initial feature photo;
SS33: after a time T1, T1 being a preset value, acquiring the feature photo again and marking it as a real-time feature photo, and acquiring X2 photos in succession to obtain the real-time feature photo group; comparing the real-time feature photo group with the initial feature photo as follows:
acquiring the pixel points occupied by the license plate number in the real-time feature photo group and marking them as the real-time pixel point group; acquiring the corresponding pixel points of the initial feature photo in the same way and marking them as the initial pixel points;
marking the real-time pixel point group as Pi, i = 1..X2, and marking the initial pixel points as Cp; subtracting Cp from Pi to obtain the real-to-initial difference value Zi, i = 1..X2; when Zi does not decrease continuously, judging the object to be a co-traveling object, the co-traveling object being primarily an automobile;
SS34: monitoring the co-traveling object ahead; when its real-to-initial difference value keeps increasing, subtracting Cp from Pi, dividing the result by Cp and marking the obtained value as the difference growth ratio; when the difference growth ratio exceeds X3, X3 being a preset value, automatically generating a deceleration signal and reducing the speed by forty percent; if the difference growth ratio then continues to increase, generating a braking signal, automatically setting the real stepping section Ci of the brake to Cn, and, once the speed drops below 10 km/h, switching to the comfortable real stepping section, which is obtained as follows:
placing a test object on a vehicle seat, the weight of the test object being set by the administrator; braking from 10 km/h with successively lower real stepping sections, starting from the highest one; when the speed of the unmanned vehicle reaches zero, obtaining the forward displacement of the test object, and when this displacement is less than or equal to X4, X4 for example being 10 cm, marking the real stepping section used at that moment as the comfortable stepping section;
SS4: synchronously performing lane-change analysis: according to the path in the road section model, automatically acquiring the road sections into which the vehicle can currently steer before the next required turn and marking them as allowed road sections;
SS5: when the variation range of the difference growth ratio ahead on the current road section is lower than a preset range set by the administrator, and the real-time speed of the unmanned automobile is lower than seventy percent of the upper-limit speed, automatically generating an observation signal;
SS6: when an observation signal is generated, automatically acquiring the vehicles ahead on all allowed road sections and, using the pixel-to-distance conversion, marking the distance between the nearest obstacle ahead and the current unmanned automobile as the spacing distance; when the spacing distance exceeds the threshold distance, taking the allowed road section with the largest spacing distance, and, if that section has no co-traveling vehicle and the distance between the rear vehicle on it and the current unmanned automobile exceeds the second threshold distance, marking it as the target lane-change section;
the threshold distance and the second threshold distance being values preset by the administrator;
SS7: changing the unmanned automobile to the target lane-change section.
The invention has the beneficial effects that:
according to the invention, through two modes of unobstructed road section learning and conventional road section learning, when the conventional road section is learned, a plurality of section angles to be stepped and the corresponding upper limit speed combination form deceleration information are determined mainly through brake learning; determining the current real-time speed to be above the upper limit speed and below the upper limit speed according to the content of the related real-time speed when different upper limit speeds are driven in real time, and then selecting the corresponding upper limit speed below the corresponding upper limit speed to carry out deceleration braking;
then, performing conventional road section learning, mainly performing following learning, mainly locking a license plate, marking the license plate as a feature point, determining an initial feature photo and a real-time feature photo group, and then determining how to decelerate according to the distance between the same-row object and the unmanned vehicle and the change of the actual initial difference value Zi of the same-row object; and then determining whether to change the lane according to the condition of the allowed road, wherein the method disclosed by the application can provide a better safe driving mode for the unmanned automobile when the unmanned automobile runs on a fixed path.
Drawings
The invention will be further described with reference to the accompanying drawings.
FIG. 1 is a block flow diagram of the method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention relates to a road learning method of an unmanned automobile, which comprises the following steps:
step one, road learning is divided into two stages, unobstructed road section learning and conventional road section learning; the unobstructed road section learning is carried out when the specified road section has no or few people, generally when the pedestrian flow is lower than X1, where X1 is a preset value; it can be performed at night, and is generally carried out around midnight or in the early morning, when the number of people and vehicles on the road is below the preset value;
step two, learning the unobstructed road section, specifically as follows:
S1: acquiring a map of the corresponding unobstructed road section and constructing a road section model, wherein the road section model comprises the path route, the stop signs and the path speed limit; the stop signs refer to the various types of traffic lights and other blocking objects, and any marker that causes the vehicle to decelerate to a stop is uniformly marked as a stop sign; a minimal data-structure sketch of such a road section model follows;
s2: and then, brake learning is carried out according to the stop mark, and the specific brake learning mode is as follows:
s201: acquiring the highest speed of the road section, marking the highest speed as an upper limit speed, and then setting a lower limit speed preset by a manager, wherein the value of the lower limit speed is usually 25 kilometers per hour or 30 kilometers per hour;
s202: then subtracting the lower limit speed from the upper limit speed to obtain a zone speed value;
s203: then, when the brake is decelerated, the stepping angle is marked as a deceleration angle, and the angle of stepping the brake to the bottom is marked as a full brake angle;
s204: taking the deceleration angle as an actual stepping deceleration angle according to every X1 degrees, wherein X1 is a preset value and is usually 5 degrees; dividing the brake full angle by the real stepping reduction angle, then rounding, obtaining a numerical value after rounding, marking the numerical value as n, obtaining n real stepping sections Ci, i =1.. n, wherein the angle of the last real stepping section Cn is the remainder of the brake full angle divided by the real stepping reduction angle, and adding the real stepping reduction angle;
s205: then, acquiring a stopping target, and setting a deceleration section, wherein the deceleration section is a specified distance in front of the stopping target and is usually determined according to an upper limit speed, and the specific determination method is as follows:
when the upper limit speed is 30 Km/h, the deceleration section is L1 m, L1 is a preset value, and the value of L1 is determined by referring to the braking distance, wherein the specific value can be 10 m or other values;
when the upper limit speed is 40 Km/h, the deceleration section is L1 × 1.2;
when the upper limit speed is 60 Km/h, the deceleration section is L1 multiplied by 2;
when the upper limit speed is 80 Km/h, the deceleration section is L1 multiplied by 3;
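A minimal sketch of this table as a lookup follows; the function name, the default L1 of 10 m and the handling of speeds above 80 km/h are assumptions of this sketch.

```python
# Mapping an upper-limit speed to its deceleration-section length (S205 table).
def deceleration_section_m(upper_limit_kmh: float, l1_m: float = 10.0) -> float:
    """Deceleration-section length in metres for a given upper-limit speed."""
    brackets = [
        (30.0, 1.0),  # up to 30 km/h -> L1
        (40.0, 1.2),  # up to 40 km/h -> L1 x 1.2
        (60.0, 2.0),  # up to 60 km/h -> L1 x 2
        (80.0, 3.0),  # up to 80 km/h -> L1 x 3
    ]
    for limit_kmh, factor in brackets:
        if upper_limit_kmh <= limit_kmh:
            return l1_m * factor
    # Speeds above 80 km/h are not covered by the table above; reusing the largest
    # factor here is an assumption of this sketch only.
    return l1_m * 3.0

print(deceleration_section_m(60.0))  # 20.0 m with the assumed L1 = 10 m
```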
s206: in the corresponding deceleration section, real stepping section Ci real-time learning is carried out according to the upper limit speed, during the selection of the deceleration section, the current real-time speed is determined to be above the upper limit speed and below the upper limit speed according to the content of the relevant real-time speed when the vehicle is driven in real time according to different upper limit speeds, and then the corresponding deceleration section below the corresponding upper limit speed is selected for deceleration braking; the specific learning mode is as follows:
sequentially stepping on angles corresponding to a section of real stepping section Ci, namely sequentially selecting from i =1 to i = n until the upper limit speed can be reduced to zero at 0.5m before the stop mark;
learning the real stepping section Ci of the upper limit speed at the moment, marking the real stepping section as a stepping angle, and marking the stepping angle and the corresponding upper limit speed as deceleration information;
s207: then, all the upper limit speeds are learned to obtain all the deceleration information;
s3: after the brake learning is finished, automatically learning once according to the path length, the path speed limit and the stop mark, braking at the brake stage according to the deceleration information of the brake learning, automatically identifying the traffic light at the intersection of the traffic light, and driving according to the intersection rule;
s4: finishing learning in the unobstructed road section;
step three, performing conventional road section learning, mainly performing car following learning, and learning emergency braking, car following speed and lane change adjustment when any sudden blocking mark exists on the path length, wherein the adjustment mode specifically comprises the following steps:
SS1: arranging a plurality of cameras on the unmanned automobile to obtain a surround-view video of the vehicle;
SS2: first acquiring the surround-view video, the surround-view video covering the range that can be captured by the cameras arranged on the unmanned automobile;
SS3: then dynamically analysing the surround-view video, specifically as follows:
SS31: while the unmanned automobile is driving, acquiring a photo of the object directly ahead and selecting a feature point; a license plate may be selected and compared through its license-plate characteristics, specifically: a plurality of license plates of different types are stored in a pre-stored library, and if the comparison similarity exceeds a set threshold, a selected signal is generated;
SS32: then marking the photos of the feature point as feature photos, and marking the feature photo acquired for the first time as the initial feature photo;
SS33: after a time T1, T1 being a preset value, acquiring the feature photo again and marking it as a real-time feature photo, and acquiring X2 photos in succession to obtain the real-time feature photo group; comparing the real-time feature photo group with the initial feature photo as follows:
acquiring the pixel points occupied by the license plate number in the real-time feature photo group and marking them as the real-time pixel point group; acquiring the corresponding pixel points of the initial feature photo in the same way and marking them as the initial pixel points;
marking the real-time pixel point group as Pi, i = 1..X2, and marking the initial pixel points as Cp; subtracting Cp from Pi to obtain the real-to-initial difference value Zi, i = 1..X2; when Zi does not decrease continuously, judging the object to be a co-traveling object, the co-traveling object being primarily an automobile, so that misjudgement of the license plate is avoided;
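A minimal sketch of this comparison follows; the extraction of the license-plate pixel counts is assumed to have been done elsewhere, and the example numbers are illustrative.

```python
# SS33 comparison: Zi = Pi - Cp and the co-traveling judgment.
def real_to_initial_differences(cp: int, p: list[int]) -> list[int]:
    """Zi = Pi - Cp for every photo in the real-time feature photo group."""
    return [pi - cp for pi in p]

def is_co_traveling(z: list[int]) -> bool:
    """Judged a co-traveling object when Zi does not keep decreasing."""
    continuously_decreasing = all(z[k + 1] < z[k] for k in range(len(z) - 1))
    return not continuously_decreasing

cp = 1200                        # plate pixels in the initial feature photo (made up)
p = [1180, 1195, 1210, 1230]     # plate pixels across the real-time feature photo group
z = real_to_initial_differences(cp, p)
print(z, is_co_traveling(z))     # [-20, -5, 10, 30] True
```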
SS34: monitoring the co-traveling object ahead; when its real-to-initial difference value keeps increasing, subtracting Cp from Pi, dividing the result by Cp and marking the obtained value as the difference growth ratio; when the difference growth ratio exceeds X3, X3 being a preset value, automatically generating a deceleration signal and reducing the speed by forty percent; if the difference growth ratio then continues to increase, generating a braking signal, automatically setting the real stepping section Ci of the brake to Cn, and, once the speed drops below 10 km/h, switching to the comfortable real stepping section, which is obtained as follows:
placing a test object on a vehicle seat, the weight of the test object being set by the administrator; braking from 10 km/h with successively lower real stepping sections, starting from the highest one; when the speed of the unmanned vehicle reaches zero, obtaining the forward displacement of the test object, and when this displacement is less than or equal to X4, X4 for example being 10 cm, marking the real stepping section used at that moment as the comfortable stepping section; a sketch of the SS34 decision and of this comfort-section search follows;
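The sketch below is illustrative only; the controller class, the callback forward_shift_m and the fallback when no section satisfies X4 are assumptions of this sketch, while X3, the forty-percent reduction, the 10 km/h switch point and X4 come from the text above.

```python
# SS34 decision plus the comfortable-section search.
class FollowController:
    """Decelerate first when the growth ratio exceeds X3, brake if it keeps growing."""

    def __init__(self, x3: float, n_sections: int, comfort_section: int):
        self.x3 = x3
        self.n_sections = n_sections
        self.comfort_section = comfort_section
        self.decelerated = False
        self.prev_ratio = float("-inf")

    def step(self, pi: int, cp: int, speed_kmh: float):
        ratio = (pi - cp) / cp                 # difference growth ratio
        growing = ratio > self.prev_ratio
        self.prev_ratio = ratio
        if ratio <= self.x3:
            self.decelerated = False
            return ("none", None)
        if not self.decelerated:
            self.decelerated = True            # deceleration signal: cut speed by forty percent
            return ("decelerate", 0.6 * speed_kmh)
        if growing:
            # Braking signal: full brake Cn, or the comfortable section below 10 km/h.
            return ("brake", self.n_sections if speed_kmh >= 10.0 else self.comfort_section)
        return ("hold", None)

def find_comfort_section(n_sections: int, forward_shift_m, x4_m: float = 0.10) -> int:
    """Brake from 10 km/h with successively lower sections, starting from Cn, and keep
    the first section whose measured test-object displacement is at most X4."""
    for i in range(n_sections, 0, -1):
        if forward_shift_m(i) <= x4_m:         # displacement measured in the seat test
            return i
    return 1   # gentlest section if even it exceeds X4 (assumption of this sketch)
```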
SS4: synchronously performing lane-change analysis: according to the path in the road section model, automatically acquiring the road sections into which the vehicle can currently steer before the next required turn and marking them as allowed road sections;
SS5: when the variation range of the difference growth ratio ahead on the current road section is lower than a preset range set by the administrator, and the real-time speed of the unmanned automobile is lower than seventy percent of the upper-limit speed, automatically generating an observation signal;
SS6: when an observation signal is generated, automatically acquiring the vehicles ahead on all allowed road sections and, using the pixel-to-distance conversion, marking the distance between the nearest obstacle ahead and the current unmanned automobile as the spacing distance; when the spacing distance exceeds the threshold distance, taking the allowed road section with the largest spacing distance, and, if that section has no co-traveling vehicle and the distance between the rear vehicle on it and the current unmanned automobile exceeds the second threshold distance, marking it as the target lane-change section;
the threshold distance and the second threshold distance being values preset by the administrator;
SS7: changing the unmanned automobile to the target lane-change section.
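A minimal sketch of the SS5 to SS7 decision follows; the AllowedSection fields and the helper names are illustrative stand-ins for the quantities named above (spacing distance from the pixel conversion, rear-vehicle distance, administrator thresholds).

```python
# SS5-SS7: observation signal and choice of the target lane-change section.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AllowedSection:
    name: str
    spacing_m: float          # distance to the nearest obstacle ahead in that lane
    has_co_traveling: bool    # a vehicle already travelling in that lane
    rear_gap_m: float         # distance from the rear vehicle in that lane

def observation_signal(ratio_variation: float, preset_range: float,
                       speed_kmh: float, upper_limit_kmh: float) -> bool:
    """SS5: observe when the growth-ratio variation is small and the car is slow."""
    return ratio_variation < preset_range and speed_kmh < 0.7 * upper_limit_kmh

def pick_lane_change(sections: list[AllowedSection], threshold_m: float,
                     second_threshold_m: float) -> Optional[AllowedSection]:
    """SS6: choose the allowed section with the largest spacing distance, if safe."""
    candidates = [s for s in sections if s.spacing_m > threshold_m]
    if not candidates:
        return None
    best = max(candidates, key=lambda s: s.spacing_m)
    if not best.has_co_traveling and best.rear_gap_m > second_threshold_m:
        return best               # SS7: change to this target lane-change section
    return None
```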
And step four, driving the unmanned automobile on the set road section according to the learned driving behaviour.
Although one embodiment of the present invention has been described in detail, the description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the invention. All equivalent changes and modifications made within the scope of the present invention shall fall within the scope of the present invention.

Claims (6)

1. A road learning method of an unmanned vehicle is characterized by comprising the following steps:
step one, dividing road learning into two stages, namely unobstructed road section learning and conventional road section learning;
step two, learning the unobstructed road section, specifically as follows:
S1: acquiring a map of the corresponding unobstructed road section and constructing a road section model, wherein the road section model comprises the path route, the stop signs and the path speed limit;
S2: then carrying out brake learning according to the stop signs: marking the angle at which the brake pedal is pressed to the floor as the full-brake angle, and learning n real stepping sections Ci, i = 1..n; after determining the deceleration section according to each upper-limit speed, learning the stepping-section angle required to decelerate the unmanned automobile from that upper-limit speed to zero within the deceleration section, thereby obtaining a plurality of stepping-section angles and their corresponding upper-limit speeds, which are marked as the deceleration information;
S3: after brake learning is finished, automatically learning the route once according to the path length, the path speed limit and the stop signs, braking in the braking stage according to the deceleration information obtained from brake learning, automatically recognizing the traffic lights at signalized intersections, and driving according to the intersection rules;
s4: finishing learning in the unobstructed road section;
step three, performing conventional road section learning, mainly car-following learning: locking a license plate and marking it as a feature point, determining an initial feature photo and a real-time feature photo group, and then, according to the distance between the co-traveling object and the unmanned vehicle, using the real-time pixel point group Pi, i = 1..X2, and the initial pixel points to determine the real-to-initial difference value Zi, i = 1..X2; when Zi does not decrease continuously, judging the object to be a co-traveling object, and then determining how to decelerate from the change of the real-to-initial difference value Zi of that co-traveling object;
determining the allowed road sections according to the route, and generating an observation signal according to the relation between the real-time speed of the unmanned automobile and the upper-limit speed; when an observation signal is generated, determining the target lane-change section from the relation between each allowed road section, the spacing distance, and the distance between the rear vehicle and the current unmanned automobile, and carrying out a targeted lane change;
and step four, driving the unmanned automobile on the set road section according to the learned driving behaviour.
2. The road learning method of an unmanned vehicle as claimed in claim 1, wherein the unobstructed road section learning in step one is carried out when the specified road section has no or few people, generally when the pedestrian flow is lower than X1, where X1 is a preset value;
the conventional road section learning refers to learning carried out on the road section under normal driving conditions.
3. The road learning method of an unmanned vehicle as claimed in claim 1, wherein the stop signs in step S1 refer to the various types of traffic lights and other blocking objects, and any marker that causes the vehicle to decelerate to a stop is uniformly marked as a stop sign.
4. The road learning method of an unmanned vehicle as claimed in claim 1, wherein the brake learning in step S2 specifically comprises:
S201: acquiring the maximum speed of the road section and marking it as the upper-limit speed, and then setting a lower-limit speed preset by the administrator, the lower-limit speed usually being 25 km/h or 30 km/h;
S202: subtracting the lower-limit speed from the upper-limit speed to obtain the interval speed value;
S203: marking the pedal angle pressed during deceleration braking as the deceleration angle, and marking the angle at which the brake is pressed to the floor as the full-brake angle;
S204: taking every X1 degrees of the deceleration angle as one real stepping increment, where X1 is a preset value, usually 5 degrees; dividing the full-brake angle by the real stepping increment and rounding down, marking the resulting value as n, and obtaining n real stepping sections Ci, i = 1..n, wherein the angle of the last real stepping section Cn equals the remainder of the full-brake angle divided by the real stepping increment plus one real stepping increment;
S205: then acquiring the stop target and setting a deceleration section, the deceleration section being a specified distance ahead of the stop target and generally determined according to the upper-limit speed;
S206: within the corresponding deceleration section, learning the real stepping sections Ci in real time for each upper-limit speed, specifically as follows:
applying the angles corresponding to the real stepping sections Ci in turn, i.e. selecting sequentially from i = 1 to i = n, until the vehicle can be decelerated from the upper-limit speed to zero 0.5 m before the stop sign;
learning the real stepping section Ci for that upper-limit speed, marking it as the stepping angle, and marking the stepping angle together with its corresponding upper-limit speed as deceleration information;
S207: then learning all the upper-limit speeds in the same way to obtain all the deceleration information.
5. The road learning method of an unmanned vehicle as claimed in claim 4, wherein the deceleration section in step S205 is determined as follows:
when the upper-limit speed is 30 km/h, the deceleration section is L1 m, where L1 is a preset value determined with reference to the braking distance and may be 10 m or another value;
when the upper-limit speed is 40 km/h, the deceleration section is L1 × 1.2 m;
when the upper-limit speed is 60 km/h, the deceleration section is L1 × 2 m;
when the upper-limit speed is 80 km/h, the deceleration section is L1 × 3 m.
6. The road learning method of an unmanned vehicle as claimed in claim 1, wherein the car-following learning in step three is specifically as follows:
SS1: arranging a plurality of cameras on the unmanned automobile to obtain a surround-view video of the vehicle;
SS2: first acquiring the surround-view video, the surround-view video covering the range that can be captured by the cameras arranged on the unmanned automobile;
SS3: then dynamically analysing the surround-view video, specifically as follows:
SS31: while the unmanned automobile is driving, acquiring a photo of the object directly ahead and selecting a feature point; a license plate may be selected and compared through its license-plate characteristics, specifically: a plurality of license plates of different types are stored in a pre-stored library, and if the comparison similarity exceeds a set threshold, a selected signal is generated;
SS32: then marking the photos of the feature point as feature photos, and marking the feature photo acquired for the first time as the initial feature photo;
SS33: after a time T1, T1 being a preset value, acquiring the feature photo again and marking it as a real-time feature photo, and acquiring X2 photos in succession to obtain the real-time feature photo group; comparing the real-time feature photo group with the initial feature photo as follows:
acquiring the pixel points occupied by the license plate number in the real-time feature photo group and marking them as the real-time pixel point group; acquiring the corresponding pixel points of the initial feature photo in the same way and marking them as the initial pixel points;
marking the real-time pixel point group as Pi, i = 1..X2, and marking the initial pixel points as Cp; subtracting Cp from Pi to obtain the real-to-initial difference value Zi, i = 1..X2; when Zi does not decrease continuously, judging the object to be a co-traveling object, the co-traveling object being primarily an automobile;
SS34: monitoring the co-traveling object ahead; when its real-to-initial difference value keeps increasing, subtracting Cp from Pi, dividing the result by Cp and marking the obtained value as the difference growth ratio; when the difference growth ratio exceeds X3, X3 being a preset value, automatically generating a deceleration signal and reducing the speed by forty percent; if the difference growth ratio then continues to increase, generating a braking signal, automatically setting the real stepping section Ci of the brake to Cn, and, once the speed drops below 10 km/h, switching to the comfortable real stepping section, which is obtained as follows:
placing a test object on a vehicle seat, the weight of the test object being set by the administrator; braking from 10 km/h with successively lower real stepping sections, starting from the highest one; when the speed of the unmanned vehicle reaches zero, obtaining the forward displacement of the test object, and when this displacement is less than or equal to X4, X4 for example being 10 cm, marking the real stepping section used at that moment as the comfortable stepping section;
SS4: synchronously performing lane-change analysis: according to the path in the road section model, automatically acquiring the road sections into which the vehicle can currently steer before the next required turn and marking them as allowed road sections;
SS5: when the variation range of the difference growth ratio ahead on the current road section is lower than a preset range set by the administrator, and the real-time speed of the unmanned automobile is lower than seventy percent of the upper-limit speed, automatically generating an observation signal;
SS6: when an observation signal is generated, automatically acquiring the vehicles ahead on all allowed road sections and, using the pixel-to-distance conversion, marking the distance between the nearest obstacle ahead and the current unmanned automobile as the spacing distance; when the spacing distance exceeds the threshold distance, taking the allowed road section with the largest spacing distance, and, if that section has no co-traveling vehicle and the distance between the rear vehicle on it and the current unmanned automobile exceeds the second threshold distance, marking it as the target lane-change section;
the threshold distance and the second threshold distance being values preset by the administrator;
SS7: changing the unmanned automobile to the target lane-change section.
CN202210703298.7A 2022-06-21 2022-06-21 Road learning method for unmanned automobile Active CN115097827B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210703298.7A CN115097827B (en) 2022-06-21 2022-06-21 Road learning method for unmanned automobile

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210703298.7A CN115097827B (en) 2022-06-21 2022-06-21 Road learning method for unmanned automobile

Publications (2)

Publication Number Publication Date
CN115097827A true CN115097827A (en) 2022-09-23
CN115097827B CN115097827B (en) 2023-02-10

Family

ID=83292643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210703298.7A Active CN115097827B (en) 2022-06-21 2022-06-21 Road learning method for unmanned automobile

Country Status (1)

Country Link
CN (1) CN115097827B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110740911A (en) * 2017-06-26 2020-01-31 日产自动车株式会社 Vehicle driving support method and driving support device
CN110914127A (en) * 2017-07-27 2020-03-24 日产自动车株式会社 Driving assistance method and driving assistance device
CN108693892A (en) * 2018-04-20 2018-10-23 深圳臻迪信息技术有限公司 A kind of tracking, electronic device
CN109118760A (en) * 2018-08-16 2019-01-01 山东省科学院自动化研究所 Automatic driving vehicle traffic sign vision-based detection and response integrated test system and method
CN109991976A (en) * 2019-03-01 2019-07-09 江苏理工学院 A method of the unmanned vehicle based on standard particle group's algorithm evades dynamic vehicle
CN110941275A (en) * 2019-12-06 2020-03-31 格物汽车科技(苏州)有限公司 Data processing method for automatic driving of vehicle
KR20220027327A (en) * 2020-08-26 2022-03-08 현대모비스 주식회사 Method And Apparatus for Controlling Terrain Mode Using Road Condition Judgment Model Based on Deep Learning
CN112162555A (en) * 2020-09-23 2021-01-01 燕山大学 Vehicle control method based on reinforcement learning control strategy in hybrid vehicle fleet
CN112651374A (en) * 2021-01-04 2021-04-13 东风汽车股份有限公司 Future trajectory prediction method based on social information and automatic driving system

Also Published As

Publication number Publication date
CN115097827B (en) 2023-02-10

Similar Documents

Publication Publication Date Title
CN110155046B (en) Automatic emergency braking hierarchical control method and system
CN111383474B (en) Decision making system and method for automatically driving vehicle
CN109727469B (en) Comprehensive risk degree evaluation method for automatically driven vehicles under multiple lanes
US9140565B2 (en) Travel plan generation method and travel plan generation device
JP4329088B2 (en) Vehicle control device
CN110488802A (en) A kind of automatic driving vehicle dynamic behaviour decision-making technique netted under connection environment
CN111845754B (en) Decision prediction method of automatic driving vehicle based on edge calculation and crowd-sourcing algorithm
CN112562326A (en) Vehicle speed guiding method, server and readable storage medium
CN110077398B (en) Risk handling method for intelligent driving
WO2019220717A1 (en) Vehicle control device
CN110444015B (en) Intelligent network-connected automobile speed decision method based on no-signal intersection partition
CN113470407B (en) Vehicle speed guiding method for multi-intersection passing, server and readable storage medium
CN113428180A (en) Method, system and terminal for controlling single-lane running speed of unmanned vehicle
CN114506342B (en) Automatic driving lane change decision method, system and vehicle
CN115662131A (en) Multi-lane cooperative lane changing method for road accident section in networking environment
JP3985595B2 (en) Driving control device for automobile
CN111717213A (en) Cruise control method and device for automatic driving vehicle
EP4299400A1 (en) Method and system for controlling an autonomous vehicle and autonomous vehicle
CN115097827B (en) Road learning method for unmanned automobile
CN113870581B (en) Control method for driving unmanned vehicle into road
CN116142231A (en) Multi-factor-considered longitudinal control method and system for automatic driving vehicle
CN116013078A (en) Dynamic control method for merging main line of ramp vehicles in rapid transit merging area
CN112373482B (en) Driving habit modeling method based on driving simulator
CN109712425B (en) Method and device for determining bus position based on sparse positioning point
CN114179804B (en) Vehicle braking energy recovery method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant