CN111497847B - Vehicle control method and device - Google Patents
- Publication number
- CN111497847B (application CN202010329249.2A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- labeling
- result
- target
- control strategy
- Prior art date
- Legal status
- Active
Classifications
- B — Performing operations; transporting
- B60 — Vehicles in general
- B60W — Conjoint control of vehicle sub-units of different type or different function; control systems specially adapted for hybrid vehicles; road vehicle drive control systems for purposes not related to the control of a particular sub-unit
- B60W30/00 — Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/18 — Propelling the vehicle
- B60W40/00 — Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
- B60W40/10 — Estimation or calculation of non-directly measurable driving parameters related to vehicle motion
Abstract
The invention discloses a vehicle control method and device. The method includes: collecting driving state parameters of a target vehicle, where the target vehicle travels along a preset planned route that includes a longitudinal plan and a lateral plan; judging, according to the driving state parameters and based on a target control strategy, whether to change the lateral plan of the target vehicle, where the target control strategy is selected from a plurality of preset candidate control strategies; and continuing to control the target vehicle according to the judgment result. The invention solves the technical problem of the low degree of automation of unmanned vehicles in the prior art.
Description
Technical Field
The invention relates to the field of vehicle control, in particular to a vehicle control method and device.
Background
An unmanned vehicle is an intelligent vehicle that senses the road environment through an on-board sensing system, automatically plans its route, and controls the vehicle to reach a preset destination. However, existing unmanned vehicles can generally only follow the vehicle ahead and have difficulty performing overtaking control, so their degree of automation remains low.
No effective solution has yet been proposed for the problem of the low degree of automation of unmanned vehicles in the prior art.
Disclosure of Invention
Embodiments of the invention provide a vehicle control method and device to solve at least the technical problem of the low degree of automation of unmanned vehicles in the prior art.
According to an aspect of an embodiment of the invention, a vehicle control method is provided, including: acquiring driving state parameters of a target vehicle, where the target vehicle travels along a preset planned route that includes a longitudinal plan and a lateral plan; judging, according to the driving state parameters and based on a target control strategy, whether to change the lateral plan of the target vehicle, where the target control strategy is selected from a plurality of preset candidate control strategies; and continuing to control the target vehicle according to the judgment result.
Further, the judgment result includes: no overtaking, left overtaking and right overtaking.
Further, the driving state parameters include: the distance between the target vehicle and other vehicles, the distance between the target vehicle and surrounding objects, and the time needed to reach the destination.
Further, the method includes obtaining the target control strategy, which includes: acquiring sample scene information, where the sample scene information includes a sample vehicle and its driving state parameters; judging, with each preset candidate control strategy and according to the state parameters of the sample vehicle, whether the lateral plan of the sample vehicle should be changed, to obtain a judgment result for that candidate control strategy; labeling whether the sample vehicle in the sample scene information actually changes its lateral plan, to obtain a labeling result; and selecting the target control strategy from the candidate control strategies according to the labeling result and the judgment results.
Further, selecting the target control strategy from the candidate control strategies according to the labeling result and the judgment results includes: determining, for each candidate control strategy, the matching degree between its judgment result and the labeling result; and determining the candidate control strategy with the highest matching degree as the target control strategy.
Further, the sample scene information includes video of the sample vehicle driving, and labeling whether the sample vehicle in the sample scene information changes its lateral plan includes: acquiring initial labeling results, where a plurality of annotators label each frame of the video; and aggregating the initial labeling results to obtain a final labeling result.
Further, aggregating the initial labeling results to obtain the final labeling result includes: taking a fixed number of consecutive image frames of the video as a unit segment, and determining the labeling result of each unit segment from the preliminary labeling results of the frames it contains; assigning each unit segment a color according to its labeling result and arranging the segments in time order, to obtain a color sequence for each annotator; and determining the final labeling result from the color sequences of the plurality of annotators.
Further, determining the final labeling result from the color sequences of the plurality of annotators includes: determining the final labeling result of each unit segment, where, across the color sequences, the labeling result represented by the majority color at that unit segment's position is the final labeling result of the segment; and connecting the unit segments in time order to obtain the final labeling result.
Further, before determining the final labeling result of each unit segment, the method includes: counting the offset unit segments in each annotator's color sequence, where an offset unit segment is one whose color differs from the majority color of the corresponding segment across the other color sequences; if the ratio of the number of offset unit segments to the total number of unit segments in the color sequence exceeds a preset value, determining that the annotator is an abnormal annotator; and removing the abnormal annotator's color sequence before determining the final labeling result of each unit segment.
According to an aspect of an embodiment of the invention, a vehicle control device is provided, including: an acquisition module configured to collect driving state parameters of a target vehicle, where the target vehicle travels along a preset planned route that includes a longitudinal plan and a lateral plan; a judging module configured to judge, according to the driving state parameters and based on a target control strategy, whether to change the lateral plan of the target vehicle, where the target control strategy is selected from a plurality of preset candidate control strategies; and a control module configured to continue controlling the target vehicle according to the judgment result.
According to an aspect of an embodiment of the present invention, there is provided a storage medium including a stored program, wherein an apparatus in which the storage medium is located is controlled to execute the above-described control method of a vehicle when the program is executed.
According to an aspect of an embodiment of the present invention, there is provided a processor for executing a program, wherein the program executes the control method of the vehicle described above.
In an embodiment of the invention, driving state parameters of a target vehicle are collected, where the target vehicle travels along a preset planned route that includes a longitudinal plan and a lateral plan; whether to change the lateral plan of the target vehicle is judged according to the driving state parameters and based on a target control strategy selected from a plurality of preset candidate control strategies; and the target vehicle continues to be controlled according to the judgment result. By judging whether the target vehicle should change its lateral plan with a target control strategy selected from the candidate strategies, the scheme determines whether to perform overtaking control, and thereby solves the technical problem of the low degree of automation of unmanned vehicles in the prior art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a method of controlling a vehicle according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a color sequence according to an embodiment of the present application; and
fig. 3 is a schematic diagram of a control apparatus of a vehicle according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the invention, an embodiment of a vehicle control method is provided. It is noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system, such as by a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps may be performed in an order different from that presented here.
Fig. 1 is a flowchart of a control method of a vehicle according to an embodiment of the present application, as shown in fig. 1, the method including the steps of:
Step S102, collecting the driving state parameters of a target vehicle, where the target vehicle travels along a preset planned route that includes a longitudinal plan and a lateral plan.
Specifically, the target vehicle is the vehicle to be controlled; it may be an unmanned vehicle, or a vehicle that is automatically assisted on the basis of driver control.
The preset planned route represents a driving route set for the target vehicle in advance; it may be generated automatically once the departure point and destination are set. The planned route includes a longitudinal plan and a lateral plan. The longitudinal plan represents the planned route along the driving direction of the vehicle, e.g. driving forward or turning; the lateral plan represents the planned route in the direction perpendicular to the driving direction, e.g. changing lanes or overtaking.
Before the target vehicle sets off, an initial planned route can be determined from the departure point and destination in combination with a local map, where the longitudinal plan drives the vehicle toward the destination and the lateral plan controls any necessary lane changes.
However, other vehicles are also present on the road, and their driving states are difficult to predict. Therefore, while the target vehicle travels to the destination along the planned route, the route needs to be adjusted according to the driving states of the other vehicles and any sudden road conditions, so that the target vehicle can respond to the actual situation in time.
Step S104, judging, according to the driving state parameters and based on a target control strategy, whether to change the lateral plan of the target vehicle, where the target control strategy is selected from a plurality of preset candidate control strategies.
Specifically, the judgment result may include: no overtaking, left overtaking and right overtaking.
In an alternative embodiment, the driving state parameters include: the distance between the target vehicle and other vehicles, the distance between the target vehicle and surrounding objects, and the time to reach the destination. The inter-vehicle distance between the target vehicle and another vehicle is a distance between the target vehicle and another vehicle closest to the target vehicle in a predetermined direction, and examples thereof include a distance between the target vehicle and a preceding vehicle, a distance between the target vehicle and a left vehicle, and a distance between the target vehicle and a right vehicle.
In the above example, a plurality of candidate control strategies may be set in advance, and then an optimal target control strategy is selected from them to judge whether to change the lateral plan of the target vehicle. One candidate control strategy may be: if the distance between the target vehicle and the vehicle on its left is larger than a first preset value, the distance to the vehicle ahead is larger than a second preset value, and the remaining time to reach the destination is less than a preset time, the judgment result is to overtake on the left. Another candidate control strategy may be: if the distance to the vehicle on the right is larger than the first preset value, the distance to the vehicle ahead is larger than the second preset value, and the remaining time is less than the preset time, the judgment result is to overtake on the right. A further candidate control strategy may be: if the remaining time to reach the destination is greater than the preset time, the judgment result is not to overtake.
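The rule-based candidate strategies above can be combined into a minimal sketch. This is an illustration only, not the patent's implementation; the threshold values, field names, and result strings are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical thresholds standing in for the unspecified preset values.
MIN_SIDE_GAP = 3.0    # "first preset value" (metres), assumed
MIN_FRONT_GAP = 10.0  # "second preset value" (metres), assumed
TIME_BUDGET = 60.0    # "preset time" remaining to destination (seconds), assumed

@dataclass
class DrivingState:
    front_gap: float       # distance to the vehicle ahead
    left_gap: float        # distance to the nearest vehicle on the left
    right_gap: float       # distance to the nearest vehicle on the right
    time_remaining: float  # remaining time to reach the destination

def decide(state: DrivingState) -> str:
    """Combine the three candidate rules into one judgment."""
    if state.time_remaining > TIME_BUDGET:
        return "no_overtake"  # enough time left: no need to overtake
    if state.left_gap > MIN_SIDE_GAP and state.front_gap > MIN_FRONT_GAP:
        return "overtake_left"
    if state.right_gap > MIN_SIDE_GAP and state.front_gap > MIN_FRONT_GAP:
        return "overtake_right"
    return "no_overtake"  # no safe gap: stay in lane
```

In practice each rule would be a separate candidate strategy with its own preset values; merging them here only keeps the sketch compact.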
The candidate control strategies may also be a plurality of neural network models with different network parameters, whose input is the driving state parameters and whose output is the judgment result.
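As an illustrative stand-in for such a learned candidate strategy, consider a single linear layer over the driving-state features followed by an argmax; the patent's candidates are neural network models with different trained parameters, whereas the weights below are invented purely to show the input/output contract.

```python
CLASSES = ("no_overtake", "overtake_left", "overtake_right")

# Feature order: (front_gap, left_gap, right_gap, time_remaining).
# One weight row and one bias per class; all values are hypothetical.
WEIGHTS = [
    [0.0, 0.0, 0.0, 0.05],   # no_overtake: favoured when much time remains
    [0.1, 0.3, 0.0, -0.02],  # overtake_left: favoured by large left/front gaps
    [0.1, 0.0, 0.3, -0.02],  # overtake_right: favoured by large right/front gaps
]
BIAS = [0.0, -1.0, -1.0]

def nn_judgment(features):
    """Score each class and return the judgment with the highest score."""
    scores = [b + sum(w * x for w, x in zip(row, features))
              for row, b in zip(WEIGHTS, BIAS)]
    return CLASSES[scores.index(max(scores))]
```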
And step S106, continuing to control the target vehicle to run according to the judgment result.
After the judgment result is obtained, a corresponding instruction can be sent to the target vehicle according to the judgment result, so that the target vehicle can adjust the planned route to run.
As can be seen from the above, in the above embodiments of the present application, driving state parameters of the target vehicle are collected, where the target vehicle travels along a preset planned route that includes a longitudinal plan and a lateral plan; whether to change the lateral plan of the target vehicle is judged according to the driving state parameters and based on a target control strategy selected from a plurality of preset candidate control strategies; and the target vehicle continues to be controlled according to the judgment result. By judging whether the target vehicle should change its lateral plan with the selected target control strategy, the scheme determines whether to perform overtaking control, solving the technical problem of the low degree of automation of unmanned vehicles in the prior art.
As an alternative embodiment, the method further includes obtaining the target control strategy, which includes: obtaining sample scene information, where the sample scene information includes a sample vehicle and its driving state parameters; judging, with each preset candidate control strategy and according to the state parameters of the sample vehicle, whether the lateral plan of the sample vehicle should be changed, to obtain a judgment result for that candidate control strategy; labeling whether the sample vehicle in the sample scene information actually changes its lateral plan, to obtain a labeling result; and selecting the target control strategy from the candidate control strategies according to the labeling result and the judgment results.
Specifically, the sample scene information includes at least one sample vehicle and may include point cloud data detected by a radar device arranged on the sample vehicle. By drawing bounding boxes around the vehicles and objects in the point cloud data, the driving state parameters of the sample vehicle can be obtained.
From the driving state parameters of the sample vehicle, a plurality of judgment results can be obtained through the plurality of candidate control strategies. Since the driving state parameters differ as the sample vehicle travels to different positions, the judgment result at a given moment is obtained from the driving state parameters at that moment.
Whether the sample vehicle in the scene information changes its lateral plan can be labeled by annotators. Since different people may exhibit different driving behaviors even in the same situation, a plurality of annotators can be selected to label whether the sample vehicle changes its lateral plan. The labeling result may include: no change, left change, and right change, corresponding respectively to: no overtaking, left overtaking, and right overtaking.
After the labeling results of the plurality of annotators are obtained, they can be aggregated to obtain the most appropriate judgment, and the target control strategy is then selected from the plurality of candidate control strategies based on that aggregated result.
As an alternative embodiment, selecting the target control strategy from the candidate control strategies according to the labeling result and the judgment results includes: determining, for each candidate control strategy, the matching degree between its judgment result and the labeling result; and determining the candidate control strategy with the highest matching degree as the target control strategy.
The matching degree can be determined according to the judgment result of the candidate control strategy on the sample vehicle and the labeling result of the labeling operator on the sample vehicle at the same moment.
In an optional embodiment, taking 150 frames of images recorded in the sample scene information as an example, one frame may be extracted every 10 frames, for 15 frames in total, and the judgment results output by each candidate control strategy for the sample vehicle in those 15 frames are matched against the corresponding annotation results to determine the optimal strategy. For example: if the judgment results of candidate control strategy A agree with the annotation results on 14 of the 15 frames, while those of candidate control strategy B agree on only 12 frames, then strategy A has the higher matching degree; and if no other candidate control strategy has a higher matching degree than strategy A, strategy A can be determined as the target control strategy.
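The matching-degree selection can be sketched as follows; the function names and the strategy-to-judgment data layout are assumptions made for illustration.

```python
def matching_degree(judgments, labels):
    """Fraction of sampled frames on which a strategy agrees with the label."""
    hits = sum(j == l for j, l in zip(judgments, labels))
    return hits / len(labels)

def select_target_strategy(candidates, labels):
    """candidates: {strategy name: per-frame judgment list}; pick the best match."""
    return max(candidates, key=lambda name: matching_degree(candidates[name], labels))
```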
As an optional embodiment, the sample scene information includes video of the sample vehicle driving, and labeling whether the sample vehicle in the sample scene information changes its lateral plan includes: acquiring initial labeling results, where a plurality of annotators label, for each frame of the video, whether the sample vehicle changes its lateral plan; and aggregating the initial labeling results to obtain a final labeling result.
In the above scheme, the sample scene information provided to the annotator may be video information of the running of the sample vehicle, and the annotator may annotate each frame in the video information to obtain an initial annotation result.
Aggregating the initial annotation results means taking, for each frame, the annotation result given most often as that frame's final result. In an optional embodiment, taking a 150-frame video as an example, the most frequent initial annotation is determined for each frame (for example, for the first frame, if 8 of 10 annotators labeled it as no change and two labeled it as left change, the final result of that frame is no change), and arranging the per-frame results in the time order of the 150 frames yields the final annotation result of the sample scene information.
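The per-frame majority vote described above can be sketched as follows; the names and data layout are illustrative.

```python
from collections import Counter

def majority(votes):
    """Most frequent vote (the frame's final label)."""
    return Counter(votes).most_common(1)[0][0]

def final_labels(per_annotator):
    """per_annotator: one per-frame label list per annotator.
    Returns the majority label for each frame, in time order."""
    return [majority(frame_votes) for frame_votes in zip(*per_annotator)]
```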
The aggregation captures the commonality among annotators so as to determine, for each frame, the most appropriate control action for the sample vehicle, which serves as the final annotation result. Because different people behave differently when driving, the scheme collects initial annotations from multiple annotators and aggregates them to obtain the most appropriate control action for the sample vehicle.
As an optional embodiment, aggregating the initial labeling results to obtain the final labeling result includes: taking a fixed number of consecutive image frames of the video as a unit segment, and determining the labeling result of each unit segment from the preliminary labeling results of the frames it contains; assigning each unit segment a color according to its labeling result and arranging the segments in time order, to obtain a color sequence for each annotator; and determining the final labeling result from the color sequences of the plurality of annotators.
Since a single frame of video occupies too little time, and adjacent frames differ only slightly, the above scheme uses a unit segment composed of multiple frames as the statistical unit; still taking 150 frames of video as an example, each unit segment can be set to include 10 frames. The labeling result of a unit segment is determined from the labeling result of each frame in the segment, and the labeling result that appears most often within the segment can be taken as the segment's labeling result.
For ease of visual display, the unit segments can be shown in different colors according to their labeling results, yielding the color sequence of each annotator. Fig. 2 is a schematic diagram of a color sequence according to an embodiment of the present application; still taking 150 frames as an example, every 10 frames form a unit segment, where blank, left-hatched, and right-hatched fills represent different colors: blank means no change, left hatching means a left change, and right hatching means a right change. Arranging the 15 unit segments in time order gives one color sequence; the labeling result of each annotator yields a color sequence in the same way.
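Building unit segments and one annotator's color sequence can be sketched as follows. The segment length of 10 frames follows the 150-frame example above; the label-to-color mapping mirrors the fills in Fig. 2 but is otherwise an illustrative assumption.

```python
from collections import Counter

SEGMENT_LEN = 10  # frames per unit segment, as in the 150-frame example
COLOR = {"no_change": "blank",
         "left_change": "left_hatch",
         "right_change": "right_hatch"}

def segment_labels(frame_labels, seg_len=SEGMENT_LEN):
    """Majority label of each consecutive block of seg_len frames."""
    blocks = [frame_labels[i:i + seg_len]
              for i in range(0, len(frame_labels), seg_len)]
    return [Counter(block).most_common(1)[0][0] for block in blocks]

def color_sequence(frame_labels, seg_len=SEGMENT_LEN):
    """One annotator's color sequence, in time order."""
    return [COLOR[label] for label in segment_labels(frame_labels, seg_len)]
```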
As an alternative embodiment, determining the final labeling result from the color sequences of the plurality of annotators includes: determining the final labeling result of each unit segment, where, across the color sequences, the labeling result represented by the majority color at that unit segment's position is the final labeling result of the segment; and connecting the unit segments in time order to obtain the final labeling result.
In the above scheme, each annotator generates a corresponding color sequence. Because different people drive differently, the colors at the same position in different color sequences are not necessarily the same, so the annotations of the multiple annotators need to be aggregated over the color sequences to obtain the most appropriate driving behavior for each unit segment.
In an alternative embodiment, for each unit segment, the labeling result that occurs most often can be selected as the final labeling result of that segment. For example, for the first unit segment, if 8 of the 10 color sequences show the first color and 2 show the second color, the labeling result corresponding to the first color is selected as the final labeling result of the first unit segment.
After the final labeling result of each unit segment is obtained, the unit segments are connected in time order to obtain the final labeling result corresponding to the sample scene information.
As an alternative embodiment, before determining the final labeling result of each unit segment, the method further comprises: obtaining the number of offset unit segments in the color sequence of each annotator, where an offset unit segment is a unit segment whose color differs from the color occurring most often at the same position in the other color sequences; if the ratio of the number of offset unit segments to the total number of unit segments in the color sequence is greater than a preset value, determining that the annotator is an abnormal annotator; and removing the color sequence of the abnormal annotator from the plurality of color sequences before determining the final labeling result of each unit segment.
It should be noted that some annotators may have little or no driving experience, which leads to larger deviations in their labeling results. Since the driving experience of each annotator is not necessarily rich, inaccurate labels from inexperienced annotators would distort the aggregated result, and such annotators' labeling data therefore need to be removed.
For example, still taking the 150-frame video as an example, every 10 frames constitute one unit segment, so each annotator's color sequence contains 15 unit segments. With 10 annotators, if 8 of the 10 first unit segments in the 10 color sequences are white, then any first unit segment that is not white is an offset segment. The number of offset segments in each color sequence is counted in this way, the ratio of that number to 15 is computed, and the corresponding color sequence is rejected if the ratio exceeds the preset value.
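The outlier-rejection step in this example might be sketched as below. The function name and the 0.5 threshold stand in for the patent's unspecified "preset value"; both are assumptions.

```python
from collections import Counter

def reject_outliers(color_sequences, threshold=0.5):
    """Drop annotators whose ratio of offset segments (segments whose color
    differs from the majority color at that position) exceeds the threshold."""
    n = len(color_sequences[0])
    # majority color at each unit segment position, across all annotators
    majority = [Counter(seq[i] for seq in color_sequences).most_common(1)[0][0]
                for i in range(n)]
    kept = []
    for seq in color_sequences:
        offsets = sum(1 for i in range(n) if seq[i] != majority[i])
        if offsets / n <= threshold:
            kept.append(seq)
    return kept

# 9 annotators agree on all 15 segments; 1 disagrees everywhere and is dropped
kept = reject_outliers([["white"] * 15] * 9 + [["red"] * 15])
```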
Example 2
According to an embodiment of the present invention, there is provided a control apparatus of a vehicle. Fig. 3 is a schematic diagram of the control apparatus of a vehicle according to an embodiment of the present application; as shown in Fig. 3, the apparatus includes:
the collecting module 30 is configured to collect driving state parameters of a target vehicle, where the target vehicle drives according to a preset planned route, and the planned route includes: longitudinal planning and transverse planning;
a judging module 32, configured to judge whether to change the transverse plan of the target vehicle based on a target control strategy according to the driving state parameter, where the target control strategy is selected from a plurality of preset candidate control strategies;
and the control module 34 is used for continuously controlling the target vehicle to run according to the judgment result.
As an alternative embodiment, the determination result includes: no overtaking, left overtaking and right overtaking.
As an alternative embodiment, the driving state parameters include: the distance between the target vehicle and other vehicles, the distance between the target vehicle and surrounding objects, and the time for the target vehicle to reach the destination.
As an alternative embodiment, the apparatus further comprises: a first obtaining module, configured to obtain the target control strategy, where the first obtaining module includes: an acquisition submodule, configured to acquire sample scene information, where the sample scene information includes a sample vehicle and driving state parameters of the sample vehicle; a judging submodule, configured to judge, using each preset candidate control strategy, whether to change the transverse plan of the sample vehicle according to the driving state parameters of the sample vehicle, so as to obtain the judgment result corresponding to that candidate control strategy; a labeling submodule, configured to label whether the sample vehicle in the sample scene information changes the transverse plan, so as to obtain a labeling result; and a selection submodule, configured to select the target control strategy from the candidate control strategies according to the labeling result and the judgment results.
As an alternative embodiment, the selection submodule comprises: a first determining unit, configured to determine, for each candidate control strategy, the matching degree between its judgment result and the labeling result; and a second determining unit, configured to determine the candidate control strategy with the highest matching degree as the target control strategy.
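One way this selection might look in code is sketched below. The patent does not specify how the matching degree is computed; the agreement-fraction metric, function names, and strategy names here are all assumptions.

```python
def matching_degree(judgments, labels):
    """Assumed metric: fraction of unit segments where a candidate strategy's
    judgment agrees with the final human labeling result."""
    agree = sum(1 for j, l in zip(judgments, labels) if j == l)
    return agree / len(labels)

def select_target_strategy(candidate_results, final_labels):
    """Return the candidate control strategy whose judgment sequence best
    matches the final labeling result."""
    return max(candidate_results,
               key=lambda name: matching_degree(candidate_results[name],
                                                final_labels))

# two hypothetical candidate strategies judged against the human labels
candidates = {"A": ["none", "none", "left"], "B": ["none", "left", "left"]}
best = select_target_strategy(candidates, ["none", "left", "left"])
```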
As an alternative embodiment, the sample scene information includes video information of the running of the sample vehicle, and the labeling submodule includes: an acquisition unit, configured to acquire an initial labeling result, where a plurality of annotators label each frame of the video information to obtain the initial labeling result; and a statistical unit, configured to aggregate the initial labeling result to obtain a final labeling result.
As an alternative embodiment, the statistical unit comprises: an acquisition subunit, configured to take a unit number of consecutive image frames of the video information as one unit segment and determine the labeling result of each unit segment from the initial labeling result of each frame in the segment; a setting subunit, configured to assign different colors to the unit segments according to their time order and labeling results, so as to obtain the color sequence corresponding to each annotator; and a first determining subunit, configured to determine the final labeling result from the color sequences of the plurality of annotators.
As an alternative embodiment, the first determining subunit includes: a second determining subunit, configured to determine the final labeling result of each unit segment, where, among the color sequences of the plurality of annotators, the labeling result represented by the color occurring most often at the same unit segment position is the final labeling result of that unit segment; and a connecting subunit, configured to connect the unit segments in time order to obtain the final labeling result.
As an alternative embodiment, the apparatus further comprises: a second obtaining module, configured to obtain, before the final labeling result of each unit segment is determined, the number of offset unit segments in the color sequence of each annotator, where an offset unit segment is a unit segment whose color differs from the color occurring most often at the same position across the color sequences; a determining module, configured to determine that the annotator is an abnormal annotator if the ratio of the number of offset unit segments to the total number of unit segments in the color sequence is greater than a preset value; and a removing module, configured to remove the color sequence of the abnormal annotator from the plurality of color sequences before proceeding to determine the final labeling result of each unit segment.
Example 3
According to an embodiment of the present invention, there is provided a storage medium including a stored program, wherein, when the program runs, a device in which the storage medium is located is controlled to perform the control method of the vehicle according to embodiment 1.
Example 4
According to an embodiment of the present invention, there is provided a processor configured to run a program, wherein the program, when running, performs the control method of the vehicle according to embodiment 1.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that, for those skilled in the art, various improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (11)
1. A control method of a vehicle, characterized by comprising:
acquiring a driving state parameter of a target vehicle, wherein the target vehicle drives according to a preset planned route, and the planned route comprises: longitudinal planning and transverse planning;
according to the driving state parameters, judging whether to change the transverse planning of the target vehicle based on a target control strategy, wherein the target control strategy is selected from a plurality of preset candidate control strategies, and the driving state parameters comprise: the distance between the target vehicle and other vehicles, the distance between the target vehicle and surrounding objects and the time for the target vehicle to reach the destination;
and continuously controlling the target vehicle to run according to the judgment result.
2. The method of claim 1, wherein the determination comprises: no overtaking, left overtaking and right overtaking.
3. The method of claim 1, further comprising: obtaining the target control strategy, wherein the step of obtaining the target control strategy comprises:
acquiring sample scene information, wherein the sample scene information comprises a sample vehicle and a driving state parameter of the sample vehicle;
judging whether the transverse planning of the sample vehicle is changed or not according to the state parameters of the sample vehicle by using a preset candidate control strategy to obtain a judgment result corresponding to the candidate control strategy;
marking whether the sample vehicle in the sample scene information changes the transverse plan or not to obtain a marking result;
and selecting the target control strategy from the candidate control strategies according to the labeling result and the judgment result.
4. The method of claim 3, wherein selecting the target control strategy from the candidate control strategies according to the labeling result and the determination result comprises:
respectively determining the matching degree of the judgment result of each candidate control strategy and the labeling result;
and determining the candidate control strategy with the highest matching degree as the target control strategy.
5. The method of claim 3, wherein the sample scene information includes video information of the running of the sample vehicle, and the labeling of whether the sample vehicle in the sample scene information changes the transverse plan is performed to obtain a labeling result, including:
acquiring an initial labeling result, wherein a plurality of labeling objects label each frame of the video information to obtain the initial labeling result;
and counting the initial labeling result to obtain a final labeling result.
6. The method of claim 5, wherein the step of counting the initial labeling result to obtain a final labeling result comprises:
acquiring a unit number of consecutive image frames of the video information as a unit segment, and determining the labeling result of the unit segment according to the initial labeling result of each frame image in the unit segment;
setting different colors for the unit segments according to the time sequence of the unit segments and the labeling results of the unit segments to obtain a color sequence corresponding to each labeled object;
and determining the final labeling result according to the color sequence corresponding to the plurality of labeling objects.
7. The method of claim 6, wherein determining the final labeling result according to the color sequence corresponding to the plurality of labeled objects comprises:
determining a final labeling result of each unit segment, wherein, in the plurality of color sequences of the plurality of labeling objects, the labeling result represented by the color occurring most often at the same unit segment is the final labeling result of the unit segment;
and connecting a plurality of unit segments according to a time sequence to obtain a final labeling result.
8. The method of claim 7, wherein prior to determining a final labeling result for each unit segment, the method further comprises:
acquiring the number of offset unit segments in the color sequence of each labeling object, wherein an offset unit segment is a unit segment whose color differs from the color occurring most often at the same unit segment in the other color sequences;
if the ratio of the number of the shifted unit segments to the number of the unit segments included in the color sequence is larger than a preset value, determining that the labeling object is an abnormal labeling object;
and eliminating the color sequence of the abnormal labeling object from the plurality of color sequences, and then determining the final labeling result of each unit segment.
9. A control apparatus of a vehicle, characterized by comprising:
the system comprises an acquisition module, a storage module and a control module, wherein the acquisition module is used for acquiring the running state parameters of a target vehicle, the target vehicle runs according to a preset planned route, and the planned route comprises: longitudinal planning and transverse planning;
a judging module, configured to judge whether to change a lateral plan of the target vehicle based on a target control strategy according to the driving state parameter, where the target control strategy is selected from a plurality of preset candidate control strategies, and the driving state parameter includes: the distance between the target vehicle and other vehicles, the distance between the target vehicle and surrounding objects and the time for the target vehicle to reach the destination;
and the control module is used for continuously controlling the target vehicle to run according to the judgment result.
10. A storage medium characterized by comprising a stored program, wherein a device in which the storage medium is located is controlled to execute the control method of the vehicle according to any one of claims 1 to 8 when the program is executed.
11. A processor, characterized in that the processor is configured to run a program, wherein the program is executed to execute the control method of the vehicle according to any one of claims 1 to 8 when running.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010329249.2A CN111497847B (en) | 2020-04-23 | 2020-04-23 | Vehicle control method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111497847A CN111497847A (en) | 2020-08-07 |
CN111497847B true CN111497847B (en) | 2021-11-16 |
Family
ID=71867695
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010329249.2A Active CN111497847B (en) | 2020-04-23 | 2020-04-23 | Vehicle control method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111497847B (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103935361A (en) * | 2013-01-21 | 2014-07-23 | 通用汽车环球科技运作有限责任公司 | Efficient data flow algorithms for autonomous lane changing, passing and overtaking behaviors |
CN105564432A (en) * | 2014-11-04 | 2016-05-11 | 沃尔沃汽车公司 | Method and system for assisting overtaking |
CN106218637A (en) * | 2016-08-08 | 2016-12-14 | 合肥泰好乐电子科技有限公司 | A kind of automatic Pilot method |
CN106564498A (en) * | 2015-10-07 | 2017-04-19 | Trw有限公司 | Vehicle safety system |
CN107792065A (en) * | 2016-08-29 | 2018-03-13 | 沃尔沃汽车公司 | The method of road vehicle trajectory planning |
CN108068815A (en) * | 2016-11-14 | 2018-05-25 | 百度(美国)有限责任公司 | System is improved for the decision-making based on planning feedback of automatic driving vehicle |
CN108216237A (en) * | 2016-12-16 | 2018-06-29 | 现代自动车株式会社 | For controlling the device and method of the autonomous driving of vehicle |
CN108573242A (en) * | 2018-04-26 | 2018-09-25 | 南京行车宝智能科技有限公司 | A kind of method for detecting lane lines and device |
CN109017786A (en) * | 2018-08-09 | 2018-12-18 | 北京智行者科技有限公司 | Vehicle obstacle-avoidance method |
CN109703569A (en) * | 2019-02-21 | 2019-05-03 | 百度在线网络技术(北京)有限公司 | A kind of information processing method, device and storage medium |
CN109753975A (en) * | 2019-02-02 | 2019-05-14 | 杭州睿琪软件有限公司 | Training sample obtaining method and device, electronic equipment and storage medium |
CN109849917A (en) * | 2019-03-07 | 2019-06-07 | 深圳鸿鹏新能源科技有限公司 | Control method, system and the vehicle of vehicle |
CN110325935A (en) * | 2017-09-18 | 2019-10-11 | 百度时代网络技术(北京)有限公司 | The lane guide line based on Driving Scene of path planning for automatic driving vehicle |
CN110550030A (en) * | 2019-09-09 | 2019-12-10 | 深圳一清创新科技有限公司 | Lane changing control method and device for unmanned vehicle, computer equipment and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6237694B2 (en) * | 2015-04-28 | 2017-11-29 | トヨタ自動車株式会社 | Travel control device |
CN108460968A (en) * | 2017-02-22 | 2018-08-28 | 中兴通讯股份有限公司 | A kind of method and device obtaining traffic information based on car networking |
2020
- 2020-04-23 CN CN202010329249.2A patent/CN111497847B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN111497847A (en) | 2020-08-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111192311B (en) | Automatic extraction method and device for longitudinal deceleration marked line in high-precision map making | |
US20170344855A1 (en) | Method of predicting traffic collisions and system thereof | |
JP7220169B2 (en) | Information processing method, device, storage medium, and program | |
CN111899515B (en) | Vehicle detection system based on wisdom road edge calculates gateway | |
CN111033589A (en) | Lane information management method, travel control method, and lane information management device | |
CN111222522B (en) | Neural network training, road surface detection and intelligent driving control method and device | |
CN104875740B (en) | For managing the method for following space, main vehicle and following space management unit | |
CN113085894A (en) | Vehicle control method and device and automatic driving vehicle | |
CN111429512B (en) | Image processing method and device, storage medium and processor | |
CN113071523A (en) | Control method and control device for unmanned vehicle and unmanned vehicle | |
CN113147793A (en) | Electronic map updating method and device and automatic driving vehicle | |
CN111497847B (en) | Vehicle control method and device | |
CN109635719A (en) | A kind of image-recognizing method, device and computer readable storage medium | |
CN114140396A (en) | Road surface damage detection method, system, device and medium based on unmanned aerial vehicle image | |
CN113276859A (en) | Vehicle control method, vehicle control device, computer equipment and storage medium | |
CN110696828B (en) | Forward target selection method and device and vehicle-mounted equipment | |
CN116576872A (en) | Route planning method, device, equipment and medium for unmanned cleaning vehicle | |
CN114547403B (en) | Method, device, equipment and storage medium for collecting variable-track scene | |
CN115984824A (en) | Scene information screening method based on track information, electronic equipment and storage medium | |
CN115071713A (en) | Method for determining time-varying road opportunity of off-ramp and electronic equipment | |
CN113085861A (en) | Control method and device for automatic driving vehicle and automatic driving vehicle | |
CN116415619A (en) | Method for extracting characteristics from traffic scene data based on graph neural network | |
CN113147792A (en) | Vehicle control method and device and automatic driving vehicle | |
CN114429622A (en) | Automatic vehicle driving method and device, vehicle and storage medium | |
CN113147791A (en) | Vehicle control method and device and automatic driving vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||