CN114802264A - Vehicle control method and device and electronic equipment - Google Patents

Vehicle control method and device and electronic equipment

Info

Publication number
CN114802264A
Authority
CN
China
Prior art keywords
target
driving
scene
vehicle
data
Prior art date
Legal status
Pending
Application number
CN202210462846.1A
Other languages
Chinese (zh)
Inventor
吕欢欢
李涵
徐海强
邵天东
李振洋
王明月
Current Assignee
FAW Group Corp
Original Assignee
FAW Group Corp
Priority date
Filing date
Publication date
Application filed by FAW Group Corp
Priority to CN202210462846.1A
Publication of CN114802264A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08: Estimation or calculation of such parameters related to drivers or passengers
    • B60W2520/00: Input parameters relating to overall vehicle dynamics
    • B60W2520/10: Longitudinal speed
    • B60W2520/105: Longitudinal acceleration
    • B60W2530/00: Input parameters relating to vehicle conditions or values, not covered by groups B60W2510/00 or B60W2520/00
    • B60W2530/18: Distance travelled
    • B60W2540/00: Input parameters relating to occupants

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a vehicle control method, a vehicle control device, and an electronic device. The method comprises the following steps: acquiring vehicle driving data of a target vehicle, wherein the target vehicle is driven by a target object; performing scene recognition on the vehicle driving data to obtain a target scene of the vehicle driving data; extracting target driving characteristics of the vehicle driving data based on the target scene; and determining the driving proficiency of the target object in the target scene based on the target driving characteristics. The invention solves the technical problem that the prior art evaluates a driver's driving proficiency in different driving scenes with low accuracy.

Description

Vehicle control method and device and electronic equipment
Technical Field
The invention relates to the field of Internet of vehicles big data and machine learning, in particular to a vehicle control method, a vehicle control device and electronic equipment.
Background
With the development of artificial intelligence, the Internet of Vehicles, and big data technology, the automobile industry is gradually moving from the mechanical age to the intelligent age. Industrial technology, represented by automobile manufacturing, is being deeply integrated with information technology, represented by artificial intelligence, promoting technical innovation in the automobile industry. Internet of Vehicles big data technology, one of the important technologies of the intelligent era, is developing vigorously with the emergence of various technologies. Driving behavior analysis is an important part of Internet of Vehicles big data and has important applications in fields such as safe-driving reminders and vehicle insurance. Driving proficiency analysis is in turn an important component of driving behavior analysis and has a wide range of application scenarios. In the prior art, most driving proficiency analysis focuses on cluster analysis that directly uses driving behavior features, or realizes proficiency evaluation by comparing driving behavior feature data with preset expert-experience data through methods such as numerical differences and weight allocation. However, the same driving operation often reflects different proficiency levels in different driving scenes; without expert experience as label data, driving proficiency is difficult to evaluate reasonably, and the evaluation results may be unexplainable to varying degrees.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a vehicle control method, a vehicle control device and electronic equipment, and at least solves the technical problem that the accuracy of evaluating the driving proficiency of a driver in different driving scenes is low by adopting the prior art.
According to an aspect of an embodiment of the present invention, there is provided a vehicle control method including: acquiring vehicle driving data of a target vehicle, wherein the target vehicle is driven by a target object; carrying out scene recognition on the vehicle driving data to obtain a target scene of the vehicle driving data; extracting target driving characteristics of vehicle driving data based on the target scene; and determining the driving proficiency of the target object in the target scene based on the target driving characteristics.
Optionally, performing scene recognition on the vehicle driving data to obtain a target scene of the vehicle driving data includes: preprocessing the vehicle driving data to obtain a preprocessing result, wherein the preprocessing includes at least one of the following: null-value filling, abnormal-value replacement, time-sequence sorting, and outlier rejection; and recognizing the vehicle driving data based on the preprocessing result to obtain the target scene of the vehicle driving data.
Optionally, the target scene includes at least one travel scene, and the identifying of the vehicle driving data based on the preprocessing result obtains the target scene of the vehicle driving data, including: determining at least one driving cycle of the target vehicle based on the preprocessing result and the vehicle driving data, wherein the driving cycle is used for representing a driving range from ignition to flameout of the target vehicle; determining at least one target trip of the target vehicle based on a preset time interval and at least one driving cycle; and carrying out scene recognition on at least one target trip to obtain a trip scene corresponding to each target trip.
Optionally, extracting the target driving feature of the vehicle driving data based on the target scene includes: and extracting target running characteristics corresponding to each target travel in the vehicle running data based on the travel scene.
Optionally, determining the driving proficiency of the target object in the target scene based on the target driving characteristics comprises: recognizing the target scene and the target driving characteristics by using a first proficiency model to determine the driving proficiency of the target object in the target scene; wherein the first proficiency model is trained from first training data, and the first training data comprises: a first sample driving characteristic, a first sample scene, and sample label information, wherein the first sample scene is a scene corresponding to the first sample driving characteristic, and the sample label information is used for describing the driving proficiency of the target object in the sample scene.
Optionally, determining the driving proficiency of the target object in the target scene based on the target driving characteristics comprises: recognizing the target scene and the target driving characteristics by using a second proficiency model to determine the driving proficiency of the target object in the target scene; wherein the second proficiency model is trained from second training data, and the second training data comprises: a second sample driving characteristic, a second sample scene, and sample weight values, wherein the second sample scene is a scene corresponding to the second sample driving characteristic, and the sample weight values are used for describing the weight of the second sample scene among all the sample scenes.
Optionally, the target scene comprises at least one of the following: a starting scene, a driving scene, and a parking and warehousing scene, wherein the driving scene comprises at least one of the following: urban street roads, expressways, mountain roads, and country roads; and the vehicle driving data comprises at least one of the following: turn-signal state, maximum vehicle speed, average vehicle speed, maximum acceleration, average acceleration, number of steering-wheel angle changes, maximum steering-wheel angle, number of lane changes, number of accelerations, number of rapid turns, number of overspeed events, driving duration, and driving mileage.
According to another aspect of the embodiments of the present invention, there is also provided a control apparatus for a target vehicle, comprising: an acquisition module, configured to acquire vehicle driving data of a target vehicle, wherein the target vehicle is driven by a target object; a recognition module, configured to perform scene recognition on the vehicle driving data to obtain a target scene of the vehicle driving data; an extraction module, configured to extract target driving characteristics of the vehicle driving data based on the target scene; and a determining module, configured to determine the driving proficiency of the target object in the target scene based on the target driving characteristics.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including: one or more processors; storage means for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors are caused to execute the control method of the target vehicle of any one of the above.
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium including a stored program, wherein the processor of the device is controlled to execute the control method of the target vehicle according to any one of the above items when the program runs.
In the embodiment of the invention, vehicle driving data of a target vehicle is first acquired, wherein the target vehicle is driven by a target object; next, scene recognition is performed on the vehicle driving data to obtain a target scene of the vehicle driving data; then, target driving characteristics of the vehicle driving data are extracted based on the target scene; and finally, the driving proficiency of the target object in the target scene is determined based on the target driving characteristics. Since the target characteristics of the vehicle driving data are extracted based on the target scene, and the driving proficiency in the target scene is then analyzed based on those characteristics, the accuracy of evaluating the driver's proficiency in the target scene can be improved, which solves the technical problem that the prior art evaluates a driver's driving proficiency in different driving scenes with low accuracy.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a vehicle control method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a data acquisition module according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a scene recognition module according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a proficiency evaluation module according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a vehicle control apparatus according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the present invention, there is provided a method embodiment of vehicle control, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than presented herein.
Fig. 1 is a flowchart of a vehicle control method according to an embodiment of the present invention, as shown in fig. 1, including the steps of:
step S102, vehicle travel data of a target vehicle is acquired, wherein the target vehicle is driven by a target object.
The target vehicle may be a vehicle being driven by the target object.
The target object may be a driver whose driving proficiency is to be analyzed.
In an optional embodiment, in order to improve the integrity of data transmission, a vehicle-end model algorithm may be deployed at a vehicle end, and the high-frequency signal is calculated in real time through the vehicle-end model algorithm to obtain more accurate driving behavior data, so that the proficiency of the driver can be analyzed through the driving behavior data in the following.
Fig. 2 is a schematic diagram of a data acquisition module according to an embodiment of the present invention, as shown in the figure, various bus signal data in a vehicle may be acquired by a vehicle end in real time, where the signal data may include vehicle encoding data, data generation time data, real-time vehicle speed data, real-time mileage data, left and right turn signal status data, longitudinal acceleration data, lateral acceleration data, steering wheel rotation angle data, steering wheel rotation speed data, position information data, and the like, and the acquired data is transmitted to a cloud end through a vehicle networking system (also called a vehicle-mounted T-box device) via a network. The cloud is responsible for receiving the uploaded internet of vehicles data in real time and storing the internet of vehicles data in a database of the cloud.
And step S104, carrying out scene recognition on the vehicle driving data to obtain a target scene of the vehicle driving data.
The target scene can be a starting scene, a driving scene and a parking and warehousing scene. Wherein the driving scene may include at least one of: urban street roads, expressway roads, mountain roads, rural roads.
In an optional embodiment, the vehicle driving data may be preprocessed before the scene recognition is performed on the vehicle driving data, so as to obtain a preprocessed result, where the preprocessing includes at least one of: null value filling, abnormal value replacement, time sequence sorting and outlier rejection. And then identifying the vehicle driving data based on the preprocessing result to obtain a target scene of the vehicle driving data.
Fig. 3 is a schematic diagram of a scene recognition module according to an embodiment of the present invention, as shown in the figure, after big data is acquired by a cloud, preprocessing is performed first, then, a trip is divided, and then, the big data enters the scene recognition module to perform scene recognition. When recognizing the scene, the current scene of the vehicle may be judged according to the driving state of the vehicle and the generated real-time data, and optionally, the current driving stage of the vehicle may be judged according to a preset rule, and then the current scene of the target vehicle may be obtained based on the driving stage determination and the real-time data generated by the vehicle. The preset rule can be set by self, and the current running state of the vehicle can be judged based on the preset rule.
In another optional embodiment, when the vehicle is judged to be in the starting stage, the maximum vehicle speed, driving distance, and driving duration of the vehicle are collected. When the vehicle is judged to be in the driving stage, feature extraction is performed first: the maximum vehicle speed, average vehicle speed, driving duration, acceleration, number of turns, number of lane changes, altitude, and driving track are extracted, and the driving scene is then judged through a logistic regression algorithm, where the driving scene can be an urban street road, an expressway, a mountain road, or a country road. When the vehicle is judged to be in the parking and warehousing stage, the maximum vehicle speed, driving distance, driving duration, and the like need to be collected. After scene recognition is finished, the target scene of the vehicle driving data is obtained; the scene recognition module can automatically output the scene, which is used subsequently to extract the target characteristics of the vehicle driving data.
And step S106, extracting the target driving characteristics of the vehicle driving data based on the target scene.
The vehicle driving data may include, but is not limited to, at least one of the following: turn-signal state, maximum vehicle speed, average vehicle speed, maximum acceleration, average acceleration, number of steering-wheel angle changes, maximum steering-wheel angle, number of lane changes, number of accelerations, number of rapid turns, number of overspeed events, driving duration, and driving mileage.
The target driving characteristics may include, but are not limited to, a maximum vehicle speed, an average vehicle speed, a driving time, an acceleration, a number of turns, a number of lane changes, an altitude, and a driving trajectory.
In an alternative embodiment, the target driving characteristics corresponding to each target trip in the vehicle driving data may be extracted based on the trip scene, so as to obtain the target driving characteristics of the vehicle driving data. The target travel can be a driving travel, the target travel can also be a starting travel, and the target travel can also be a parking and warehousing travel.
And step S108, determining the driving proficiency of the target object in the target scene based on the target running characteristics.
The driving proficiency may be an operation proficiency level of the target object when the target object drives the vehicle.
In an alternative embodiment, the target driving characteristics and the target scene corresponding to the target driving characteristics can be identified by using the first proficiency model, and the driving proficiency of the target object in the target scene is determined. Wherein, the first proficiency model is obtained by training first training data in a supervised training mode, and the first training data comprises: a first sample driving characteristic, a first sample scene, and sample label information. The first sample scene is a scene corresponding to the first sample driving characteristic, and the sample label information is used for describing the driving proficiency of the target object in the sample scene.
In another alternative embodiment, the target driving characteristics and the target scene corresponding to the target driving characteristics may be recognized by using the second proficiency model to determine the driving proficiency of the target object in the target scene. The second proficiency model is obtained by training on second training data in an unsupervised manner, and the second training data comprises: a second sample driving characteristic, a second sample scene, and sample weight values. The second sample scene is a scene corresponding to the second sample driving characteristic, and the sample weight values describe the weight of the second sample scene among all the sample scenes.
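As an illustration of how an unsupervised weighting model of this kind can produce scene or feature weight values, the following sketch derives normalized weights with the Analytic Hierarchy Process via the geometric-mean method. It is a hypothetical example under assumed inputs, not the patent's actual implementation; in practice the pairwise-comparison matrix would come from expert judgments.

```python
import math

def ahp_weights(pairwise):
    """Analytic Hierarchy Process (AHP) sketch: derive normalized weights
    from a reciprocal pairwise-comparison matrix via the geometric-mean
    (approximate principal-eigenvector) method. Illustrative only."""
    n = len(pairwise)
    # the geometric mean of each row approximates the principal eigenvector
    gmeans = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]
```

For example, for two features where the first is judged three times as important as the second, this method yields weights of roughly 0.75 and 0.25.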
In this step, the target scene and the target driving characteristics are recognized by the first proficiency model or the second proficiency model, so that the efficiency of acquiring the driving proficiency of the target object in the target scene can be improved, and the accuracy of evaluating the driving proficiency of the target object can be further improved.
Fig. 4 is a schematic diagram of a proficiency evaluation module according to an embodiment of the present invention. As shown in the drawing, after the scene recognition result is obtained and its features are extracted, proficiency evaluation is performed based on the extracted feature data. Optionally, a model may be trained on the extracted feature data; specifically, the model may be a supervised-learning logistic regression method, or an unsupervised-learning method such as the Analytic Hierarchy Process (AHP) or the entropy weight method (EW) used for analysis, and the evaluation result is finally output.
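Assuming EW here denotes the entropy weight method (the original translation is garbled at this point, so this reading is an assumption), a minimal sketch of deriving objective feature weights from a score matrix might look as follows; the function and variable names are our own:

```python
import math

def entropy_weights(matrix):
    """Entropy weight (EW) method sketch: for a samples-by-features matrix
    of non-negative scores, features whose values vary more across samples
    carry more information (lower entropy) and receive larger weights."""
    n, m = len(matrix), len(matrix[0])
    weights = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [c / total for c in col if c > 0]
        # normalized Shannon entropy of feature j, in [0, 1]
        entropy = -sum(p * math.log(p) for p in probs) / math.log(n)
        weights.append(1.0 - entropy)  # divergence degree of feature j
    s = sum(weights)
    return [w / s for w in weights]
```

A feature that is constant across all samples has maximal entropy and therefore receives zero weight; a feature that varies strongly absorbs the remaining weight.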
Through the above steps, vehicle driving data of a target vehicle is first acquired, wherein the target vehicle is driven by a target object; next, scene recognition is performed on the vehicle driving data to obtain a target scene of the vehicle driving data; then, target driving characteristics of the vehicle driving data are extracted based on the target scene; and finally, the driving proficiency of the target object in the target scene is determined based on the target driving characteristics. Since the target characteristics of the vehicle driving data are extracted based on the target scene, and the driving proficiency in the target scene is then analyzed based on those characteristics, the accuracy of evaluating the driver's proficiency in the target scene can be improved, which solves the technical problem that the prior art evaluates a driver's driving proficiency in different driving scenes with low accuracy.
Optionally, performing scene recognition on the vehicle driving data to obtain a target scene of the vehicle driving data includes: preprocessing the vehicle driving data to obtain a preprocessing result, wherein the preprocessing includes at least one of the following: null-value filling, abnormal-value replacement, time-sequence sorting, and outlier rejection; and recognizing the vehicle driving data based on the preprocessing result to obtain the target scene of the vehicle driving data.
In an optional embodiment, the vehicle driving data may be sorted according to the time of creating the vehicle driving data, the abnormal value and the repeated value may be deleted, the null value may be filled with an integer 0, and meanwhile, irrelevant fields may be eliminated, so that the vehicle driving data may be preprocessed, and the accuracy of scene recognition using the vehicle driving data may be further improved.
In another optional embodiment, outliers can be deleted by methods such as the box plot and the quartering method based on a reasonable range of the vehicle driving data, so as to obtain data with higher accuracy and improve the accuracy of scene recognition using the vehicle driving data. A box plot (also called a box-and-whisker plot) is a statistical chart for displaying the dispersion of data. The quartering method (also called quartering sampling, or coning and quartering) is a division procedure in which each sample is piled into a uniform cone, pressed into a frustum, and divided into quarters by a cross-shaped frame.
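A minimal sketch of this preprocessing (time-sequence sorting, null-value filling with 0, and box-plot/IQR outlier rejection on the speed signal) might look as follows; the record keys "ts" and "speed" are assumed names, and the quartile indexing is deliberately crude:

```python
def preprocess(records):
    """Sketch of the preprocessing described above. records is a list of
    dicts with assumed keys 'ts' (timestamp) and 'speed' (km/h)."""
    # time-sequence sorting
    rows = sorted(records, key=lambda r: r["ts"])
    # null-value filling with integer 0
    rows = [dict(r, speed=(r["speed"] if r["speed"] is not None else 0))
            for r in rows]
    # box-plot (IQR) rule for outlier rejection on the speed signal
    speeds = sorted(r["speed"] for r in rows)
    n = len(speeds)
    q1, q3 = speeds[n // 4], speeds[(3 * n) // 4]  # crude quartile picks
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [r for r in rows if lo <= r["speed"] <= hi]
```

Records whose speed falls outside the 1.5-IQR whiskers (including any filled zeros that end up below the lower whisker) are dropped before scene recognition.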
Optionally, the target scene includes at least one travel scene, and the identifying of the vehicle driving data based on the preprocessing result obtains the target scene of the vehicle driving data, including: determining at least one driving cycle of the target vehicle based on the preprocessing result and the vehicle driving data, wherein the driving cycle is used for representing a driving range from ignition to flameout of the target vehicle; determining at least one target trip of the target vehicle based on a preset time interval and at least one driving cycle; and carrying out scene recognition on at least one target trip to obtain a trip scene corresponding to each target trip.
The preset time interval can be set by itself.
The target travel can be a driving travel, a starting travel, a parking and warehousing travel and the like.
The travel scene can be a city street road, an expressway, a mountain road, a country road and the like.
The above-described driving cycle may be used to represent the driving range from one ignition of the target vehicle to flameout. The driving cycle is the minimum unit of a driving trip, and the same trip may consist of several driving cycles. A driving cycle starts when the ignition switch is ON and the engine or motor speed is valid; it ends when the ignition switch is in LOCK and the engine or motor speed is invalid, or when the time difference between two consecutive data records exceeds a preset time interval. The preset time interval can be set as needed: only when the time difference between flameout and the next ignition exceeds the preset time interval is the driving cycle considered finished. Setting the preset time interval effectively prevents a sudden stall caused by an emergency from being counted as the end of a driving cycle, thereby reducing the misjudgment rate.
In an alternative embodiment, when the beginning and end of a driving cycle are detected, start and end identifiers may be attached to the driving cycle to facilitate quick recognition.
In another alternative embodiment, when the interval between the end time of one driving cycle and the start time of the next driving cycle is less than the preset time interval, the driving cycles may be merged into one trip, and start and end identifiers may be added to the trip to facilitate quick recognition.
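The cycle-to-trip merging rule above can be sketched as follows; timestamps in seconds and a 10-minute preset interval are assumptions for illustration:

```python
def merge_cycles_into_trips(cycles, gap_s=600):
    """cycles: time-ordered list of (start, end) timestamps in seconds,
    one pair per driving cycle (ignition-on to ignition-off). Consecutive
    cycles whose gap is below gap_s (the preset time interval, assumed
    here to be 10 minutes) are merged into a single trip."""
    trips = [list(cycles[0])]
    for start, end in cycles[1:]:
        if start - trips[-1][1] < gap_s:
            trips[-1][1] = end          # extend the current trip
        else:
            trips.append([start, end])  # start a new trip
    return [tuple(t) for t in trips]
```

A brief stall and restart two minutes later thus stays inside one trip, while a gap longer than the preset interval opens a new trip.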
When judging the vehicle starting scene, the start of the driving cycle can be regarded as the starting point of the data window, and when the conditions V_max > v, M_acc > m, and T_acc > t are met simultaneously, the data window is exited, the segment data is divided into the vehicle starting scene, and a corresponding identifier is attached. Here V_max, M_acc, and T_acc are respectively the maximum vehicle speed, accumulated driving distance, and accumulated driving time within the data window, and v, m, and t are respectively the maximum-speed, driving-distance, and driving-time thresholds for the vehicle starting scene. The thresholds may be determined by expert experience or by real-vehicle calibration experiments.
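The windowing rule above can be sketched as follows; the threshold values and the record layout are illustrative assumptions, not values from the patent:

```python
def detect_start_scene(records, v_max_th=20.0, dist_th=100.0, time_th=60.0):
    """Scan records from the start of a driving cycle and return the index
    at which the starting scene ends: the first point where the window's
    maximum speed, accumulated distance, and accumulated time all exceed
    their thresholds (V_max > v, M_acc > m, T_acc > t). records is a list
    of (speed_kmh, dist_m, dt_s) tuples; thresholds are illustrative."""
    v_max = m_acc = t_acc = 0.0
    for i, (speed, dist, dt) in enumerate(records):
        v_max = max(v_max, speed)
        m_acc += dist
        t_acc += dt
        if v_max > v_max_th and m_acc > dist_th and t_acc > time_th:
            return i  # exit the data window: starting scene is records[:i+1]
    return len(records) - 1
```

Because all three conditions must hold simultaneously, a brief speed spike alone does not end the starting scene until enough distance and time have also accumulated.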
When the driving scene of the vehicle is judged, the scene in the driving process can be identified, wherein the driving scene can comprise urban street roads, expressways, mountain roads and country roads.
In an alternative embodiment, the vehicle-starting-scene end identifier may be regarded as the starting point of the data window in the vehicle driving scene, the driving-cycle end identifier may be regarded as the end point of the data window, and the data features of the segment may be extracted. The data features may include: maximum speed, average speed, driving time, maximum acceleration, average acceleration, number of turns, number of lane changes, number of steering wheel angle changes, average steering wheel speed, maximum steering wheel angle, average altitude, maximum altitude, average curvature of the driving track, etc., which may be denoted x_1, x_2, x_3, ..., x_n for convenience of subsequent discussion. A logistic regression algorithm is taken as the model algorithm of the scene recognition module for offline data training; scene labels can be acquired through real-vehicle experiments or questionnaires. Four binary classifiers, one for each of the four scenes, are finally obtained, the probability value of each scene is output according to the following formula, and the scene with the maximum probability value is taken as the predicted scene:
p = 1 / (1 + e^-(w_1·x_1 + w_2·x_2 + ... + w_n·x_n + b))
wherein w_1, w_2, w_3, ..., w_n are the weights of the different features, b is a constant term, e is the natural base, and p is the probability of the current scene. Finally, scene identifiers are attached to the data segments according to the scenes identified by the algorithm.
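The one-vs-rest prediction described above can be sketched as follows, assuming four fitted weight vectors and biases (the function names and example values are illustrative, not from the publication):

```python
import numpy as np

def scene_probability(x, w, b):
    """One binary classifier: p = 1 / (1 + e^-(w.x + b))."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def predict_scene(x, classifiers):
    """classifiers maps scene name -> (w, b). Evaluate each binary
    classifier and return the scene with the largest probability,
    together with all per-scene probabilities."""
    probs = {name: scene_probability(x, w, b)
             for name, (w, b) in classifiers.items()}
    return max(probs, key=probs.get), probs
```

With real data, the four (w, b) pairs would come from offline logistic-regression training against labels gathered by real-vehicle experiments or questionnaires, as the text describes.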
When the parking-and-warehousing scene is judged, the driving-cycle end identifier can be regarded as the starting point of the data window in the parking-and-warehousing scene, and the window is then pushed backward toward earlier data. When the conditions V_max > v, M_acc > m and T_acc > t are met simultaneously, the data window is exited and the segment data is divided into the parking-and-warehousing scene. Here V_max, M_acc and T_acc are respectively the maximum vehicle speed, the accumulated running distance and the accumulated running time within the data window, and v, m and t are respectively the maximum-vehicle-speed threshold, running-distance threshold and running-time threshold of the parking scene. The respective thresholds may be determined by expert experience or by real-vehicle calibration experiments.
After scene recognition is finished, the scene recognition module outputs identification information for the driving-cycle division, trip division and scene division of the offline Internet-of-Vehicles data, so that the target driving features corresponding to the target scene can be extracted from the vehicle driving data, improving the accuracy of evaluating the driving proficiency of the target object in the target scene.
Optionally, extracting the target driving feature of the vehicle driving data based on the target scene includes: extracting the target driving features corresponding to each target trip in the vehicle driving data based on the trip scene.
In an optional embodiment, the driving features corresponding to each trip can be obtained through scene extraction, so that the target driving features corresponding to each target trip in the vehicle driving data are extracted.
Wherein, according to the scene result identified by the scene identification module, extracting the characteristics of each different scene comprises:
when the target scene is vehicle starting, relevant data information such as the turn-signal state, maximum vehicle speed, driving duration, driving mileage and number of rapid accelerations can be extracted.
When the driving scene is an urban street, the relevant data information such as the maximum vehicle speed, the average vehicle speed, the maximum acceleration, the average acceleration, the number of rapid accelerations, the number of rapid turns, the number of overspeed, and the like can be extracted.
When the driving scene is an expressway, relevant data information such as a maximum vehicle speed, an average vehicle speed, a maximum acceleration, an average acceleration, the number of times of change of a steering wheel angle, a maximum steering wheel angle, the number of times of lane change, the number of accelerations, and the number of rapid accelerations may be extracted.
When the driving scene is a mountain road, relevant data information such as the maximum vehicle speed, the average vehicle speed, the number of rapid acceleration times, the number of rapid deceleration times and the like can be extracted.
In the case where the driving scene is a rural road, the relevant data information such as the maximum vehicle speed, the average vehicle speed, the maximum acceleration, the average acceleration, the number of rapid accelerations, the number of rapid turns, the number of overspeed, and the like may be extracted.
Under the condition that the target scene is parking and warehousing, relevant data information such as the maximum vehicle speed, driving duration, driving mileage and number of rapid accelerations can be extracted.
Optionally, determining the driving proficiency of the target object in the target scene based on the target driving characteristics comprises: identifying the target scene and the target driving characteristics by using a first proficiency model, and determining the driving proficiency of the target object in the target scene; wherein the first proficiency model is trained from first training data, the first training data comprising: a first sample driving characteristic, a first sample scene and sample label information, wherein the first sample scene is a scene corresponding to the first sample driving characteristic, and the sample label information is used for describing the driving proficiency of the target object in the sample scene.
The first proficiency model may be a supervised learning model. Supervised learning applies when label data corresponding to driving proficiency in different driving scenes is available, for example label data obtained by expert scoring, questionnaires or real-vehicle calibration; proficiency evaluation values for the corresponding driving scenes can then be output through a logistic regression algorithm.
Taking the proficiency value in each scene as a label, offline data training of a logistic regression algorithm is performed with the extracted features, obtaining the following formula:
y = 1 / (1 + e^-(w_1·x_1 + w_2·x_2 + ... + w_n·x_n + b))
wherein w_1, w_2, w_3, ..., w_n are the weights of the different features, b is a constant term, e is the natural base, and y is the predicted evaluation value of the current scene.
Then, taking the final proficiency evaluation value as a label and the per-scene proficiency values output in the previous step as features, offline data training is performed with the same method to obtain the following formula:
s = 1 / (1 + e^-(v_1·y_1 + v_2·y_2 + ... + v_n·y_n + c))
wherein v_1, v_2, v_3, ..., v_n are the weights of the different features, c is a constant term, e is the natural base, y_1, ..., y_n are the per-scene proficiency scores, and s is the final proficiency evaluation score of the current driver, which is multiplied by 100 to obtain a percentile value.
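The two-stage supervised scoring described above can be sketched as follows, assuming the weights w, v and constants b, c have already been fitted offline (function names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def scene_score(features, w, b):
    """First stage: per-scene proficiency y = sigmoid(w.x + b),
    computed from the features extracted for that scene."""
    return sigmoid(np.dot(w, features) + b)

def overall_score(scene_scores, v, c):
    """Second stage: fuse the per-scene scores y_i with weights v_i into
    s = sigmoid(v.y + c), then multiply by 100 for a percentile value."""
    return 100.0 * sigmoid(np.dot(v, scene_scores) + c)
```

With zero weights and bias both stages output the neutral midpoint (0.5 and 50), which is a convenient sanity check before plugging in fitted parameters.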
Optionally, determining the driving proficiency of the target object in the target scene based on the target driving characteristics comprises: identifying the target scene and the target driving characteristics by using a second proficiency model, and determining the driving proficiency of the target object in the target scene; wherein the second proficiency model is trained from second training data, the second training data comprising: a second sample driving characteristic, a second sample scene and sample weight values, wherein the second sample scene is a scene corresponding to the second sample driving characteristic, and the sample weight values are used for describing the weight values of the second sample scene among all sample scenes.
The second proficiency model may be an unsupervised learning model. Unsupervised learning means that, when corresponding driving-proficiency label data cannot be obtained, the established multi-scene feature system can be used to quantitatively evaluate the driver's proficiency according to the scene system through the Analytic Hierarchy Process (AHP) and the Entropy Weight method (EW), which specifically comprises the following steps:
In the first step, all feature data are subjected to non-dimensionalization and normalization. Non-dimensionalization removes the physical units of the quantities, for example by substituting suitable ratio variables; normalization converts the different groups of data into a common format and scale so that they can be compared.
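As a minimal sketch of this preprocessing step, column-wise min-max scaling (one common choice; the text does not fix a specific formula) could look like:

```python
import numpy as np

def min_max_normalize(X):
    """Column-wise min-max scaling of an (samples, features) matrix to
    [0, 1]; the result is unitless, serving as a simple combined
    non-dimensionalization and normalization step."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard constant columns
    return (X - lo) / span
```

Constant columns are mapped to zero rather than dividing by zero, which keeps the downstream entropy-weight computation well defined.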
Secondly, the Analytic Hierarchy Process (AHP), an unsupervised learning model, is applied to divide the evaluation target into a target layer, a criterion layer and a sub-criterion layer, wherein the criterion layer may contain m criteria, each criterion containing n_1, n_2, ..., n_m sub-criteria. By constructing pairwise-comparison judgment matrices, the criterion-layer weight vector A = {λ_1, λ_2, ..., λ_m} and the sub-criterion-layer weight vector U = {μ_1, μ_2, ..., μ_n} are obtained. The evaluation target can be the target driving characteristics and the target scenes corresponding to them; the target layer corresponds to proficiency evaluation, the criterion layer to the evaluation of the different driving scenes, and the sub-criterion layer to the evaluation of the features in each scene.
Thirdly, the entropy weight method is used to obtain the sub-criterion entropy weight vector V = {v_1, v_2, ..., v_n}, i.e. the entropy weight vector of the sub-criterion layer.
Fourthly, the sub-criterion-layer weight vector U and the entropy weight vector V calculated by the entropy weight method are combined into a vector Z = {z_1, z_2, ..., z_n}, wherein

z_i = (μ_i·v_i) / (μ_1·v_1 + μ_2·v_2 + ... + μ_n·v_n)
thereby, comprehensive weights of features in each scene can be obtained.
Fifthly, the score of each scene is obtained using the vector Z, the final proficiency score is obtained by combining it with the criterion-layer weight vector A, and the result is linearly mapped to a score interval of 0-100.
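Steps three and four above can be sketched as follows; the matrix orientation and the exact entropy formula follow standard entropy-weight-method conventions assumed here, since the text does not spell them out:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: X is an (m samples, n features) matrix of
    normalized non-negative values; returns the entropy weight vector V
    of the n sub-criteria. Features that vary more across samples carry
    less entropy and therefore receive larger weights."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                    # column-wise proportions
    m = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)  # 0*log(0) taken as 0
    E = -(P * logs).sum(axis=0) / np.log(m)  # entropy per feature
    d = 1.0 - E                              # degree of divergence
    return d / d.sum()

def combine_weights(U, V):
    """z_i = (mu_i * v_i) / sum_j(mu_j * v_j): fuse the AHP sub-criterion
    weights U with the entropy weights V into the comprehensive vector Z."""
    z = np.asarray(U) * np.asarray(V)
    return z / z.sum()
```

Multiplying and renormalizing keeps Z a proper weight vector (non-negative, summing to 1), so it can be dotted directly with the feature scores of each scene in step five.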
After the second proficiency model is trained, it can be converted into engineering code and deployed on a cloud server, so that the driving proficiency of the driver in the target scene can be evaluated monthly or quarterly.
Optionally, the target scene comprises at least one of: the system comprises a starting scene, a driving scene and a parking and warehousing scene, wherein the driving scene comprises at least one of the following scenes: urban street roads, expressways, mountain roads, country roads; the vehicle travel data includes at least one of: the system comprises a steering lamp state, a maximum vehicle speed, an average vehicle speed, a maximum acceleration, an average acceleration, the number of steering wheel corner changes, the maximum steering wheel corner, the number of lane changes, the number of accelerations, the number of rapid turns, the number of overspeed, the driving duration and the driving mileage.
During driving, the steering wheel is turned when the driving direction needs to change, which produces a steering angle; in an alternative embodiment, the number of steering wheel angle changes may be the number of times the steering wheel angle changes during driving.
Through the above steps, multi-scene feature construction can be realized, making proficiency evaluation of the driver more accurate and comprehensive. Considering that the same driving behavior has different influences on proficiency evaluation in different driving scenes, the method starts from scenes such as expressway, rural road, urban street, mountain road and parking, designs and screens different driving features highly correlated with driving proficiency, and constructs a multi-dimensional evaluation system. Moreover, the application of both supervised and unsupervised learning algorithms widens the range of application scenes suitable for proficiency evaluation: label data of driving proficiency can optionally be obtained through channels such as questionnaires, accident counts and maintenance counts, and driving proficiency evaluation is output by combining the multi-scene features with a supervised learning algorithm such as logistic regression; if label data cannot be obtained due to data limitations, driving proficiency evaluation can be output with the constructed multi-dimensional driving-capability evaluation system, combining algorithms such as AHP & EW in an unsupervised manner.
Example 2
According to an embodiment of the present invention, a vehicle control device is further provided, where the device may execute the vehicle control method in the foregoing embodiment, and a specific implementation manner and a preferred application scenario are the same as those in the foregoing embodiment, and are not described herein again.
Fig. 5 is a schematic diagram of a vehicle control apparatus according to an embodiment of the present invention, as shown in fig. 5, the apparatus including: an obtaining module 502 for obtaining vehicle driving data of a target vehicle, wherein the target vehicle is driven by a target object; the identification module 504 is configured to perform scene identification on the vehicle driving data to obtain a target scene of the vehicle driving data; an extraction module 506, configured to extract a target driving feature of the vehicle driving data based on the target scene; and the determining module 508 is used for determining the driving proficiency of the target object in the target scene based on the target running characteristics.
Optionally, the identifying module 504 includes: the vehicle driving data preprocessing unit is used for preprocessing vehicle driving data to obtain a preprocessing result, wherein the preprocessing includes at least one of the following steps: filling null values, replacing abnormal values, sequencing time and removing outliers; and the first identification unit is used for identifying the vehicle driving data based on the preprocessing result to obtain a target scene of the vehicle driving data.
Optionally, the first identification unit comprises: a first determining subunit, configured to determine at least one driving cycle of the target vehicle based on the preprocessing result and the vehicle driving data, wherein the driving cycle is used for representing a driving range from ignition to flameout of the target vehicle; a second determining subunit for determining at least one target trip of the target vehicle based on a preset time interval and at least one driving cycle; and the identification subunit is used for carrying out scene identification on at least one target trip to obtain a trip scene corresponding to each target trip.
Optionally, the identification subunit is further configured to extract a target driving feature corresponding to each target trip in the vehicle driving data based on the trip scenario.
Optionally, the determining module 508 further includes: a second recognition unit, configured to recognize the target scene and the target driving characteristics by using the first proficiency model, and determine the driving proficiency of the target object in the target scene;
wherein the first proficiency model is trained from first training data, the first training data comprising: a first sample driving characteristic, a first sample scene and sample label information, wherein the first sample scene is a scene corresponding to the first sample driving characteristic, and the sample label information is used for describing the driving proficiency of the target object in the sample scene.
Optionally, the determining module 508 further includes:
a third recognition unit, configured to recognize the target scene and the target driving characteristics by using the second proficiency model, and determine the driving proficiency of the target object in the target scene;
wherein the second proficiency model is trained from second training data, the second training data comprising: a second sample driving characteristic, a second sample scene and sample weight values, wherein the second sample scene is a scene corresponding to the second sample driving characteristic, and the sample weight values are used for describing the weight values of the second sample scene among all sample scenes.
Optionally, the target scene comprises at least one of: the system comprises a starting scene, a driving scene and a parking and warehousing scene, wherein the driving scene comprises at least one of the following scenes: urban street roads, expressways, mountain roads, country roads; the vehicle travel data includes at least one of: the system comprises a steering lamp state, a maximum vehicle speed, an average vehicle speed, a maximum acceleration, an average acceleration, the number of steering wheel corner changes, the maximum steering wheel corner, the number of lane changes, the number of accelerations, the number of rapid turns, the number of overspeed, the driving duration and the driving mileage.
Example 3
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including: one or more processors; storage means for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors are caused to execute the vehicle control method in embodiment 1 described above.
Example 4
According to another aspect of the embodiments of the present invention, there is also provided a nonvolatile storage medium including a stored program. Wherein the program is run to execute the vehicle control method in the above embodiment 1.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk, and various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (10)

1. A control method of a target vehicle, characterized by comprising:
acquiring vehicle driving data of a target vehicle, wherein the target vehicle is driven by a target object;
carrying out scene recognition on the vehicle driving data to obtain a target scene of the vehicle driving data;
extracting target driving characteristics of the vehicle driving data based on the target scene;
determining the driving proficiency of the target object in the target scene based on the target driving characteristics.
2. The method of claim 1, wherein performing scene recognition on the vehicle driving data to obtain a target scene of the vehicle driving data comprises:
preprocessing the vehicle driving data to obtain a preprocessing result, wherein the preprocessing includes at least one of the following steps: filling null values, replacing abnormal values, sequencing time sequences and removing outliers;
and identifying the vehicle driving data based on the preprocessing result to obtain a target scene of the vehicle driving data.
3. The method of claim 2, wherein the target scene comprises at least one travel scene, and the identifying the vehicle driving data based on the preprocessing result to obtain the target scene of the vehicle driving data comprises:
determining at least one driving cycle of the target vehicle based on the preprocessing result and the vehicle driving data, wherein the driving cycle is used for representing a driving range from ignition to flameout of the target vehicle;
determining at least one target trip of the target vehicle based on a preset time interval and the at least one driving cycle;
and carrying out scene recognition on the at least one target trip to obtain a trip scene corresponding to each target trip.
4. The method of claim 3, wherein extracting the target driving characteristics of the vehicle driving data based on the target scene comprises:
and extracting target running characteristics corresponding to each target running in the vehicle running data based on the running scene.
5. The method of claim 1, wherein determining the driving proficiency of the target object at the target scene based on the target driving characteristics comprises:
identifying the target scene and the target driving characteristics by using a first proficiency model, and determining the driving proficiency of the target object in the target scene;
wherein the first proficiency model is trained from first training data, the first training data comprising: a first sample driving characteristic, a first sample scene and sample label information, wherein the first sample scene is a scene corresponding to the first sample driving characteristic, and the sample label information is used for describing the driving proficiency of the target object in the sample scene.
6. The method of claim 1, wherein determining the driving proficiency of the target object at the target scene based on the target driving characteristics comprises:
identifying the target scene and the target driving characteristics by using a second proficiency model, and determining the driving proficiency of the target object in the target scene;
wherein the second proficiency model is trained from second training data, the second training data comprising: a second sample driving characteristic, a second sample scene and sample weight values, wherein the second sample scene is a scene corresponding to the second sample driving characteristic, and the sample weight values are used for describing the weight values of the second sample scene among all sample scenes.
7. The method of claim 1, wherein the target scene comprises at least one of: the method comprises the following steps of starting scene, driving scene and parking and warehousing scene, wherein the driving scene comprises at least one of the following scenes: urban street roads, expressways, mountain roads, country roads; the vehicle travel data includes at least one of: the system comprises a steering lamp state, a maximum vehicle speed, an average vehicle speed, a maximum acceleration, an average acceleration, the number of steering wheel corner changes, the maximum steering wheel corner, the number of lane changes, the number of accelerations, the number of rapid turns, the number of overspeed, the driving duration and the driving mileage.
8. A control device of a target vehicle, characterized by comprising:
an acquisition module for acquiring vehicle travel data of a target vehicle, wherein the target vehicle is driven by a target object;
the recognition module is used for carrying out scene recognition on the vehicle driving data to obtain a target scene of the vehicle driving data;
the extraction module is used for extracting target driving characteristics of the vehicle driving data based on the target scene;
and the determining module is used for determining the driving proficiency of the target object in the target scene based on the target running characteristics.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, cause the one or more processors to perform the control method of the target vehicle of any one of claims 1-7.
10. A non-volatile storage medium, characterized by comprising a stored program, wherein when the program runs, a processor of the device on which the storage medium is located executes the control method of the target vehicle according to any one of claims 1-7.
CN202210462846.1A 2022-04-28 2022-04-28 Vehicle control method and device and electronic equipment Pending CN114802264A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210462846.1A CN114802264A (en) 2022-04-28 2022-04-28 Vehicle control method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210462846.1A CN114802264A (en) 2022-04-28 2022-04-28 Vehicle control method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114802264A true CN114802264A (en) 2022-07-29

Family

ID=82510072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210462846.1A Pending CN114802264A (en) 2022-04-28 2022-04-28 Vehicle control method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114802264A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115862333A (en) * 2022-12-07 2023-03-28 东南大学 Expressway vehicle-road cooperative scene and function division method considering information flow characteristics
CN115862333B (en) * 2022-12-07 2023-11-21 东南大学 Expressway vehicle-road cooperative scene and function division method considering information flow characteristics

Similar Documents

Publication Publication Date Title
US20210197851A1 (en) Method for building virtual scenario library for autonomous vehicle
CN107492251B (en) Driver identity recognition and driving state monitoring method based on machine learning and deep learning
Fan et al. A closer look at Faster R-CNN for vehicle detection
CN113609016B (en) Method, device, equipment and medium for constructing automatic driving test scene of vehicle
CN113155173B (en) Perception performance evaluation method and device, electronic device and storage medium
CN110188482B (en) Test scene creating method and device based on intelligent driving
CN114493191B (en) Driving behavior modeling analysis method based on network about vehicle data
JP6511982B2 (en) Driving operation discrimination device
Li et al. Cluster naturalistic driving encounters using deep unsupervised learning
Xue et al. A context-aware framework for risky driving behavior evaluation based on trajectory data
Sikirić et al. Image representations on a budget: Traffic scene classification in a restricted bandwidth scenario
CN113065902A (en) Data processing-based cost setting method and device and computer equipment
CN112466118A (en) Vehicle driving behavior recognition method, system, electronic device and storage medium
CN114802264A (en) Vehicle control method and device and electronic equipment
CN115797403A (en) Traffic accident prediction method and device, storage medium and electronic device
CN106777350B (en) Method and device for searching pictures with pictures based on bayonet data
CN112559968A (en) Driving style representation learning method based on multi-situation data
CN116753938A (en) Vehicle test scene generation method, device, storage medium and equipment
CN114822044B (en) Driving safety early warning method and device based on tunnel
CN115129886A (en) Driving scene recognition method and device and vehicle
CN112257869A (en) Fake-licensed car analysis method and system based on random forest and computer medium
CN116997890A (en) Generating an unknown unsafe scenario, improving an automated vehicle, and a computer system
Wang et al. Driver modeling based on vehicular sensing data
CN114999134B (en) Driving behavior early warning method, device and system
Meng et al. Vehicle action prediction using artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination