CN116486604A - Scene complexity evaluation method and system - Google Patents

Scene complexity evaluation method and system

Info

Publication number
CN116486604A
CN116486604A (application number CN202310151080.XA)
Authority
CN
China
Prior art keywords
scene
difficulty
typical
vehicle
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310151080.XA
Other languages
Chinese (zh)
Inventor
谭墍元
冯岩
郭伟伟
薛晴婉
胡钰琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China University of Technology
Original Assignee
North China University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China University of Technology filed Critical North China University of Technology
Priority to CN202310151080.XA priority Critical patent/CN116486604A/en
Publication of CN116486604A publication Critical patent/CN116486604A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing
    • G08G1/0133Traffic data processing for classifying traffic situation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Chemical & Material Sciences (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a scene complexity evaluation method and system, comprising: inputting a parameter set describing a scene to be evaluated into a scene complexity evaluation model, which outputs a scene complexity evaluation result. Construction of the scene complexity evaluation model comprises: selecting a plurality of typical scenes; acquiring a parameter set describing each typical scene; acquiring driving data corresponding to each typical scene and deriving a difficulty label for the scene from the driving data; and, taking the parameter sets of the typical scenes as input and their difficulty labels as output, constructing the scene complexity evaluation model with an SVM or decision tree algorithm. By analyzing post-hoc driving characteristics, using driving difficulty as the representation of scene complexity, taking the difficulty label as the model output and the quantified scene constituent elements as the a-priori input, the method makes complex traffic scenes measurable.

Description

Scene complexity evaluation method and system
Technical Field
The invention belongs to the technical field of traffic, and particularly relates to a scene complexity evaluation method and system.
Background
Quantifying scene complexity helps reduce the road traffic environment from a complex state to a simpler, tractable one, and is of great significance for research on the driving environment.
Existing research on traffic scene complexity mainly analyzes either environmental factors or the influence weights of the elements in a scene. In the first approach, the complexity of environmental factors is computed, or scored by expert voting, and the complex factors in the environment are summed under subjective weights to obtain a complexity assessment. In the second, the influence weight of each scene element is quantified, the complexity index of each constituent element affecting traffic complexity is analyzed, and a complexity grade for the whole scene is obtained, yielding a comprehensive quantitative evaluation of scene complexity.
However, these methods mostly rest on expert scoring, incorporate considerable subjectivity, and therefore lack objectivity.
Disclosure of Invention
To address these problems, the invention provides a scene complexity evaluation method and system that analyze post-hoc driving characteristics, measure complex traffic scenes by using driving difficulty as the representation of scene complexity, and construct a model that evaluates scene complexity on the basis of driving difficulty.
In one aspect, the scene complexity evaluation method provided by the invention comprises: inputting a parameter set describing a scene to be evaluated into a scene complexity evaluation model, which outputs a scene complexity evaluation result, wherein construction of the scene complexity evaluation model comprises: selecting a plurality of typical scenes; acquiring a parameter set describing each typical scene; acquiring driving data corresponding to each typical scene, and deriving a difficulty label for the scene from the driving data; and taking the parameter sets of the typical scenes as input and the difficulty labels as output, constructing the scene complexity evaluation model with an SVM or decision tree algorithm.
Preferably, the parameter set includes: lane width, lane length, lane radius of curvature.
Preferably, the typical scenes cover the following road types: straight roads, single-turn roads, and continuous-turn roads.
Preferably, the driving data corresponding to a typical scene are the data collected while driving in that scene.
Preferably, the driving data is driving data under a specified task.
Preferably, based on the driving data, a difficulty tag of the typical scene is obtained, and the specific method comprises the following steps: based on the driving data, obtaining a characterization feature for characterizing driving difficulty; calculating a difficulty characterization value of the typical scene based on the characterization features; and clustering the obtained difficulty characterization values of the plurality of typical scenes by adopting a K-means method, and distributing difficulty labels for the typical scenes according to a clustering result.
Preferably, the characterization features are features related to the frequency of vehicle collisions, the severity of vehicle collisions, and the stability of the vehicle.
Preferably, the characterization features include: the vehicle collision frequency, the I-level, II-level, III-level and IV-level collision counts, and the I-level, II-level, III-level and IV-level stabilization counts. The vehicle collision frequency is the frequency with which the vehicle collides. The I-level collision count is the number of times the vehicle decelerates after a collision; the II-level collision count, the number of times the vehicle stops after a collision; the III-level collision count, the number of times the vehicle spins in place less than one full turn after a collision; the IV-level collision count, the number of times the vehicle spins in place one full turn or more after a collision. The I-level stabilization count is the number of times the vehicle remains stable without colliding; the II-level stabilization count, the number of times the vehicle, without colliding, returns to stable running after shaking; the III-level stabilization count, the number of times the vehicle, without colliding, spins less than one full turn; the IV-level stabilization count, the number of times the vehicle, without colliding, spins one full turn or more.
Preferably, obtaining the difficulty label of the typical scene from the driving data further comprises: obtaining, from the driving data, the average speed corresponding to the typical scene; forming an input set from the characterization features and the average speed, taking the difficulty labels assigned from the clustering results as output, and performing supervised learning with a BP (back-propagation) neural network to construct a difficulty label output model; training the difficulty label output model until it converges; and obtaining the difficulty label true value of the typical scene from the trained difficulty label output model.
Preferably, when the scene complexity evaluation model is constructed, a difficulty tag true value of the typical scene is taken as output.
In another aspect, the present invention provides a scene complexity evaluation system, including: an evaluation unit configured to: inputting a parameter set for describing a scene to be evaluated into a scene complexity evaluation model to output a scene complexity evaluation result; a model construction unit configured to: selecting a typical scene, and acquiring a parameter set for describing the typical scene; acquiring driving data corresponding to the typical scene, and acquiring a difficulty tag based on the driving data; and taking the parameter set of the typical scene as input, taking the difficulty label as output, and constructing a scene complexity evaluation model through an SVM or decision tree algorithm.
Compared with the prior art, the invention has the following beneficial effects:
(1) The method analyzes post-hoc driving characteristics, uses driving difficulty as the representation of scene complexity, takes the difficulty label as the model output and the scene constituent elements as the a-priori input, and thereby constructs a scene complexity evaluation model that makes complex traffic scenes measurable.
(2) The evaluation method is strongly objective and reduces the influence of subjective factors. By inputting a scene parameter set (i.e., the parameter set obtained by quantifying the scene's constituent elements) into the model, the difficulty label of the current driving task is output directly, from which the complexity of the driving conditions and, in turn, the safety of the traffic environment can be judged. This offers a new method for driving-environment design, traffic safety early warning, traffic control, and related applications.
(3) In different scenes, the tasks assigned during driving (i.e., the goals to be achieved while driving) differ and affect the evaluation of scene complexity to some extent. The driving data of the invention are preferably collected under a specified task, so the influence of task requirements is taken into account when evaluating scene complexity.
(4) When the difficulty characterization value is calculated, the selected characterization features are outcome-type data, so the difficulty characterization value computed from them represents driving difficulty more directly.
(5) Speed while driving in a scene is an important factor influencing driving difficulty. When the BP network performs supervised learning, the average speed of the scene is considered in addition to the features characterizing driving difficulty, so a difficulty label true value that represents scene difficulty more accurately can be obtained. Moreover, the difficulty label true value can then be obtained directly from the characterization features and the average speed, without redundant computation, which is convenient and fast.
Drawings
FIG. 1 is a schematic view of the typical scenes selected in Embodiment 1 of the present invention;
FIG. 2 compares the speed data before and after preprocessing in Embodiment 1;
FIG. 3 compares the predicted and expected values on the BP neural network test set in Embodiment 1;
FIG. 4 shows the performance of the model constructed by the SVM algorithm in Embodiment 1;
FIG. 5 shows the performance of the model constructed by the decision tree algorithm in Embodiment 1.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on these embodiments without creative effort fall within the protection scope of the invention.
Embodiment 1
The embodiment provides a scene complexity evaluation method, which comprises the following steps:
the parameter set for describing the scene to be evaluated is input into the scene complexity evaluation model to output a scene complexity evaluation result.
The construction of the scene complexity evaluation model comprises the following steps:
(1) Selecting a plurality of typical scenes; a set of parameters describing the representative scene is obtained.
The following road types are considered when selecting the typical scenes: straight roads, single-turn roads, and continuous-turn roads, so as to simulate the driving scenes of the various road sections that may appear in actual driving. In this embodiment, 3 typical game tracks from the Project CARS racing game platform were selected and divided into 13 typical scenes, as shown in FIG. 1.
The parameter set is obtained by quantifying the constituent elements that describe the scene. A driving scene in an actual driving task has many interrelated components, and its complexity is the combined effect of all scene factors. The lane length in a scene influences the driving time; a lane that is too narrow increases the driver's psychological stress, and when more participants are present in the scene, the risk of collisions between vehicles rises; a small lane radius of curvature increases the difficulty of turning maneuvers and poses a serious hidden danger to safe driving; related statistics show that accidents on curves account for more than 60% of all traffic accidents. It follows that lane length, lane width and lane radius of curvature are basic and important parameters for characterizing a scene.
Based on this, in a preferred scheme of this embodiment, the parameter set includes: lane width, lane length, and lane radius of curvature. The lane length is obtained with an edge detection algorithm from image recognition: scene pixel points are extracted by edge detection and scaled using the known actual length of the given track in the game scene. The lane width is taken as the average width of the lane. The lane radius of curvature is obtained through image denoising, image binarization, Hough-transform circle detection, morphological thinning of the road, and similar operations; if the Hough-transform circle detection yields several radii of curvature, their average is taken. The parameter sets of the 13 typical scenes of this embodiment are listed in Table 1:
Table 1 Parameter sets of the typical scenes
(2) And obtaining driving data corresponding to the typical scene, and obtaining a difficulty tag of the typical scene based on the driving data.
The driving data corresponding to a typical scene are the data recorded while driving in that scene. In this embodiment, driving tasks were simulated on the Project CARS racing game platform with a gaming steering wheel; the testers' driving data were collected during driving, and the videos of the driving process were exported.
As a preferred scheme in this embodiment, considering that within the same scene different task requirements may influence the judgment of driving difficulty, the driving data are collected under a specified task. In this embodiment, the specified task is one of the following two: first, finish as quickly as possible on the premise of ensuring safety and reducing collisions (i.e., safety first; recorded as the safe driving task); second, ensure safety as much as possible on the premise of striving for a leading ranking (i.e., speed first; recorded as the racing driving task).
Based on the driving data, obtaining a difficulty tag of the typical scene, which comprises the following specific steps:
s1, obtaining a characterization feature for characterizing driving difficulty based on the driving data;
s2, calculating a difficulty characterization value of the typical scene based on the characterization features;
s3, clustering the obtained difficulty characterization values of the plurality of typical scenes by adopting a K-means method, and distributing difficulty labels for the typical scenes according to a clustering result.
Further, in order to more conveniently and accurately characterize the driving difficulty of the typical scene, the method further comprises the following steps:
s4, obtaining the average speed corresponding to the typical scene based on the driving data;
s5, based on the characterization features and the average speed, an input set is obtained, difficulty labels distributed according to clustering results are used as output, BP neural network is adopted for supervised learning, and a difficulty label output model is constructed;
s6, training the difficulty label output model until the model converges, and finishing training;
s7, obtaining a difficulty label true value of the typical scene based on the trained difficulty label output model.
In S1, the characterization features are features related to the vehicle collision frequency, the collision severity, and the vehicle's stability. In this embodiment, the characterization features include: the vehicle collision frequency, the I- to IV-level collision counts, and the I- to IV-level stabilization counts. The vehicle collision frequency is the frequency with which the vehicle collides, i.e., the ratio of the total number of collisions to the driving time. The I-level collision count is the number of times the vehicle decelerates after a collision; the II-level collision count, the number of times the vehicle stops after a collision; the III-level collision count, the number of times the vehicle spins in place less than one full turn after a collision; the IV-level collision count, the number of times the vehicle spins in place one full turn or more after a collision. The I-level stabilization count is the number of times the vehicle remains stable without colliding; the II-level stabilization count, the number of times the vehicle, without colliding, returns to stable running after shaking; the III-level stabilization count, the number of times the vehicle, without colliding, spins less than one full turn; the IV-level stabilization count, the number of times the vehicle, without colliding, spins one full turn or more.
Among these characterization features, the event counts concerning collision severity and vehicle stability are obtained from the exported videos: each event is marked according to its circumstances and counted. Speed-related data are obtained by image recognition: video frames are extracted, the speed indicator region is cropped, and the value is read with an OCR algorithm. To prevent the data from distorting subsequent calculations and model results, the manually marked event records are preprocessed: lapped-run records are removed and the remainder integrated. The video-recognition data are also preprocessed: erroneous entries such as non-numeric values and garbled characters are removed, and mis-recognized numeric values are smoothed; for example, FIG. 2 compares the speed data before and after preprocessing. Test samples with excessive missing data are likewise removed.
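The speed-data preprocessing described above can be sketched as follows. This is a minimal illustration, not the patent's code: the 3-sample median window and the 30 km/h jump threshold are assumptions standing in for whichever smoothing rule the embodiment actually used.

```python
import numpy as np

def clean_speeds(raw):
    """Drop non-numeric OCR output, then median-filter isolated mis-reads."""
    vals = []
    for tok in raw:
        try:
            vals.append(float(tok))
        except (TypeError, ValueError):
            continue  # discard garbled / non-digit OCR results
    v = np.array(vals)
    smoothed = v.copy()
    # Replace points that jump implausibly from their neighborhood
    # (hypothetical 3-sample median filter with an assumed threshold).
    for i in range(1, len(v) - 1):
        window = np.median(v[i - 1:i + 2])
        if abs(v[i] - window) > 30:   # assumed outlier threshold, km/h
            smoothed[i] = window
    return smoothed

# Example: "x#" is a garbled OCR token, 810 a mis-read of 84 km/h.
cleaned = clean_speeds(["80", "82", "x#", "810", "84", None, "85"])
```

The garbled token and the `None` entry are dropped outright, while the implausible 810 km/h reading is pulled back to its neighborhood median rather than discarded.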
In S2, the characterization features are outcome-type data, i.e., post-hoc driving features, and a difficulty characterization value computed from them represents driving difficulty more directly. In this computation, the feature weights are determined with the entropy weight method. Note that, to prevent individual differences among testers from undermining the objectivity of the scene difficulty, several testers perform the driving simulation in each scene; one set of characterization features is obtained from each tester's driving data in the scene, and one difficulty characterization value from each set of features. The difficulty characterization value of the typical scene obtained in S2 is the average of the difficulty characterization values of all testers in that scene.
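The entropy-weight computation of S2 can be sketched as follows. This is a generic implementation of the entropy weight method in its usual formulation; the patent does not give its exact formulas, so the normalization choices here are assumptions.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method: rows = samples (testers), columns = features."""
    P = X / X.sum(axis=0)                    # column-wise proportions
    n = X.shape[0]
    # Entropy of each feature; 0 * log(0) is treated as 0.
    logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(n)
    d = 1.0 - e                              # degree of divergence
    return d / d.sum()                       # weights, summing to 1

def difficulty_value(X, w):
    """Weighted sum of min-max scaled features: one difficulty value per row."""
    rng = X.max(axis=0) - X.min(axis=0)
    Xn = (X - X.min(axis=0)) / np.where(rng == 0, 1.0, rng)
    return Xn @ w
```

A feature that is identical across all testers carries no information and receives (essentially) zero weight, which is exactly the behavior the entropy weight method is chosen for.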
In S3, the difficulty labels in this embodiment have 4 levels; that is, driving difficulty is divided into 4 grades (marked 1, 2, 3 and 4 from easy to hard). With too many grades, the differences between grades become insignificant; with too few, the scene difficulties are not clearly separated. To verify the effectiveness of the feature selection in this embodiment, the difficulty labels assigned after clustering are compared with subjective cognitive difficulty ratings, as shown in Tables 2 and 3.
TABLE 2 safe driving task difficulty tag comparison
TABLE 3 difficulty tag contrast for racing drive tasks
Scene number    Subjective difficulty rating    Clustering result
1               3                               3
2               1                               1
3               4                               3
4               2                               1
5               1                               1
6               3                               3
7               4                               3
8               4                               2
9               2                               2
10              2                               1
11              2                               3
12              4                               4
13              3                               2
As the tables show, the difficulty labels obtained in this embodiment are largely consistent with subjective human perception.
In S4-S7, because the speed driven in a scene strongly influences driving difficulty, the average speed is also used in constructing the model. When the difficulty label output model is built in S5, the difficulty labels assigned from the clustering results serve as the model output, and the input set is derived from the characterization features and the average speed. Specifically, since each scene has more than one tester, the model input derived from a characterization feature may be either the sum of that feature over all testers in the scene or its average over all testers. For example, if a scene has n testers whose "I-level collision count" features are x1, x2, ..., xn in turn, then the model input derived from this feature in S5 may be the sum (x1+x2+...+xn) or the average (x1+x2+...+xn)/n. Note that if the sum over all testers is used as the model input, every scene must have the same number of testers. Similarly, the model input derived from the average speed may be the mean of the average speeds of all testers in the scene.
An input set is formed from the obtained characterization features and average speeds, the difficulty labels assigned from the clustering results are taken as output, and the model is constructed to yield difficulty label true values; with this model, the scene difficulty label true value is obtained directly from the characterization features and the average speed, without redundant computation. The errors between the predicted and expected values on the BP neural network test set of this embodiment are shown in FIG. 3, and the model error results in Table 4.
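The S5-S7 difficulty-label output model can be sketched as follows, with scikit-learn's MLPRegressor standing in for the BP neural network. The data, network architecture and hyperparameters are illustrative assumptions, not the patent's; the synthetic targets merely stand in for the cluster-assigned labels.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical input set: 9 collision/stability features plus average speed
# per sample; targets stand in for cluster-assigned difficulty labels 1..4.
X = rng.random((60, 10))
y = np.clip((X[:, 0] * 3 + 1).round(), 1, 4)

scaler = StandardScaler().fit(X)
bp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
bp.fit(scaler.transform(X), y)

# The trained model maps features + average speed directly to a difficulty
# label "true value", with no further computation needed.
true_values = bp.predict(scaler.transform(X))
```

A single hidden layer is used here only for simplicity; any BP-trained feed-forward network fits the description.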
TABLE 4 model error results
Mean absolute error (MAE) 7.801e-06
Mean squared error (MSE) 6.6971e-06
Root mean squared error (RMSE) 0.0025879
Coefficient of determination (R²) 1
(3) The parameter sets of the typical scenes are taken as input and the difficulty labels as output (the difficulty label here may be the label assigned after K-means clustering, or the difficulty label true value obtained from the difficulty label output model; in this embodiment, the true value is preferred), and the scene complexity evaluation model is constructed with an SVM or decision tree algorithm.
The performance of the model constructed with the SVM algorithm is shown in FIG. 4; the SVM model's prediction accuracy is 82.93%. The performance of the model constructed with the decision tree algorithm is shown in FIG. 5; the decision tree model's prediction accuracy is 85%. Both the SVM classifier and the decision tree model thus accomplish the classification task well and achieve good performance.
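The final model-construction step can be sketched as follows, with scikit-learn classifiers and synthetic scene parameter sets. The data-generating rule (labels driven by curvature alone) is invented purely so the classifiers have something learnable, and the 82.93% / 85% accuracies reported above are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical scene parameter sets: [lane width, lane length, radius of
# curvature], min-max scaled to [0, 1).
X = rng.random((120, 3))
# Synthetic difficulty labels 1..4: smaller radius of curvature -> harder.
y = np.digitize(1.0 - X[:, 2], [0.25, 0.5, 0.75]) + 1

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf").fit(Xtr, ytr)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xtr, ytr)
```

Given a new scene's parameter set, either fitted model returns its difficulty label directly, which is the evaluation step of the method.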
The embodiment also provides a scene complexity evaluation system, which comprises:
an evaluation unit configured to: the parameter set for describing the scene to be evaluated is input into the scene complexity evaluation model to output a scene complexity evaluation result.
A model construction unit configured to:
selecting a plurality of typical scenes;
acquiring a parameter set for describing the typical scene;
acquiring driving data corresponding to the typical scene, and acquiring a difficulty tag of the typical scene based on the driving data;
and taking the parameter set of the typical scene as input, taking the difficulty label of the typical scene as output, and constructing a scene complexity evaluation model through an SVM or decision tree algorithm.
The above embodiments only illustrate the technical solutions of the present invention and do not limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or substitute equivalents for some or all of their technical features; such modifications and substitutions do not depart from the spirit of the invention and are intended to fall within the scope of the claims and description.

Claims (10)

1. A scene complexity assessment method, comprising the steps of:
inputting a parameter set for describing a scene to be evaluated into a scene complexity evaluation model for outputting a scene complexity evaluation result, wherein the construction of the scene complexity evaluation model comprises the following steps:
selecting a plurality of typical scenes;
acquiring a parameter set for describing the typical scene;
acquiring driving data corresponding to the typical scene, and acquiring a difficulty tag of the typical scene based on the driving data;
and taking the parameter set of the typical scene as input, taking the difficulty label of the typical scene as output, and constructing a scene complexity evaluation model through an SVM or decision tree algorithm.
2. The scene complexity assessment method according to claim 1, wherein the parameter set comprises: lane width, lane length, lane radius of curvature.
3. The scene complexity estimation method of claim 1, wherein the typical scene is selected taking into account the following factors: straight road, single turn road, continuous turn road.
4. The scene complexity evaluation method according to claim 1, wherein the driving data is driving data under a specified task.
5. The scene complexity evaluation method according to claim 1, wherein the difficulty tag of the typical scene is obtained based on the driving data, specifically comprising:
based on the driving data, obtaining a characterization feature for characterizing driving difficulty;
calculating a difficulty characterization value of the typical scene based on the characterization features;
and clustering the obtained difficulty characterization values of the plurality of typical scenes by adopting a K-means method, and distributing difficulty labels for the typical scenes according to a clustering result.
6. The scene complexity estimation method according to claim 5, wherein the characterization feature is a feature related to a frequency of vehicle collisions, a severity of vehicle collisions, a stability of a vehicle.
7. The scene complexity evaluation method according to claim 6, wherein the characterization features comprise: a vehicle collision count, a level-I collision count, a level-II collision count, a level-III collision count, a level-IV collision count, a level-I stabilization count, a level-II stabilization count, a level-III stabilization count, and a level-IV stabilization count;
wherein the vehicle collision count is the number of times the vehicle collides,
the level-I collision count is the number of times the vehicle decelerates after a collision,
the level-II collision count is the number of times the vehicle stops after a collision,
the level-III collision count is the number of times the vehicle spins in place by less than one full turn after a collision,
the level-IV collision count is the number of times the vehicle spins in place by more than one full turn after a collision,
the level-I stabilization count is the number of times the vehicle remains stable without a collision,
the level-II stabilization count is the number of times the vehicle runs stably after swaying,
the level-III stabilization count is the number of times the vehicle spins by less than one full turn without a collision,
and the level-IV stabilization count is the number of times the vehicle spins by more than one full turn without a collision.
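Claim 7 defines the per-level counts but not how they combine into the single difficulty characterization value of claim 5. One plausible combination is a severity-weighted sum normalized by the number of test runs; the weights below are purely hypothetical and are not taken from the patent:

```python
# Fold the claim-7 per-level counts into one scalar "difficulty
# characterization value" via a severity-weighted sum. The weights are
# hypothetical assumptions; the claim defines the counts, not the formula.

COLLISION_WEIGHTS = [1.0, 2.0, 3.0, 4.0]  # levels I..IV: worse outcome = heavier
STABLE_WEIGHTS = [0.0, 0.5, 1.0, 1.5]     # levels I..IV: less stable = heavier

def difficulty_value(collisions, stabilizations, runs):
    """collisions / stabilizations: per-level counts (I..IV); runs: total test runs."""
    score = sum(w * c for w, c in zip(COLLISION_WEIGHTS, collisions))
    score += sum(w * s for w, s in zip(STABLE_WEIGHTS, stabilizations))
    return score / runs  # normalize so scenes with different run counts compare

# Hypothetical scene: 2 level-I and 1 level-II collisions, mostly stable runs.
v = difficulty_value(collisions=[2, 1, 0, 0], stabilizations=[5, 2, 0, 0], runs=10)
```

Normalizing by the run count keeps the value comparable across typical scenes that were driven a different number of times.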
8. The scene complexity evaluation method according to claim 5, wherein obtaining the difficulty label of the typical scene based on the driving data further comprises:
obtaining, based on the driving data, an average speed corresponding to the typical scene;
constructing an input set from the characterization features and the average speed, taking the difficulty labels assigned according to the clustering result as output, and performing supervised learning with a BP neural network to construct a difficulty label output model;
training the difficulty label output model until it converges;
and obtaining a difficulty label true value of the typical scene from the trained difficulty label output model.
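The difficulty label output model of claim 8 can be sketched as a BP (backpropagation) network with one hidden layer. The toy inputs below (a collision-rate feature plus a normalized average speed) and their labels are hypothetical; the patent does not specify the network size or training data:

```python
import numpy as np

# Minimal BP network with one hidden layer, as a sketch of the claim-8
# difficulty label output model. Inputs and labels are toy, hypothetical data.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, y, hidden=8, lr=0.5, epochs=3000):
    """X: (n, d) feature matrix; y: (n, 1) binary difficulty labels."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)            # forward pass, hidden layer
        out = sigmoid(h @ W2 + b2)          # forward pass, output layer
        d_out = out - y                     # cross-entropy gradient at sigmoid output
        d_h = (d_out @ W2.T) * h * (1 - h)  # backpropagate through hidden layer
        W2 -= lr * (h.T @ d_out) / n; b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * (X.T @ d_h) / n;  b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2

def predict_bp(params, X):
    W1, b1, W2, b2 = params
    return (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)

# Toy rows: [collision_rate, normalized_avg_speed] -> difficulty label.
X = np.array([[0.0, 0.9], [0.1, 0.8], [0.8, 0.2], [0.9, 0.1]])
y = np.array([[0], [0], [1], [1]])
params = train_bp(X, y)
```

Once trained to convergence, the network's outputs for the typical scenes serve as the refined "difficulty label true values" fed to the claim-1 model.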
9. The scene complexity evaluation method according to claim 8, wherein the scene complexity evaluation model is constructed by taking the difficulty label true value of the typical scene as the output.
10. A scene complexity evaluation system, comprising:
an evaluation unit configured to: inputting a parameter set for describing a scene to be evaluated into a scene complexity evaluation model to output a scene complexity evaluation result;
a model construction unit configured to:
selecting a plurality of typical scenes;
acquiring a parameter set for describing the typical scene;
acquiring driving data corresponding to the typical scene, and obtaining a difficulty label of the typical scene based on the driving data;
and taking the parameter set of the typical scene as input and the difficulty label of the typical scene as output, constructing a scene complexity evaluation model through an SVM or decision tree algorithm.
CN202310151080.XA 2023-02-22 2023-02-22 Scene complexity evaluation method and system Pending CN116486604A (en)


Publications (1)

Publication Number Publication Date
CN116486604A true CN116486604A (en) 2023-07-25

Family

ID=87220269



Similar Documents

Publication Publication Date Title
WO2020244288A1 (en) Method and apparatus for evaluating truck driving behaviour based on gps trajectory data
CN111462488B (en) Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model
CN111242484B (en) Vehicle risk comprehensive evaluation method based on transition probability
CN109671274B (en) Highway risk automatic evaluation method based on feature construction and fusion
CN111539454A (en) Vehicle track clustering method and system based on meta-learning
CN115511836B (en) Bridge crack grade assessment method and system based on reinforcement learning algorithm
CN116168356B (en) Vehicle damage judging method based on computer vision
US11120308B2 (en) Vehicle damage detection method based on image analysis, electronic device and storage medium
CN113221759A (en) Road scattering identification method and device based on anomaly detection model
CN111861667A (en) Vehicle recommendation method and device, electronic equipment and storage medium
CN114926299A (en) Prediction method for predicting vehicle accident risk based on big data analysis
CN109849926B (en) Method and system for distinguishing whether taxi is handed to others for driving
CN116486604A (en) Scene complexity evaluation method and system
CN116778460A (en) Fatigue driving identification method based on image identification
CN114333320B (en) Vehicle driving behavior risk assessment system based on RFID
CN113192340B (en) Method, device, equipment and storage medium for identifying highway construction vehicles
CN113505955A (en) User driving behavior scoring method based on TSP system
CN113673826B (en) Driving risk assessment method and system based on individual factors of driver
CN116091254B (en) Commercial vehicle risk analysis method
CN112937592B (en) Method and system for identifying driving style based on headway
CN112308136B (en) Driving distraction detection method based on SVM-Adaboost
CN117830887A (en) Taxi passenger-carrying monitoring method, electronic equipment and computer-readable storage medium
Qin et al. Convolutional neural network-based ASIL rating method for automotive functional safety
CN116702606A (en) Driving behavior risk assessment method, system, equipment and storage medium
CN117056803A (en) Commute crowd identification method based on bus IC card swiping data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination