CN112487907A - Dangerous scene identification method and system based on graph classification - Google Patents

Dangerous scene identification method and system based on graph classification

Info

Publication number
CN112487907A
CN112487907A (application CN202011326019.7A)
Authority
CN
China
Prior art keywords
scene
vehicle
traffic
traffic scene
information
Prior art date
Legal status
Granted
Application number
CN202011326019.7A
Other languages
Chinese (zh)
Other versions
CN112487907B (en)
Inventor
吕超
李景行
张钊
陆军琰
徐优志
龚建伟
Current Assignee
SAIC Motor Corp Ltd
Beijing Institute of Technology BIT
Original Assignee
SAIC Motor Corp Ltd
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by SAIC Motor Corp Ltd, Beijing Institute of Technology BIT filed Critical SAIC Motor Corp Ltd
Priority to CN202011326019.7A priority Critical patent/CN112487907B/en
Publication of CN112487907A publication Critical patent/CN112487907A/en
Application granted granted Critical
Publication of CN112487907B publication Critical patent/CN112487907B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a dangerous scene identification method and system based on graph classification, belonging to the technical field of intelligent automobile interaction. The method collects the driver's operation information and extracts driving characteristic parameters; collects traffic scene information around the vehicle; extracts dynamic and static characteristics of the traffic scene from the collected information; represents the traffic scene, by a graph method, as an undirected graph with node labels; and identifies the danger level of the traffic scene from the generated node-labelled undirected graph. Dangerous scene recognition in an urban traffic environment is thus realized through graph classification. Dangerous-scene labels are obtained by clustering the driving operation information and the vehicle driving information, producing labels that better match the distribution of the data, so dangerous scenes are identified accurately from the driving information, the recognition accuracy for dangerous traffic scenes is improved, the recognized scenes better match the actual driving environment, and adaptability to the driving environment and driving safety are improved.

Description

Dangerous scene identification method and system based on graph classification
Technical Field
The invention belongs to the technical field of intelligent automobile interaction, and particularly relates to a dangerous scene identification method and system based on graph classification.
Background
As automobiles become increasingly intelligent, drivers expect their vehicles to understand them better and better, and to tailor service content and driving assistance to their state and needs.
A driver's inaccurate or slow recognition of a dangerous scene is one of the major causes of traffic accidents, and current dangerous-scene recognition methods have many shortcomings. Specifically, when recognizing a dangerous traffic scene, most existing methods identify the elements of the traffic environment (vehicles, non-motorized vehicles, and so on) separately and then compute the overall danger level of the scene, which leads to inaccurate recognition. Moreover, because there is no unified representation framework, different kinds of information are judged by different methods, resulting in heavy computation, slow recognition, and no way to reconcile divergent judgments.
Disclosure of Invention
In order to solve the problems in the prior art of low recognition accuracy for dangerous traffic scenes, high technical difficulty, and the lack of a unified framework for representing traffic scenes, the invention aims to provide a method and system for identifying dangerous scenes in an urban traffic environment based on graph classification.
The purpose of the invention is achieved by the following technical scheme:
The invention discloses a graph-classification-based method for identifying dangerous scenes in an urban traffic environment, comprising the following steps:
Driving characteristic parameters are extracted from the collected driver operation information, and traffic scene information around the vehicle is collected with a monocular camera and a lidar.
Dynamic and static characteristics of the traffic scene are extracted from the collected traffic scene information and represented, by a graph method, as an undirected graph with node labels. This node-labelled undirected-graph representation places complex traffic scenes in a unified framework: it simplifies the complexity of the traffic scene and extracts the key traffic scene information.
For offline training, the dangerous-scene labels of the node-labelled undirected graphs used to train the classifier are obtained by clustering the driving operation information and the vehicle driving information; this clustering procedure is defined as the dangerous-scene label generation method. Labels generated this way better match the distribution of the data, so dangerous scenes in the traffic environment are identified accurately from the driving information. The method also captures the personalized assessment of a traffic scene by a driver or an unmanned driving system, which helps in learning the personal driving style of that driver or system and in providing personalized driving assistance.
When identifying the danger level of a traffic scene, the classifier is trained on the generated node-labelled undirected graphs and dangerous-scene labels. Identifying the danger level this way converts the complex problem of dangerous-traffic-scene recognition into a graph-classification problem of lower complexity, which increases the recognition speed and accuracy and saves computing resources.
The method further comprises applying the graph-classification-based identification of dangerous urban traffic scenes to improve driving performance: dangerous-scene labels obtained by clustering the driving operation information and vehicle driving information better match the distribution of the data, dangerous scenes are identified accurately from the driving information, the recognition accuracy for dangerous traffic scenes is improved, the recognized scenes better match the actual driving environment, the personal driving style of a driver or an unmanned driving system can be learned to provide personalized driving assistance, and adaptability to the driving environment and driving safety are improved.
Furthermore, the driver's operation information is collected over the vehicle CAN bus, and the driving information of the driven vehicle is collected by sensors mounted on the vehicle, making the collected operation data more accurate.
Furthermore, collecting the traffic scene information around the vehicle with a monocular camera and a lidar specifically comprises the following:
Traffic scene image information from the driver's viewpoint is collected by a monocular camera mounted on the front window; imagery from the driver's viewpoint helps in understanding the basis on which the driver judges a scene to be dangerous. Point-cloud information of the traffic scene around the vehicle is collected by a lidar mounted on the roof.
From the obtained traffic scene images and point clouds, the traffic scene information around the vehicle is extracted by a preset multi-sensor information fusion program and a target recognition program; Kalman filtering is preferred for the fusion program and YOLOv3 for the target recognition program.
The traffic scene comprises at least one of: a scene with a vehicle ahead in the vehicle's own lane, a scene with a vehicle in the lane to its right, and a scene with a vehicle in the lane to its left.
The traffic scene information comprises bounding-box information for surrounding vehicles, lane-line information, traffic scene image information, the distances of surrounding vehicles relative to the host vehicle, and traffic scene point-cloud information.
Further, extracting the dynamic and static characteristics of the traffic scene from the collected traffic scene information specifically comprises the following:
Static characteristics: according to the extracted lane-line information, the traffic scene image is divided into a 5×4 grid of regions, and each vehicle's grid position is determined from its bounding-box information, giving the vehicle's (x, y) coordinates.
Dynamic characteristics: the speed of each surrounding vehicle relative to the host vehicle is computed from its collected relative-distance information, and the absolute speeds of all vehicles in the scene are obtained using the host vehicle's speed from the driving characteristic parameters.
Further, the static characteristics of the traffic scene are extracted as follows:
According to the extracted lane-line information, the traffic scene image is divided laterally into five regions: three lanes and two lane lines. According to the lane-line information marked on the image, with the road vanishing point as the end point, the image is also divided longitudinally into three regions, so the road surface is divided into a 5×4 grid comprising the five lateral regions, the three longitudinal regions, and the region where the host vehicle is located. Taking the bottom-left vertex of the grid as the origin, the horizontal axis as x, and the vertical axis as y gives the 5×4 grid coordinates. Each surrounding vehicle's grid cell, and hence its (x, y) coordinates, is determined from the grid cell containing the bottom edge of its bounding box; these grid positions and (x, y) coordinates are the extracted static characteristics of the traffic scene.
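A minimal sketch of the grid assignment, assuming known pixel boundaries for the five lateral regions and four longitudinal rows; the boundary values and image size below are illustrative assumptions, not values from the patent:

```python
def grid_cell(u, v, col_bounds, row_bounds):
    """Map an image point (u, v), the midpoint of a bounding box's
    bottom edge, to (x, y) coordinates on the 5x4 road grid.
    col_bounds: 6 pixel columns separating the 5 lateral regions.
    row_bounds: 5 pixel rows separating the 4 longitudinal rows,
    listed from the image bottom upward (image v grows downward,
    grid y grows away from the host vehicle)."""
    x = y = None
    for i in range(5):
        if col_bounds[i] <= u < col_bounds[i + 1]:
            x = i + 1
    for j in range(4):
        if row_bounds[j + 1] <= v < row_bounds[j]:
            y = j + 1
    return x, y

# Illustrative boundaries for an assumed 1280x720 image.
cols = [0, 300, 500, 780, 980, 1280]
rows = [720, 560, 470, 410, 380]
cell = grid_cell(640, 600, cols, rows)  # a vehicle low and centred in the image
```

The row boundaries shrink toward the vanishing point, mirroring how equal road-surface distances compress in the image with range.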
Further, to handle scenes whose numbers of lanes differ, the host vehicle's position is always fixed at coordinates (3, 1) of the 5×4 grid, and the lanes and surrounding vehicles in the traffic scene are projected onto the grid coordinates.
Further, the collected dynamic and static characteristics of the traffic scene are represented, by a graph method, as an undirected graph with node labels as follows:
The vehicles (including the host vehicle) in each collected frame of the traffic-scene information sequence are defined as nodes, and the distances between the grid cells the vehicles occupy define the edges, giving an undirected graph for each frame of the sequence. The absolute vehicle speeds are extracted from the dynamic characteristics and clustered by a preset self-supervised clustering algorithm to obtain a speed-cluster label, which is encoded together with each vehicle's (x, y) coordinates to form the node label. From the undirected graph and the node labels, a node-labelled undirected graph is obtained for each frame of the traffic-scene information sequence. Representing the collected dynamic and static characteristics this way places complex traffic scenes in a unified framework and simplifies their complexity.
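The graph construction above can be sketched as follows. The adjacency rule (an edge when two vehicles occupy front/back or left/right neighbouring cells) and the (x, y, c) label tuple follow the description; the concrete data structures are assumptions made for the sketch:

```python
def build_scene_graph(vehicles):
    """vehicles: list of (x, y, c) tuples -- grid coordinates and
    speed-cluster label for each vehicle, with the host vehicle
    fixed at (3, 1). Nodes are vehicles; an edge connects two
    vehicles occupying laterally or longitudinally adjacent cells."""
    labels = list(vehicles)  # node label = (x, y, c) encoding
    edges = set()
    for i, (xi, yi, _) in enumerate(vehicles):
        for j, (xj, yj, _) in enumerate(vehicles):
            # 4-neighbourhood: front/back or left/right grid cell
            if i < j and abs(xi - xj) + abs(yi - yj) == 1:
                edges.add((i, j))
    return labels, edges

# Host vehicle at (3, 1); one vehicle directly ahead, one in the right lane.
labels, edges = build_scene_graph([(3, 1, 0), (3, 2, 1), (4, 1, 2)])
```

Here the host vehicle (node 0) gains edges to both neighbours, while the two surrounding vehicles, diagonal to each other, stay unconnected.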
Further, the danger level of the traffic scene is identified from the generated node-labelled undirected graph as follows:
The obtained node-labelled undirected graph is classified by a preset dangerous-scene recognition classifier, and the danger level of the traffic scene is identified from the classification result given by the classifier.
Further, the preset dangerous-scene recognition classifier is established as follows:
In an offline stage before recognition, the vehicle driving characteristic parameters are extracted from the operation information and clustered by a preset self-supervised clustering algorithm; the clustering result serves as the traffic-scene danger-level label corresponding to those driving characteristic parameters;
meanwhile, the node-labelled undirected graph is obtained from the collected traffic scene information;
the dangerous-scene recognition classifier is trained in advance on the relation between the undirected graphs and the traffic-scene danger levels;
where the vehicle driving characteristic parameters include, but are not limited to: vehicle acceleration, steering wheel angle, and steering wheel angular acceleration.
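The self-supervised labelling step can be illustrated with a minimal one-dimensional clustering of a single driving parameter. The patent prefers a GMM, so the pure-Python k-means below is only a stand-in, and the sample steering-wheel angular accelerations are invented for the example:

```python
def cluster_1d(values, k=3, iters=20):
    """Minimal 1-D k-means standing in for the self-supervised
    clustering step (the patent suggests a GMM). Returns one of k
    labels per value, ordered so that label 0 = smallest centroid."""
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - centroids[i]))].append(v)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    centroids.sort()
    return [min(range(k), key=lambda i: abs(v - centroids[i])) for v in values]

# Steering-wheel angular accelerations (rad/s^2, illustrative): calm,
# moderate, and abrupt manoeuvres map to danger levels 0, 1, 2.
omegas = [0.1, 0.2, 0.15, 1.0, 1.1, 3.0, 3.2]
danger_labels = cluster_1d(omegas)
```

Because the labels come from the data's own structure rather than fixed thresholds, they adapt to each driver's style, which is the point of the dangerous-scene label generation method.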
The invention also discloses a dangerous scene recognition system based on graph classification, used to implement the above dangerous scene recognition method, comprising a data collection module, a traffic scene feature extraction module, a graph scene representation module, and a dangerous scene recognition module;
the data collection module is used to collect the driver's operation information and the traffic scene information around the vehicle;
the traffic scene feature extraction module is used to extract dynamic and static characteristics of the traffic scene from the collected traffic scene information;
the graph scene representation module is used to generate the undirected graph with node labels from the extracted dynamic and static characteristics of the traffic scene;
and the dangerous scene recognition module is used to identify the danger level of the corresponding traffic scene from the generated node-labelled undirected graph.
In the invention, the technical schemes can be combined with each other to realize more preferable combination schemes. Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Advantageous effects:
1. In the disclosed dangerous scene recognition method and system based on graph classification, driving characteristic parameters are extracted from the collected driver operation information, traffic scene information around the vehicle is collected with a monocular camera and a lidar, dynamic and static characteristics of the traffic scene are extracted from the collected information, and these characteristics are represented, by a graph method, as an undirected graph with node labels. This representation places complex traffic scenes in a unified framework, simplifies their complexity, and extracts the key traffic scene information; the dangerous-traffic-scene recognition problem is thereby converted into a graph-classification problem of lower complexity, which improves the speed and accuracy of dangerous scene recognition and saves recognition computing resources.
2. The disclosed dangerous scene recognition method and system based on graph classification provide a dangerous-scene label generation method: dangerous-scene labels are obtained by clustering the driving operation information and the vehicle driving information, yielding labels that better match the distribution of the data, so that dangerous scenes are identified accurately from the driving information and personalized driving assistance can be provided.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, wherein like reference numerals are used to designate like parts throughout.
FIG. 1 is a schematic flow chart of the dangerous scene recognition method based on graph classification according to the present invention;
FIG. 2 is a schematic diagram of the dangerous scene recognition system based on graph classification according to the present invention.
Detailed Description
For a better understanding of the objects and advantages of the present invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and examples.
Example 1:
As shown in FIG. 1, the graph-classification-based method for identifying dangerous scenes in an urban traffic environment disclosed in this embodiment is implemented as follows:
and S101, extracting driving characteristic parameters according to the collected driver operation information, and collecting traffic scene information around the vehicle by using a monocular camera and a laser radar.
Specifically, the operation information of a driver is collected through a vehicle CAN bus, and the vehicle running information of a driving vehicle is collected through a device sensor arranged on the vehicle, wherein the vehicle running information comprises the speed information, the attitude information, the current vehicle condition information, the vehicle running track information and the like of the vehicle. The method comprises the steps that a monocular camera erected on a front vehicle window is used for collecting traffic scene image information under the visual angle of a driver, and a laser radar erected on a vehicle roof is used for collecting point cloud information of traffic scenes around the vehicle; according to the obtained traffic scene image and point cloud information, extracting the traffic scene information (such as marking frame information of surrounding vehicles, lane line information, traffic scene image information, distance information of the surrounding vehicles relative to the vehicle, and traffic scene point cloud information) around the vehicle through a preset multi-sensor information fusion program and a target identification program.
In the embodiment of the invention, the traffic scene comprises at least one of a scene that a vehicle is in front in the lane of the vehicle, a scene that a vehicle is in the lane on the right side of the vehicle and a scene that a vehicle is in the lane on the left side of the vehicle.
Step S102: extract the dynamic and static characteristics of the traffic scene from the collected traffic scene information, and represent them, by a graph method, as an undirected graph with node labels. This node-labelled undirected-graph representation places complex traffic scenes in a unified framework, simplifies their complexity, and extracts the key traffic scene information.
In a specific implementation, the static characteristics are extracted as follows: according to the extracted lane-line information, the traffic scene image is divided laterally into five regions (three lanes and two lane lines), marked on each frame of image; with the road vanishing point as the end point, the image is divided longitudinally into three regions, so the road surface is divided into a 5×4 grid comprising the five lateral regions, the three longitudinal regions, and the region where the host vehicle is located. Taking the bottom-left vertex of the grid as the origin, the horizontal axis as x, and the vertical axis as y gives the 5×4 grid coordinates, and each surrounding vehicle's grid cell and (x, y) coordinates are determined from the grid cell containing the bottom edge of its bounding box. The dynamic characteristics are extracted by computing each surrounding vehicle's speed relative to the host vehicle from its collected relative distance and obtaining the absolute speeds of all vehicles in the scene from the host vehicle's speed in the driving characteristic parameters.
Step S103: represent the traffic scene as an undirected graph with node labels from the collected dynamic and static characteristics.
Specifically, the vehicles (including the host vehicle) in each collected frame of the traffic-scene information sequence are defined as nodes, and edges are defined by the distances between the grid cells the vehicles occupy: on the 5×4 grid divided from each frame's image information (bottom-left vertex as origin, horizontal axis x, vertical axis y), each vehicle is a node whose (x, y) coordinates are determined from the grid cell containing the bottom edge of its bounding box; the host vehicle, having no bounding box, is always located at (3, 1). Taking each vehicle as a centre, if another vehicle occupies the grid cell in front, behind, to the left, or to the right, the two vehicles are connected by an edge; the traffic-scene undirected graph is generated according to this definition.
The absolute vehicle speeds are extracted from the dynamic characteristics and clustered by a self-supervised clustering algorithm, such as a GMM, to obtain a speed-cluster label c, which is encoded together with each vehicle's x and y coordinates to form the node label. From the undirected graph and the node labels, the node-labelled undirected graph G(t) of each frame of the traffic-scene information sequence is obtained.
Step S104: identify the danger level of the traffic scene from the generated node-labelled undirected graph.
Specifically, the obtained node-labelled undirected graph G(t) is classified by a preset dangerous-scene recognition classifier, and the danger level of the traffic scene is identified from the classification result given by the classifier.
In this embodiment, the danger levels include, but are not limited to: no danger, mild danger, and severe danger.
The preset classifier is established as follows. In an offline stage, the driver's operation information and the vehicle driving characteristic parameters (such as steering wheel angular acceleration ω(t) and vehicle acceleration a(t)) are collected over a preset period, and the collected operation information and characteristic parameters are clustered by a preset self-supervised clustering algorithm (such as a GMM). The clustering result is defined as one of three conditions, no danger, mild danger, or severe danger, which serve as the traffic-scene danger-level labels corresponding to the operation information and driving characteristic parameters. Meanwhile, from the traffic scene information collected over the preset period, the node-labelled undirected graph G(t) of each frame is obtained as training data and marked with the corresponding traffic-scene danger-level label. A preset classification algorithm, such as a support vector machine (SVM), is then trained on the training data under the different labels to form the preset classifier.
The process of identifying the danger level of a traffic scene with the classifier is as follows: traffic scene information around the vehicle is collected with the monocular camera and lidar, the node-labelled undirected graph G(t) is extracted and input to the classifier, and the classifier outputs the danger level of the traffic scene.
The graph-classification-based method for identifying dangerous scenes in an urban traffic environment provided by this embodiment identifies the danger level of a traffic scene by actively collecting traffic scene information around the vehicle and representing the dynamic and static characteristics of the scene uniformly by a graph method. It addresses the prior art's low recognition accuracy for dangerous traffic scenes, high technical difficulty, and lack of a unified representation framework, improves the recognition accuracy, and adapts well to the environment, so the recognized dangerous traffic scenes better match the actual driving environment.
Example 2
As shown in FIG. 2, this embodiment discloses a graph-classification-based system for identifying dangerous scenes in an urban traffic environment, comprising a data collection module, a traffic scene feature extraction module, a graph scene representation module, and a dangerous scene recognition module;
the data acquisition module is used for extracting driving characteristic parameters according to the acquired driver operation information and acquiring the traffic scene information around the vehicle by using a monocular camera and a laser radar;
the data acquisition module acquires operation information of a driver through a CAN bus, and acquires vehicle running information of a driven vehicle through an equipment sensor arranged on the vehicle, wherein the vehicle running information comprises speed information, attitude information, current vehicle condition information, vehicle running track information and the like of the vehicle. The method comprises the steps that a monocular camera erected on a front vehicle window is used for collecting traffic scene image information under the visual angle of a driver, and a laser radar erected on a vehicle roof is used for collecting point cloud information of traffic scenes around the vehicle; according to the obtained traffic scene image and point cloud information, extracting the traffic scene information (such as marking frame information of surrounding vehicles, lane line information, traffic scene image information, distance information of the surrounding vehicles relative to the vehicle, and traffic scene point cloud information) around the vehicle through a preset multi-sensor information fusion program and a target identification program.
In the embodiment of the invention, the traffic scene comprises at least one of: a scene with a vehicle ahead in the host vehicle's lane, a scene with a vehicle in the lane to the right of the host vehicle, and a scene with a vehicle in the lane to the left of the host vehicle.
And the traffic scene feature extraction module is used for extracting dynamic and static features of the traffic scene according to the acquired traffic scene information.
The traffic scene feature extraction module extracts static features of the traffic scene from the acquired traffic scene information. According to the extracted lane line information, the traffic scene image is divided transversely into five regions, comprising three lanes and two lane lines, which are marked on each frame of the image. According to the lane line information marked on the image, with the road vanishing point as the end point, the image is divided longitudinally into three regions, so that the road surface is divided into a 5x4 grid. The 5x4 grid comprises the five transversely divided regions of the road surface, the three longitudinally divided regions, and the region where the host vehicle is located; taking the lower-left vertex of the grid as the origin, the horizontal axis as the x axis, and the vertical axis as the y axis yields the 5x4 grid coordinates. The grid cell and (x, y) coordinates of each surrounding vehicle are determined from the grid cell containing the bottom edge of its labeling frame. The module also extracts dynamic features of the traffic scene from the acquired traffic scene information: the speed of each surrounding vehicle relative to the host vehicle is computed from its distance information, and the absolute speeds of all vehicles in the scene are obtained from the host vehicle's speed in the driving characteristic parameters.
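The 5x4 grid assignment described above can be sketched as follows. The pixel boundaries of the lanes and longitudinal regions are illustrative assumptions, since the patent does not give concrete values, and `grid_cell` is a hypothetical helper name.

```python
def grid_cell(bbox_bottom_center, col_edges, row_edges):
    """Return (x, y) grid coordinates for the bottom edge of a labeling frame.

    col_edges: 6 increasing pixel x-positions bounding the 5 transverse
               regions (3 lanes + 2 lane-line strips).
    row_edges: 5 increasing pixel y-positions (image rows, top to bottom)
               bounding the 4 longitudinal rows toward the vanishing point.
    """
    px, py = bbox_bottom_center
    x = next(i + 1 for i in range(5) if col_edges[i] <= px < col_edges[i + 1])
    # Image y grows downward, but grid y grows away from the host vehicle,
    # so rows are counted from the bottom edge of the image upward.
    y = next(4 - i for i in range(4) if row_edges[i] <= py < row_edges[i + 1])
    return (x, y)

col_edges = [0, 200, 260, 420, 480, 640]   # assumed lane/lane-line pixel bounds
row_edges = [120, 200, 280, 360, 480]      # assumed longitudinal region bounds
print(grid_cell((320, 250), col_edges, row_edges))  # → (3, 3): vehicle ahead
```

With these assumed bounds, a point near the bottom of the ego lane maps to (3, 1), the cell the patent fixes for the host vehicle.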
The graph scene representation module is used for representing the collected dynamic and static features of the traffic scene as an undirected graph with node labels, using a graph method.
The graph scene representation module defines the vehicles (including the host vehicle) in each acquired frame of the traffic scene information sequence as nodes, and the distance between the grid cells to which the vehicles belong as edges, as follows. On the 5x4 grid divided according to each frame's traffic scene image, the lower-left vertex of the grid is the origin, the horizontal axis is the x axis, and the vertical axis is the y axis, yielding the 5x4 grid coordinates. Each vehicle (including the host vehicle) is defined as a node, with (x, y) coordinates determined from the grid cell containing the bottom edge of its labeling frame; since the host vehicle has no labeling frame, it is always located at coordinates (3,1). Taking each vehicle as the center, the module checks whether a vehicle exists in the front, back, left, and right grid cells; if so, the two vehicles are connected by an edge. The traffic scene undirected graph is generated according to these definitions.
The absolute vehicle speeds are extracted from the dynamic features of the traffic scene and clustered with a self-supervised clustering algorithm, such as the GMM algorithm, to obtain a speed cluster label c; the label c is encoded together with the vehicle's x and y coordinates to obtain the node label. From the undirected graph and the node labels, the node-labeled undirected graph G(t) of each frame of the traffic scene information sequence is obtained.
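A minimal sketch of the per-frame node-labeled undirected graph follows, assuming the grid coordinates and the speed cluster label c of each vehicle are already available (the patent obtains c with a GMM clustering; here the labels are supplied directly, and the function and variable names are hypothetical).

```python
def frame_graph(vehicles):
    """vehicles: {vehicle_id: ((x, y), c)} with grid coordinates and speed
    cluster label c.

    Node labels are the tuples (x, y, c); an edge connects two vehicles whose
    grid cells are adjacent front/back or left/right (4-neighborhood), as in
    the edge definition above.
    """
    labels = {vid: (x, y, c) for vid, ((x, y), c) in vehicles.items()}
    edges = set()
    for a, ((xa, ya), _) in vehicles.items():
        for b, ((xb, yb), _) in vehicles.items():
            if a < b and abs(xa - xb) + abs(ya - yb) == 1:
                edges.add((a, b))
    return labels, edges

# Host vehicle fixed at (3, 1); one vehicle directly ahead, one in the left lane.
vehicles = {"ego": ((3, 1), 0), "front": ((3, 2), 1), "left": ((1, 2), 2)}
labels, edges = frame_graph(vehicles)
print(sorted(edges))  # → [('ego', 'front')]
```

Only the ego and the vehicle ahead occupy adjacent cells, so a single edge is produced; the left-lane vehicle remains an isolated labeled node.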
The dangerous scene identification module is used for identifying the danger level of the traffic scene according to the generated node-labeled undirected graph of the traffic scene.
The dangerous scene identification module classifies the obtained node-labeled undirected graph G(t) with a preset dangerous scene recognition classifier and identifies the danger level of the traffic scene from the classification result given by the classifier.
In the embodiment of the present invention, the danger levels include, but are not limited to: no danger, mild danger, and severe danger.
The preset classifier is established as follows. The driver's operation information and vehicle running characteristic parameters (such as the steering wheel angular acceleration ω(t) and the vehicle acceleration a(t)) are collected within a preset time in the offline state, and the collected operation information and characteristic parameters are clustered with a preset self-supervised clustering algorithm (such as the GMM algorithm). The clustering results are defined as three conditions, no danger, mild danger, and severe danger (the defining formulas appear only as figures in the original publication), which serve as the traffic scene danger level labels corresponding to the operation information and the running characteristic parameters of the vehicle. Meanwhile, according to the traffic scene information collected within the preset time, the node-labeled undirected graph G(t) corresponding to each frame is obtained as training data and marked with the corresponding traffic scene danger level label. The training data under the different labels are then learned and trained with a preset classification algorithm, such as a support vector machine (SVM), to form the preset classifier.
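The offline label-generation step can be illustrated as follows. Since the clustering formulas in the publication are given only as figures, a simple two-threshold split over the magnitudes of ω(t) and a(t) stands in for the GMM clustering; the thresholds and the function name are assumptions (a real implementation might use `sklearn.mixture.GaussianMixture` for the clustering and `sklearn.svm.SVC` for the classifier).

```python
def danger_labels(omega, accel, t_mild=0.5, t_severe=1.5):
    """Map |steering wheel angular acceleration| + |vehicle acceleration|
    to labels 0 (no danger), 1 (mild danger), 2 (severe danger).

    The thresholds t_mild/t_severe are illustrative assumptions, not values
    from the patent, which derives the boundaries by GMM clustering.
    """
    labels = []
    for w, a in zip(omega, accel):
        m = abs(w) + abs(a)
        labels.append(0 if m < t_mild else (1 if m < t_severe else 2))
    return labels

# Calm driving, a moderate correction, and a hard evasive maneuver:
print(danger_labels([0.1, 0.6, 2.0], [0.2, 0.3, 0.5]))  # → [0, 1, 2]
```

Each per-frame label produced this way is paired with that frame's graph G(t) to form one training example for the classifier.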
The process of identifying the traffic scene danger level corresponding to the relevant traffic scene information with the classifier is as follows: the monocular camera and the lidar collect the traffic scene information around the vehicle, the extracted node-labeled undirected graph G(t) is input into the classifier, and the traffic scene danger level is obtained.
According to the system for identifying dangerous scenes in an urban traffic environment based on graph classification provided by the embodiment of the invention, the danger level of a traffic scene can be identified by actively acquiring the traffic scene information around the vehicle and uniformly representing the dynamic and static features of the traffic scene with a graph method. The embodiment addresses the low identification accuracy, the high technical difficulty, and the lack of a unified frame of representation for dangerous traffic scenes in the prior art; it improves the identification accuracy of dangerous traffic scenes and has strong environmental adaptability, so that the identified dangerous traffic scenes better match the actual driving environment.
It should be noted that the same or similar parts of the above embodiments may be referred to each other. In particular, since the system embodiment is basically similar to the method embodiment, its description is brief; for relevant details, refer to the corresponding parts of the description of the method embodiment.
Those skilled in the art will appreciate that all or part of the flow of the methods in the above embodiments may be implemented by a computer program instructing related hardware, the program being stored in a computer-readable storage medium. The computer-readable storage medium may be a magnetic disk, an optical disk, a read-only memory, or a random access memory.
The above detailed description is intended to illustrate the objects, aspects and advantages of the present invention, and it should be understood that the above detailed description is only exemplary of the present invention and is not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method for recognizing dangerous scenes of urban traffic environment based on graph classification, characterized by comprising the following steps:
extracting driving characteristic parameters according to the collected driver operation information, and collecting traffic scene information around the vehicle by using a monocular camera and a laser radar;
extracting dynamic and static features of the traffic scene from the collected traffic scene information, and representing them as an undirected graph with node labels by using a graph method; the node-labeled undirected graph representation places the complex traffic scene under a unified frame, that is, a unified frame established by the node-labeled undirected graph representation is used to represent the traffic scene, which simplifies the complexity of the traffic scene and extracts the key traffic scene information;
when obtaining dangerous scene labels for offline training, clustering the driving operation information and the vehicle running information to produce the dangerous scene labels of the node-labeled undirected graphs used to train the classifier; the method of obtaining these labels by clustering the driving operation information and the vehicle running information is defined as the dangerous scene label generation method; this label generation method better conforms to the distribution characteristics of the data, so that dangerous scenes in the traffic scene are accurately identified from the driving information; meanwhile, the label generation method captures the personalized evaluation of the traffic scene by the driver or the unmanned driving system, which is helpful for learning the personalized driving style of the driver or the unmanned driving system and for providing personalized driving assistance;
when identifying the traffic scene danger level, training the classifier with the generated node-labeled undirected graphs and dangerous scene labels, and identifying the danger level of the traffic scene; this converts the complex dangerous traffic scene identification problem into a graph classification problem of lower complexity, which improves the operation speed and the accuracy of dangerous scene identification and saves computing resources.
2. The method for recognizing dangerous scenes of urban traffic environment based on graph classification as claimed in claim 1, wherein: the dangerous scene labels are obtained from the driving operation information and the vehicle running information, so the generated labels better conform to the distribution characteristics of the data; dangerous scenes in the traffic scene are accurately identified from the driving information, the identification accuracy of dangerous scenes is improved, and the identified dangerous scenes better match the actual driving environment; this is helpful for learning the personalized driving style of the driver or the unmanned driving system, provides personalized driving assistance, and improves driving environment adaptability and driving safety.
3. The method for recognizing dangerous scenes of urban traffic environments based on graph classification as claimed in claim 1 or 2, wherein: the operation information of the driver is acquired through the vehicle CAN bus, and the vehicle running information of the driven vehicle is acquired through sensors installed on the vehicle, so that the operation data acquisition is more accurate;
the method for acquiring the traffic scene information around the vehicle by using the monocular camera and the laser radar specifically comprises the following steps,
the traffic scene image information from the driver's viewing angle is collected by the monocular camera mounted on the front windshield, and the collected driver-viewpoint images help to understand the basis on which the driver judges dangerous scenes; the point cloud information of the traffic scene around the vehicle is collected by the lidar mounted on the roof.
4. The method for recognizing dangerous scene of urban traffic environment based on map classification as claimed in claim 3, wherein:
extracting the traffic scene information around the vehicle from the obtained traffic scene images and point cloud information through a preset multi-sensor information fusion program and a preset target recognition program, using Kalman filtering as the multi-sensor information fusion program and YOLOv3 as the target recognition program;
the traffic scene comprises at least one of a scene that a vehicle is in front in the lane of the vehicle, a scene that a vehicle is in the right lane of the vehicle and a scene that a vehicle is in the left lane of the vehicle;
the traffic scene information comprises marking frame information of surrounding vehicles, lane line information, traffic scene image information, distance information of the surrounding vehicles relative to the vehicle and traffic scene point cloud information.
5. The method for recognizing dangerous scenes of urban traffic environments based on graph classification as claimed in claim 1 or 2, wherein: the method for extracting the dynamic and static characteristics of the traffic scene according to the collected traffic scene information specifically comprises the following steps,
extracting static characteristics of a traffic scene according to the acquired traffic scene information, dividing a traffic scene image into 5x4 grid areas according to the extracted lane line information, and determining the grid position of the vehicle according to the marking frame information of the surrounding vehicles to obtain (x, y) coordinates of the vehicle;
and extracting dynamic characteristics of the traffic scene according to the acquired traffic scene information, calculating the speed relative to the vehicle according to the acquired distance information of the surrounding vehicles relative to the vehicle, and obtaining the absolute speeds of all vehicles in the traffic scene according to the vehicle speed in the running characteristic parameters of the vehicle.
6. The method for recognizing dangerous scenes of urban traffic environments based on graph classification as claimed in claim 1 or 2, wherein: the static characteristics of the traffic scene are extracted by the concrete realization method,
transversely dividing the traffic scene image into five regions according to the extracted lane line information, the five regions comprising three lanes and two lane lines; longitudinally dividing the traffic scene image into three regions according to the lane line information marked on the image, with the road vanishing point as the end point, so that the road surface is divided into a 5x4 grid; the 5x4 grid comprises the five transversely divided regions of the road surface, the three longitudinally divided regions of the road surface, and the region where the host vehicle is located; taking the lower-left vertex of the grid as the origin, the horizontal axis as the x axis, and the vertical axis as the y axis yields the 5x4 grid coordinates; the grid cell and (x, y) coordinates of each surrounding vehicle are determined from the grid cell containing the bottom edge of its labeling frame, and the determined grid positions and (x, y) coordinates constitute the extracted static features of the traffic scene.
7. The method for recognizing dangerous scenes of urban traffic environments based on graph classification as claimed in claim 1 or 2, wherein the specific implementation method for identifying the traffic scene danger level according to the generated node-labeled undirected graph of the traffic scene is:
classifying in a preset dangerous scene recognition classifier according to the obtained undirected graph with the node labels, and recognizing the dangerous level of the traffic scene according to the classification result given by the classifier.
8. The method for recognizing dangerous scenes of urban traffic environments based on graph classification as claimed in claim 1 or 2, wherein: establishing the preset dangerous scene recognition classifier, wherein the specific implementation method comprises the following steps,
in an off-line stage before identification, extracting vehicle running characteristic parameters from the operation information, clustering in a preset self-supervision clustering algorithm according to the vehicle running characteristic parameters, and taking a clustering result as a traffic scene danger level label corresponding to the vehicle running characteristic parameters;
meanwhile, according to the collected traffic scene information, the undirected graph with the node labels is obtained;
training a dangerous scene recognition classifier in advance through the relation between an undirected graph and the dangerous level of a traffic scene;
wherein the vehicle operating characteristic parameters include, but are not limited to: vehicle acceleration, steering wheel angle, steering wheel angular acceleration.
9. The method for recognizing dangerous scenes of urban traffic environments based on graph classification as claimed in claim 1 or 2, wherein the specific implementation method for representing the node-labeled undirected graph with a graph method according to the collected dynamic and static features of the traffic scene is:
defining the vehicles (including the host vehicle) in each acquired frame of the traffic scene information sequence as nodes, and defining the distance between the grid cells to which the vehicles belong as edges, so as to obtain the undirected graph of each frame of the traffic scene sequence; extracting the absolute vehicle speeds from the dynamic features of the traffic scene, clustering them with a preset self-supervised clustering algorithm to obtain speed cluster labels, and encoding each speed cluster label with the vehicle's (x, y) coordinates to obtain the node label; obtaining the node-labeled undirected graph of each frame of the traffic scene information sequence from the undirected graph and the node labels, thereby representing the node-labeled undirected graph with a graph method according to the collected dynamic and static features of the traffic scene, representing the complex traffic scene under a unified frame, that is, establishing a unified frame that represents the traffic scene with the node-labeled undirected graph representation, and simplifying the complexity of the traffic scene.
10. A system for recognizing dangerous scenes of traffic environments in urban areas based on graph classification, which is used for realizing the method for recognizing dangerous scenes of traffic environments in urban areas based on graph classification as claimed in claim 1 or 2, and is characterized in that: the system comprises a data acquisition module, a traffic scene feature extraction module, a picture scene representation module and a dangerous scene identification module;
the data acquisition module is used for acquiring the operation information of a driver and the traffic scene information around the vehicle;
the traffic scene feature extraction module is used for extracting dynamic and static features of the traffic scene according to the acquired traffic scene information;
the map scene representation module is used for generating an undirected graph with node labels according to the extracted dynamic and static characteristics of the traffic scene;
and the dangerous scene identification module is used for identifying the dangerous level of the corresponding traffic scene according to the generated undirected graph with the node label.
CN202011326019.7A 2020-11-23 2020-11-23 Dangerous scene identification method and system based on graph classification Active CN112487907B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011326019.7A CN112487907B (en) 2020-11-23 2020-11-23 Dangerous scene identification method and system based on graph classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011326019.7A CN112487907B (en) 2020-11-23 2020-11-23 Dangerous scene identification method and system based on graph classification

Publications (2)

Publication Number Publication Date
CN112487907A true CN112487907A (en) 2021-03-12
CN112487907B CN112487907B (en) 2022-12-20

Family

ID=74933287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011326019.7A Active CN112487907B (en) 2020-11-23 2020-11-23 Dangerous scene identification method and system based on graph classification

Country Status (1)

Country Link
CN (1) CN112487907B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239901A (en) * 2021-06-17 2021-08-10 北京三快在线科技有限公司 Scene recognition method, device, equipment and storage medium
CN114104000A (en) * 2021-12-16 2022-03-01 智己汽车科技有限公司 Dangerous scene evaluation and processing system, method and storage medium
CN117593717A (en) * 2024-01-18 2024-02-23 武汉大学 Lane tracking method and system based on deep learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106781476A (en) * 2016-12-22 2017-05-31 中国人民解放军第三军医大学第三附属医院 Vehicle dynamic position analysis method in traffic accident
CN107609483A (en) * 2017-08-15 2018-01-19 中国科学院自动化研究所 Risk object detection method, device towards drive assist system
US20180307967A1 (en) * 2017-04-25 2018-10-25 Nec Laboratories America, Inc. Detecting dangerous driving situations by parsing a scene graph of radar detections
CN110009765A (en) * 2019-04-15 2019-07-12 合肥工业大学 A kind of automatic driving vehicle contextual data system and scene format method for transformation
CN111179585A (en) * 2018-11-09 2020-05-19 上海汽车集团股份有限公司 Site testing method and device for automatic driving vehicle


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MARVIN TEICHMANN, ET AL: "MultiNet: Real-time Joint Semantic Reasoning for Autonomous Driving", 《2018 IEEE INTELLIGENT VEHICLES SYMPOSIUM》 *
孙博华等: "虚拟随机车路场下驾驶人驾驶能力机理分析", 《机械工程学报》 *
张建朋等: "基于因子图模型的动态图半监督聚类算法", 《自动化学报》 *
郭景华等: "基于危险场景聚类分析的前车随机运动状态预测研究", 《汽车工程》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239901A (en) * 2021-06-17 2021-08-10 北京三快在线科技有限公司 Scene recognition method, device, equipment and storage medium
CN113239901B (en) * 2021-06-17 2022-09-27 北京三快在线科技有限公司 Scene recognition method, device, equipment and storage medium
CN114104000A (en) * 2021-12-16 2022-03-01 智己汽车科技有限公司 Dangerous scene evaluation and processing system, method and storage medium
CN114104000B (en) * 2021-12-16 2024-04-12 智己汽车科技有限公司 Dangerous scene evaluation and processing system, method and storage medium
CN117593717A (en) * 2024-01-18 2024-02-23 武汉大学 Lane tracking method and system based on deep learning
CN117593717B (en) * 2024-01-18 2024-04-05 武汉大学 Lane tracking method and system based on deep learning

Also Published As

Publication number Publication date
CN112487907B (en) 2022-12-20

Similar Documents

Publication Publication Date Title
CN112487907B (en) Dangerous scene identification method and system based on graph classification
CN110069986B (en) Traffic signal lamp identification method and system based on hybrid model
US20140236463A1 (en) Enhanced clear path detection in the presence of traffic infrastructure indicator
CN105892471A (en) Automatic automobile driving method and device
KR101822373B1 (en) Apparatus and method for detecting object
CN112700470A (en) Target detection and track extraction method based on traffic video stream
CN110458050B (en) Vehicle cut-in detection method and device based on vehicle-mounted video
CN105718872A (en) Auxiliary method and system for rapid positioning of two-side lanes and detection of deflection angle of vehicle
CN103366179A (en) Top-down view classification in clear path detection
CN112487905A (en) Method and system for predicting danger level of pedestrian around vehicle
CN114155720B (en) Vehicle detection and track prediction method for roadside laser radar
Tanaka et al. Vehicle detection based on perspective transformation using rear-view camera
Rasib et al. Pixel level segmentation based drivable road region detection and steering angle estimation method for autonomous driving on unstructured roads
CN110210384B (en) Road global information real-time extraction and representation system
CN110765224A (en) Processing method of electronic map, vehicle vision repositioning method and vehicle-mounted equipment
US20230245323A1 (en) Object tracking device, object tracking method, and storage medium
CN114842660B (en) Unmanned lane track prediction method and device and electronic equipment
CN114743179A (en) Panoramic visible driving area detection method based on semantic segmentation
CN114895274A (en) Guardrail identification method
Prakash et al. Multiple Objects Identification for Autonomous Car using YOLO and CNN
CN114677658A (en) Billion-pixel dynamic large-scene image acquisition and multi-target detection method and device
Malik High-quality vehicle trajectory generation from video data based on vehicle detection and description
Nalavde et al. Driver assistant services using ubiquitous smartphone
Rosebrock et al. Real-time vehicle detection with a single camera using shadow segmentation and temporal verification
Hazelhoff et al. Combined generation of road marking and road sign databases applied to consistency checking of pedestrian crossings

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant