CN113538193A - Traffic accident handling method and system based on artificial intelligence and computer vision - Google Patents

Traffic accident handling method and system based on artificial intelligence and computer vision

Info

Publication number
CN113538193A
Authority
CN
China
Prior art keywords
traffic accident
accident
image
initial
target
Prior art date
Legal status
Pending
Application number
CN202110735377.1A
Other languages
Chinese (zh)
Inventor
黄红星
付少新
陈少红
Current Assignee
Dongguan Green Light Network Technology Co ltd
Original Assignee
Dongguan Green Light Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Dongguan Green Light Network Technology Co ltd filed Critical Dongguan Green Light Network Technology Co ltd
Priority to CN202110735377.1A
Publication of CN113538193A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/26 Government or public services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/40 Business processes related to the transportation industry

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Tourism & Hospitality (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Primary Health Care (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Evolutionary Biology (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a traffic accident handling method and system based on artificial intelligence and computer vision. The method acquires the identity information of the person who initiates traffic accident handling; after the identity information is acquired, it determines the initial accident vehicles from the acquired traffic accident images, obtains a target area of each traffic accident image, performs ground lane line recognition on the target area, determines the traffic accident responsibility of each initial accident vehicle according to the initial accident vehicles and the target ground lane lines, and finally combines the license plate number of each initial accident vehicle to obtain a traffic accident confirmation data packet. The traffic accident handling method provided by the invention is therefore an automated method: after a traffic accident occurs, the responsibility determination result for the vehicles involved can be obtained directly through data processing without waiting for a traffic police officer, which improves the efficiency of traffic accident handling, reduces the impact on traffic, further reduces the possibility of secondary traffic accidents, and improves traffic safety.

Description

Traffic accident handling method and system based on artificial intelligence and computer vision
Technical Field
The invention relates to a traffic accident handling method and system based on artificial intelligence and computer vision.
Background
When a traffic accident occurs, a traffic police officer usually has to travel to the scene to divide the traffic accident responsibility. However, a certain amount of time passes between the occurrence of the accident and the officer's arrival at the scene, and if the accident occurs during the commuting rush hour, traffic may become seriously congested while waiting for the officer, which may even cause secondary traffic accidents.
Disclosure of Invention
In order to solve the technical problems, the invention provides a traffic accident handling method and system based on artificial intelligence and computer vision.
The invention adopts the following technical scheme:
a traffic accident handling method based on artificial intelligence and computer vision comprises the following steps:
acquiring identity information of a starting person who starts traffic accident processing;
after the identity information is acquired, at least two traffic accident images are acquired, wherein the traffic accident images comprise at least two vehicles;
processing the traffic accident image to obtain an initial accident vehicle in the traffic accident image;
carrying out consistency comparison on the characteristics of the initial accident vehicles in each traffic accident image, and if the consistency comparison condition is met, acquiring a target area of each traffic accident image, wherein the target area is related to the area occupied by each initial accident vehicle in the traffic accident image;
carrying out ground lane line identification on the target area of each traffic accident image to obtain a target ground lane line in each traffic accident image;
determining the traffic accident responsibility of each initial accident vehicle according to the initial accident vehicle and the target ground lane line in each traffic accident image;
acquiring license plate numbers of all initial accident vehicles;
and integrating the license plate number of each initial accident vehicle, the identity information and the confirmed traffic accident responsibility to obtain a traffic accident confirmation data packet.
Optionally, the processing the traffic accident image to obtain an initial accident vehicle in the traffic accident image specifically includes:
acquiring characteristic data of each vehicle, wherein the characteristic data comprises image data of an area occupied by the corresponding vehicle in the corresponding traffic accident image;
respectively combining the feature data of every two vehicles in each vehicle contained in the same traffic accident image to obtain a plurality of feature set data;
classifying the feature set data to obtain target feature set data, and determining two vehicles corresponding to the target feature set data as initial accident vehicles, wherein the target feature set data are feature set data belonging to preset target categories in the feature set data.
Optionally, the classifying the feature set data to obtain target feature set data includes:
converting each feature set data into a feature set matrix;
passing each characteristic set matrix through a preset convolutional neural network to obtain each full-connection layer matrix;
calculating probability values of all feature set matrixes belonging to all preset categories based on the full-connection layer matrix and the preset parameter matrix;
for any one feature set matrix, obtaining the highest probability value, and taking the preset category corresponding to the highest probability value as the category of the feature set data corresponding to the feature set matrix;
and acquiring feature set data belonging to a preset target category in the categories of the feature set data to obtain the target feature set data.
Optionally, the characteristic of the initial accident vehicle in each traffic accident image is a color characteristic;
the consistency comparison of the characteristics of the initial accident vehicles in the traffic accident images comprises the following steps:
and identifying the color of each initial accident vehicle in each traffic accident image, generating a color set corresponding to each traffic accident image, comparing whether the corresponding color sets in each traffic accident image are consistent or not, and if so, indicating that a consistency comparison condition is met.
Optionally, the performing ground lane line identification on the target area of each traffic accident image to obtain a target ground lane line in each traffic accident image includes:
acquiring a target area image of a target area of a traffic accident image;
identifying a target object in the target area image, which is different from the background of the target area image;
determining an expression of each target object according to the relative position of the target object in the target area image;
inputting the expression of each target object into a preset ground lane line identification database, and acquiring the expression corresponding to the ground lane line to obtain the target ground lane line; wherein the ground lane line identification database includes at least one expression corresponding to a ground lane line.
Optionally, the determining the traffic accident responsibility of each initial accident vehicle according to the initial accident vehicle and the target ground lane line in each traffic accident image includes:
calculating the line passing area and the relative angle corresponding to each of the two initial accident vehicles according to the relative positions of the two initial accident vehicles and the target ground lane line in the traffic accident image, wherein the line passing area is the area of the initial accident vehicle exceeding the target ground lane line in the known advancing direction, and the relative angle is the included angle between the central axis of the initial accident vehicle and the target ground lane line in the known advancing direction;
determining, according to a preset responsibility category database, the traffic accident responsibility category corresponding to the line passing area and the relative angle of each of the two initial accident vehicles in each traffic accident image; wherein the responsibility category database includes correspondences between line passing area intervals, relative angle intervals and traffic accident responsibility categories.
Optionally, after the license plate number of each initial accident vehicle, the identity information, and the determined traffic accident responsibility are integrated to obtain a traffic accident determination data packet, the traffic accident handling method further includes the following steps:
detecting smart phones with started Bluetooth in a preset range, and performing Bluetooth pairing;
and after the Bluetooth pairing is completed, the traffic accident determination data packet is sent to the smart phone.
An artificial intelligence and computer vision based traffic accident handling system comprises a memory, a processor and a computer program stored on the memory and running on the processor, wherein the processor executes the computer program to realize the artificial intelligence and computer vision based traffic accident handling method.
According to the method, the identity information of the person who starts traffic accident handling is acquired first, and subsequent operations are carried out only after the identity information is acquired, which improves the reliability of the data processing. After the identity information is acquired, at least two traffic accident images are acquired, each containing at least two vehicles; the traffic accident images are processed to obtain the initial accident vehicles in the traffic accident images; the features of the initial accident vehicles in the traffic accident images are then compared for consistency, and if the consistency comparison condition is met, a target area of each traffic accident image is acquired, the target area being related to the area occupied by each initial accident vehicle in the traffic accident image; ground lane line recognition is performed on the target area of each traffic accident image to obtain the target ground lane line in each traffic accident image; the traffic accident responsibility of each initial accident vehicle is determined according to the initial accident vehicles and the target ground lane line in each traffic accident image; and finally the license plate number of each initial accident vehicle, the identity information and the confirmed traffic accident responsibility are integrated to obtain the traffic accident confirmation data packet. The traffic accident handling method provided by the invention is therefore an automated handling method: after a traffic accident occurs, the traffic accident responsibility confirmation result of the vehicles involved can be obtained directly through data processing without waiting for a traffic police officer, which improves the efficiency of traffic accident handling, reduces the impact on traffic, further reduces the possibility of secondary traffic accidents, and improves traffic safety.
Drawings
Fig. 1 is a flow chart of a traffic accident handling method based on artificial intelligence and computer vision according to the present invention.
Detailed Description
The embodiment provides a traffic accident handling method based on artificial intelligence and computer vision, and a hardware execution subject of the traffic accident handling method can be an intelligent mobile terminal, such as a smart phone. As shown in fig. 1, the traffic accident handling method includes:
Step 1: acquiring identity information of a starting person who starts traffic accident handling:
After a traffic accident occurs, traffic accident handling needs to be started. The hardware execution subject acquires the identity information of the person who starts traffic accident handling; the starting person can be any party involved in the traffic accident. As a specific implementation, when the APP corresponding to the traffic accident handling method is opened, an identity information collection interface is presented for collecting the identity information of the starting person, such as fingerprint information or face image information. After the identity information of the starting person is acquired, the subsequent traffic accident handling process can proceed.
Step 2: after the identity information is acquired, at least two traffic accident images are acquired, wherein the traffic accident images comprise at least two vehicles:
After the identity information of the starting person is acquired, at least two traffic accident images are acquired; the specific number of traffic accident images is set according to actual needs. The traffic accident images are captured by a camera.
Since a traffic accident usually involves two vehicles, in this embodiment each traffic accident image contains at least two vehicles. The image may contain more than two vehicles because, if the shooting distance is relatively far, other normal vehicles may appear in the image in addition to the two vehicles involved in the traffic accident.
It should be appreciated that existing object detection algorithms may be employed to identify individual vehicles in each traffic accident image.
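Purely as an illustration, and not part of the claimed method, the sketch below shows how an off-the-shelf detector could return vehicle bounding boxes for a traffic accident image; the torchvision model, the COCO label ids used for vehicles and the score threshold are assumptions.

```python
# Illustrative sketch only: detecting vehicles in a traffic accident image with an
# off-the-shelf detector. Model choice, label ids and threshold are assumptions.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

VEHICLE_LABELS = {3, 6, 8}  # COCO ids assumed for car, bus, truck

def detect_vehicles(image_path, score_threshold=0.6):
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        pred = model([image])[0]  # dict with 'boxes', 'labels', 'scores'
    boxes = []
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if label.item() in VEHICLE_LABELS and score.item() >= score_threshold:
            boxes.append(box.tolist())  # [x1, y1, x2, y2] in image coordinates
    return boxes
```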
Step 3: processing the traffic accident image to obtain an initial accident vehicle in the traffic accident image:
The traffic accident image is processed to obtain the initial accident vehicles in the traffic accident image; it should be understood that there are two initial accident vehicles. As a specific embodiment, a specific process is given below:
(1) Acquiring feature data of each vehicle, wherein the feature data includes image data of the area occupied by the corresponding vehicle in the corresponding traffic accident image. The image data is therefore position data of the area occupied by the corresponding vehicle in the corresponding traffic accident image, and the feature data of each vehicle is determined according to the position of the vehicle in the corresponding traffic accident image. As a specific embodiment, a bounding box delimiting the corresponding vehicle may be set for each vehicle; setting a bounding box is a conventional technical means and is not described in detail. The area occupied by a vehicle in the corresponding traffic accident image does not exceed the range delimited by its bounding box, so the image data of the vehicle includes the coordinates, in the corresponding traffic accident image, of the center point of the bounding box and of the points on each side of the bounding box, the area of the bounding box, and the like.
(2) Combining the feature data of every two vehicles contained in the same traffic accident image to obtain a plurality of feature set data; each feature set data is the combination of the feature data of the two corresponding vehicles. As a specific embodiment, if the traffic accident image includes 3 vehicles, there are 3 combinations in total: the first and second vehicles, the second and third vehicles, and the first and third vehicles, so three feature set data are obtained.
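A minimal sketch of this per-vehicle feature data and of the pairwise combination, assuming each vehicle is described by a bounding box [x1, y1, x2, y2]; the field names are illustrative and not taken from the patent.

```python
# Illustrative sketch only: bounding-box feature data per vehicle and pairwise
# feature set data per image, following the description above.
from itertools import combinations

def feature_data(box):
    """box: [x1, y1, x2, y2] bounding box of one vehicle in the image."""
    x1, y1, x2, y2 = box
    return {
        "center": ((x1 + x2) / 2.0, (y1 + y2) / 2.0),
        "corners": [(x1, y1), (x2, y1), (x2, y2), (x1, y2)],
        "area": (x2 - x1) * (y2 - y1),
    }

def feature_set_data(boxes):
    """All pairwise combinations of vehicle feature data in one traffic accident image."""
    features = [feature_data(b) for b in boxes]
    # 3 vehicles -> 3 combinations, as in the example above
    return [(features[i], features[j]) for i, j in combinations(range(len(features)), 2)]
```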
(3) Classifying the feature set data to obtain target feature set data, that is, screening out the required feature set data according to the classification result. As a specific embodiment, a specific process is given below:
Each feature set data is converted into a feature set matrix by a preset data-to-matrix conversion algorithm. It should be understood that each feature set data contains multiple items of data, so the preset conversion scheme specifies the correspondence between those items and the entries of the matrix.
Each feature set matrix is then passed through a preset convolutional neural network to obtain a fully connected layer matrix. The convolutional neural network is trained in advance on a number of training data, each comprising a feature set training matrix and its corresponding category. The convolutional neural network includes convolutional layers, pooling layers and a fully connected layer; each feature set matrix is computed sequentially through the convolutional layers, the pooling layers and the fully connected layer to obtain the corresponding fully connected layer matrix.
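A minimal sketch of a network with the described convolution, pooling and fully connected structure; the 16x16 input matrix size, the channel counts, the two output categories and the use of PyTorch are assumptions for illustration only.

```python
# Illustrative sketch only: a small CNN of the shape described above
# (convolution -> pooling -> fully connected); sizes are assumptions.
import torch
import torch.nn as nn

class FeatureSetClassifier(nn.Module):
    def __init__(self, num_categories=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 4 * 4, num_categories)  # assumes 16x16 input matrices

    def forward(self, x):
        # x: (batch, 1, 16, 16) feature set matrices
        x = self.features(x)
        return self.fc(torch.flatten(x, 1))  # per-sample "fully connected layer" output
```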
The probability value of each feature set matrix belonging to each preset category is then calculated based on the fully connected layer matrix and the preset parameter matrices. As a specific embodiment, the following calculation formula is adopted:
$$\sigma(k \mid j) = \frac{e^{z_k \cdot x_j}}{\sum_{i=1}^{M} e^{z_k \cdot x_i}}$$

wherein σ(k|j) is the probability value of the feature set matrix k belonging to the preset category j, z_k is the fully connected layer matrix corresponding to the feature set matrix k, x_j is the preset parameter matrix corresponding to the preset category j, x_i is the preset parameter matrix corresponding to the preset category i, M is the total number of preset categories, and e is the natural constant.
Through the above calculation formula, the probability value of each feature set matrix belonging to each preset category can be obtained; the probability value represents how likely the feature set matrix is to belong to that preset category, and the higher the probability value, the greater the likelihood. For any feature set matrix, the highest of the obtained probability values is taken, and the preset category corresponding to that highest probability value is taken as the category of the feature set data corresponding to the feature set matrix.
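A small numeric sketch of this probability calculation and of taking the highest-probability category, assuming the fully connected layer output and the preset parameter matrices are represented as vectors; all values are invented.

```python
# Illustrative sketch only: softmax-style probabilities over preset categories and
# selection of the highest-probability category, as described above.
import numpy as np

def category_probabilities(z_k, category_params):
    """z_k: fully connected layer output for one feature set matrix;
    category_params: one preset parameter vector per preset category."""
    scores = np.array([np.dot(z_k, x_j) for x_j in category_params])
    exp_scores = np.exp(scores - scores.max())  # shift by max for numerical stability
    return exp_scores / exp_scores.sum()

z_k = np.array([0.8, -0.2, 1.1])
params = [np.array([1.0, 0.0, 0.5]),    # category: both vehicles are accident vehicles
          np.array([-0.5, 0.3, 0.1])]   # category: the two vehicles are not both accident vehicles
probs = category_probabilities(z_k, params)
best_category = int(np.argmax(probs))   # index of the highest probability value
```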
In this embodiment, the feature set data are divided into two categories: feature set data formed by the feature data of two vehicles that are both accident vehicles, and feature set data formed by the feature data of two vehicles that are not both accident vehicles (only one, or neither, is an accident vehicle). Correspondingly, there are two preset categories: both vehicles are accident vehicles, and the two vehicles are not both accident vehicles. Through the above processing, the probability values of each feature set matrix belonging to these two categories are obtained; the larger of the two probability values is taken, and the category corresponding to it is the category of the feature set data, namely either both vehicles are accident vehicles, or the two vehicles are not both accident vehicles.
The feature set data belonging to the preset target category, that is, the category in which both vehicles are accident vehicles, are then selected from the categorized feature set data to obtain the target feature set data. Accordingly, the two vehicles corresponding to the target feature set data are determined as the initial accident vehicles.
Step 4: comparing the consistency of the features of the initial accident vehicles in each traffic accident image, and if the consistency comparison condition is met, acquiring a target area of each traffic accident image, the target area being related to the area occupied by each initial accident vehicle in the traffic accident image:
In order to avoid identification errors of the initial accident vehicles from a single traffic accident image, after the initial accident vehicles in each traffic accident image are obtained, the features of the initial accident vehicles in the different traffic accident images are compared for consistency, and only if the consistency comparison condition is met are the two identified initial accident vehicles determined to be the real accident vehicles.
In this embodiment, the feature of the initial accident vehicle in each traffic accident image is a color feature, i.e., the color of the vehicle. Then, the comparing process of performing consistency comparison on the characteristics of the initial accident vehicle in each traffic accident image includes: and identifying the color of each initial accident vehicle in each traffic accident image to generate a color set corresponding to each traffic accident image, wherein the color set comprises the colors of two initial accident vehicles. And comparing whether the corresponding color sets in the traffic accident images are consistent or not, and if so, indicating that the consistency comparison condition is met. The identification algorithm of the vehicle color can adopt the existing color identification algorithm, and is not described in detail.
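A minimal sketch of such a color-set consistency check, assuming one set of recognized colors of the two initial accident vehicles per traffic accident image.

```python
# Illustrative sketch only: the consistency comparison condition on color sets.
def consistency_met(color_sets):
    """color_sets: list with one set of vehicle colors per traffic accident image."""
    return len(color_sets) > 0 and all(s == color_sets[0] for s in color_sets)

# e.g. consistency_met([{"white", "red"}, {"red", "white"}]) -> True
```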
If the consistency comparison condition is met, the target area of each traffic accident image is acquired, the target area being related to the area occupied by each initial accident vehicle in the traffic accident image. Since the two initial accident vehicles must be adjacent, the areas they occupy must intersect or overlap. In this embodiment, the union of the regions occupied by the two initial accident vehicles in the corresponding traffic accident image is used as the target area, i.e. the target area is the region occupied by the two initial accident vehicles together. Each traffic accident image therefore corresponds to one target area.
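One possible realization of such a target area is the smallest rectangle covering both initial accident vehicles' bounding boxes; this is an interpretation of the union region described above, not necessarily the patent's exact construction.

```python
# Illustrative sketch only: target area as the enclosing rectangle of the two
# initial accident vehicles' bounding boxes (an interpretation, see above).
def target_area(box_a, box_b):
    """Each box is [x1, y1, x2, y2]; returns the rectangle covering both."""
    return [min(box_a[0], box_b[0]), min(box_a[1], box_b[1]),
            max(box_a[2], box_b[2]), max(box_a[3], box_b[3])]
```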
As another embodiment, the initial accident vehicle in each traffic accident image may be further characterized by license plate number information, and the license plate numbers of the initial accident vehicles in the traffic accident image are compared in a consistency manner to obtain a consistency comparison result.
Step 5: carrying out ground lane line identification on the target area of each traffic accident image to obtain a target ground lane line in each traffic accident image:
in this embodiment, the traffic accident responsibility division needs to be determined by combining the ground lane lines and the positions of the two initial accident vehicles, so that the ground lane lines need to be identified for the target area of each traffic accident image to obtain the target ground lane lines in each traffic accident image. As a specific embodiment, a specific implementation of this step is given below:
(1) Acquiring a target area image of the target area of the traffic accident image. It should be understood that, in addition to the two initial accident vehicles, the target area image contains ground lane lines. To facilitate subsequent image processing, a feature image of the target area image can be acquired through a preset image feature acquisition algorithm.
(2) Identifying the target objects in the target area image that differ from the background of the target area image. The various indication lines in a road (such as lane lines, zebra crossings and arrow markings) are white or yellow, while the road surface is dark grey, so the colors of the indication lines differ greatly from the color of the road surface, that is, the difference in pixel values is large. Taking the indication lines in the road as the target objects and the road surface as the image background, the target objects that differ from the background of the target area image can therefore be identified. As a specific implementation, the target objects fall within two pixel value ranges corresponding to white and yellow respectively; the pixel values of the pixels of the target area image are obtained and compared with these two ranges, so that the target objects, i.e. the indication lines in the road, are identified. It should be understood that there may be one or more target objects in the target area image.
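A minimal sketch of this pixel-value screening using white and yellow ranges; the HSV thresholds and the use of OpenCV are assumptions for illustration.

```python
# Illustrative sketch only: keep pixels whose values fall in assumed white or
# yellow ranges, i.e. candidate indication-line pixels.
import cv2
import numpy as np

def indication_line_mask(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    white = cv2.inRange(hsv, np.array([0, 0, 200]), np.array([180, 40, 255]))     # assumed range
    yellow = cv2.inRange(hsv, np.array([15, 80, 120]), np.array([35, 255, 255]))  # assumed range
    return cv2.bitwise_or(white, yellow)  # nonzero where an indication line is suspected
```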
This pixel-value-based screening helps improve the accuracy of target object identification. As other implementations, other existing target recognition algorithms can also be adopted to identify the various indication lines in the road, that is, to identify the target objects in the target area image that differ from the background of the target area image.
(3) Determining the expression of each target object according to the relative position of the target object in the target area image.
As a specific embodiment, a two-dimensional coordinate system is constructed on the target area image: the bottom-left pixel of the target area image may be taken as the origin, and the straight lines along the length and the width of the target area image are taken as the X axis and the Y axis of the coordinate system respectively. The coordinates of every pixel in the target area image, and hence the coordinates of the pixels of each target object, can then be determined. The expression of each target object, i.e. the straight-line equation of each target object, is then obtained by fitting; an existing fitting algorithm, such as the RANSAC curve fitting algorithm, can be used. For any target object, the pixels of the target object lie on the two sides of the straight line corresponding to its expression in the target area image.
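A minimal sketch of fitting one target object's straight-line expression with a simple RANSAC loop; the iteration count and inlier tolerance are assumptions, and any standard RANSAC implementation could be substituted.

```python
# Illustrative sketch only: fit y = a*x + b to a target object's pixel coordinates
# with a basic RANSAC loop; parameters are assumptions.
import numpy as np

def fit_line_ransac(points, n_iters=200, inlier_tol=2.0, seed=0):
    """points: (N, 2) array of (x, y) pixel coordinates of one target object."""
    rng = np.random.default_rng(seed)
    xs, ys = points[:, 0], points[:, 1]
    best_inliers, best_params = 0, None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        if xs[i] == xs[j]:
            continue  # this sketch skips vertical candidate lines
        a = (ys[j] - ys[i]) / (xs[j] - xs[i])
        b = ys[i] - a * xs[i]
        inliers = np.abs(a * xs + b - ys) < inlier_tol
        if inliers.sum() > best_inliers:
            best_inliers = inliers.sum()
            best_params = np.polyfit(xs[inliers], ys[inliers], 1)  # refit on inliers
    return best_params  # (a, b) of the fitted expression, or None if nothing fit
```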
(4) Inputting the expression of each target object into a preset ground lane line identification database, and acquiring the expression corresponding to the ground lane line to obtain the target ground lane line, wherein the ground lane line identification database comprises at least one expression corresponding to the ground lane line.
Since the indication lines include various types, such as ground lane lines, zebra crossings, arrow indication lines, and the like, it is necessary to screen the target ground lane lines from the target object, that is, to screen the target ground lane lines from the obtained indication lines.
The shapes of different indication lines are different, namely the shapes of the ground lane line, the zebra crossing and the arrow indication line are different, correspondingly, the expression sets corresponding to different types of target objects are different, and the expression sets comprise at least one expression corresponding to the corresponding target object. It should be appreciated that since the shapes of the different types of indicator lines are known, then the respective expressions in the corresponding expression sets for the various types of indicator lines are also known. Therefore, a ground lane line identification database can be constructed according to various types of indication lines and expressions corresponding to the various types of indication lines, the ground lane line identification database comprises at least one expression corresponding to the ground lane line, and the number of the expressions is determined according to the actual condition of the ground lane line. It should be understood that the ground lane line identification database encompasses all of the ground lane line expressions that may be presently acquired. Of course, the ground lane line identification database may further include at least one expression corresponding to other types of indication lines. Then, the expressions of the target objects are input into the ground lane line identification database, and then the expressions corresponding to the ground lane lines in the target objects can be obtained, so that the target ground lane lines are obtained.
The above process helps improve the accuracy of ground lane line identification. As another embodiment, an existing ground lane line recognition algorithm may be used to recognize the ground lane lines in the target area of each traffic accident image.
Step 6: determining the traffic accident responsibility of each initial accident vehicle according to the initial accident vehicle and the target ground lane line in each traffic accident image:
For any traffic accident image, the traffic accident responsibility of each initial accident vehicle can be determined from the two initial accident vehicles in the traffic accident image and the acquired target ground lane line, that is, from the positional relationship between the two initial accident vehicles and the target ground lane line. For example, the traffic accident responsibility of an initial accident vehicle that pressed or crossed the line is set to "full responsibility", and the traffic accident responsibility of an initial accident vehicle that did not press the line is set to "no responsibility".
As a specific embodiment, a specific implementation of the traffic accident liability assessment is given below:
According to the relative positions of the two initial accident vehicles and the target ground lane line in the traffic accident image, the line passing area and the relative angle corresponding to each of the two initial accident vehicles are calculated, wherein the line passing area is the area of the initial accident vehicle exceeding the target ground lane line in the known advancing direction, and the relative angle is the included angle between the central axis of the initial accident vehicle and the target ground lane line in the known advancing direction. It should be understood that the advancing direction is known in the traffic accident image. Since the area occupied by each initial accident vehicle in the traffic accident image is known, the area of each of the two initial accident vehicles exceeding the target ground lane line in the known advancing direction can be calculated. The central axis of an initial accident vehicle can be obtained by processing its image, for example by fitting a straight-line equation of the vehicle with the RANSAC curve fitting algorithm and taking the corresponding straight line in the traffic accident image as the central axis; the included angle between the central axis and the target ground lane line in the known advancing direction is then calculated to obtain the relative angle.
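A minimal sketch of these two quantities, assuming the vehicle's central axis and the target ground lane line have both been fitted as lines y = a*x + b and the vehicle's occupied area is available as a binary mask; which side of the lane line counts as "beyond" depends on the known advancing direction.

```python
# Illustrative sketch only: relative angle between the central axis and the lane
# line, and a pixel-count estimate of the line passing area.
import math
import numpy as np

def relative_angle_deg(vehicle_slope, lane_slope):
    """Acute angle (degrees) between the two fitted lines."""
    angle = abs(math.atan(vehicle_slope) - math.atan(lane_slope))
    return math.degrees(min(angle, math.pi - angle))

def line_passing_area(vehicle_mask, lane_a, lane_b):
    """Count vehicle pixels lying beyond the lane line y = lane_a*x + lane_b;
    the chosen side depends on the known advancing direction."""
    ys, xs = np.nonzero(vehicle_mask)
    return int(np.count_nonzero(ys > lane_a * xs + lane_b))
```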
The responsibility category database is preset and contains a number of correspondences, each correspondence relating a line passing area interval and a relative angle interval to a traffic accident responsibility category. The line passing areas and relative angles obtained for the two initial accident vehicles are then looked up in the preset responsibility category database to determine the traffic accident responsibility category corresponding to the line passing area and relative angle of each of the two initial accident vehicles in each traffic accident image.
It should be understood that the traffic accident responsibility categories contained in the responsibility category database can be set in more detail than just "full responsibility" and "no responsibility", for example "primary responsibility" and "secondary responsibility". The responsibility category database then contains the line passing area interval and relative angle interval corresponding to "no responsibility", those corresponding to "secondary responsibility", those corresponding to "primary responsibility", and those corresponding to "full responsibility".
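A minimal sketch of such a responsibility category database as a lookup table of interval correspondences; the interval boundaries and the category granularity are invented purely for illustration.

```python
# Illustrative sketch only: responsibility category lookup from (line passing area,
# relative angle); all interval boundaries below are invented.
RESPONSIBILITY_DB = [
    ((0, 0),          (0.0, 5.0),   "no responsibility"),
    ((1, 5000),       (0.0, 10.0),  "secondary responsibility"),
    ((5001, 20000),   (10.0, 30.0), "primary responsibility"),
    ((20001, 10**9),  (30.0, 90.0), "full responsibility"),
]

def lookup_responsibility(area, angle):
    for (a_lo, a_hi), (g_lo, g_hi), category in RESPONSIBILITY_DB:
        if a_lo <= area <= a_hi and g_lo <= angle <= g_hi:
            return category
    return None  # no matching record in this illustrative table
```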
Step 7: obtaining the license plate numbers of all initial accident vehicles:
After the traffic accident responsibility category of each initial accident vehicle is obtained, the license plate number of each initial accident vehicle is obtained by performing license plate number recognition on the traffic accident image. It should be understood that currently known license plate number recognition algorithms are employed for this.
Step 8: integrating the license plate number of each initial accident vehicle, the identity information and the confirmed traffic accident responsibility to obtain a traffic accident confirmation data packet:
The license plate number of each initial accident vehicle obtained in step 7, the identity information obtained in step 1 and the traffic accident responsibility obtained in step 6 are integrated, for example by data compression, to obtain the traffic accident confirmation data packet. It should be understood that the traffic accident confirmation data packet includes: the license plate numbers of the two initial accident vehicles, the identity information of the starting person who started the traffic accident responsibility confirmation, and the traffic accident responsibility of the two initial accident vehicles.
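A minimal sketch of such an integration step, assuming JSON serialization followed by zlib compression; the patent does not prescribe a particular packet format.

```python
# Illustrative sketch only: pack plates, initiator identity and responsibilities
# into one compressed data packet (format is an assumption).
import json
import zlib

def build_accident_packet(plates, identity_info, responsibilities):
    payload = {
        "license_plates": plates,              # list of recognized plate strings
        "initiator_identity": identity_info,   # identity info collected in step 1
        "responsibilities": responsibilities,  # plate -> responsibility category
    }
    return zlib.compress(json.dumps(payload, ensure_ascii=False).encode("utf-8"))
```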
After the traffic accident confirmation data packet is obtained, it may be uploaded to a traffic police system or other server, or stored locally.
In this embodiment, after step 8, the traffic accident handling method further includes the following steps:
and step 9: the method comprises the following steps of detecting the smart phone with the opened Bluetooth in a preset range, and carrying out Bluetooth pairing:
Smart phones with Bluetooth turned on within a preset range of the hardware execution subject of the traffic accident handling method are detected, and Bluetooth pairing is carried out with them. It should be understood that such a smart phone may be the mobile phone of another person involved in the traffic accident, or the mobile phone of a traffic police officer.
Step 10: after the Bluetooth pairing is completed, the traffic accident determination data packet is sent to the smart phone:
after the Bluetooth pairing is completed, Bluetooth connection with the smart phone can be established, and then the obtained traffic accident determination data packet is sent to the smart phone.
The present embodiment further provides a traffic accident handling system based on artificial intelligence and computer vision, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the traffic accident handling method based on artificial intelligence and computer vision provided in this embodiment when executing the computer program, and since the traffic accident handling method based on artificial intelligence and computer vision has been described in detail in the above embodiments, details are not repeated.

Claims (8)

1. A traffic accident handling method based on artificial intelligence and computer vision is characterized by comprising the following steps:
acquiring identity information of a starting person who starts traffic accident processing;
after the identity information is acquired, at least two traffic accident images are acquired, wherein the traffic accident images comprise at least two vehicles;
processing the traffic accident image to obtain an initial accident vehicle in the traffic accident image;
carrying out consistency comparison on the characteristics of the initial accident vehicles in each traffic accident image, and if the consistency comparison condition is met, acquiring a target area of each traffic accident image, wherein the target area is related to the area occupied by each initial accident vehicle in the traffic accident image;
carrying out ground lane line identification on the target area of each traffic accident image to obtain a target ground lane line in each traffic accident image;
determining the traffic accident responsibility of each initial accident vehicle according to the initial accident vehicle and the target ground lane line in each traffic accident image;
acquiring license plate numbers of all initial accident vehicles;
and integrating the license plate number of each initial accident vehicle, the identity information and the confirmed traffic accident responsibility to obtain a traffic accident confirmation data packet.
2. The traffic accident handling method based on artificial intelligence and computer vision according to claim 1, wherein the processing of the traffic accident image to obtain an initial accident vehicle in the traffic accident image is specifically:
acquiring characteristic data of each vehicle, wherein the characteristic data comprises image data of an area occupied by the corresponding vehicle in the corresponding traffic accident image;
respectively combining the feature data of every two vehicles in each vehicle contained in the same traffic accident image to obtain a plurality of feature set data;
classifying the feature set data to obtain target feature set data, and determining two vehicles corresponding to the target feature set data as initial accident vehicles, wherein the target feature set data are feature set data belonging to preset target categories in the feature set data.
3. The method of claim 2, wherein the classifying the feature set data to obtain target feature set data comprises:
converting each feature set data into a feature set matrix;
passing each characteristic set matrix through a preset convolutional neural network to obtain each full-connection layer matrix;
calculating probability values of all feature set matrixes belonging to all preset categories based on the full-connection layer matrix and the preset parameter matrix;
for any one feature set matrix, obtaining the highest probability value, and taking the preset category corresponding to the highest probability value as the category of the feature set data corresponding to the feature set matrix;
and acquiring feature set data belonging to a preset target category in the categories of the feature set data to obtain the target feature set data.
4. The artificial intelligence and computer vision based traffic accident processing method of claim 1, wherein the characteristic of the initial accident vehicle in each traffic accident image is a color characteristic;
the consistency comparison of the characteristics of the initial accident vehicles in the traffic accident images comprises the following steps:
and identifying the color of each initial accident vehicle in each traffic accident image, generating a color set corresponding to each traffic accident image, comparing whether the corresponding color sets in each traffic accident image are consistent or not, and if so, indicating that a consistency comparison condition is met.
5. The method for traffic accident handling based on artificial intelligence and computer vision according to claim 1, wherein the performing ground lane line recognition on the target area of each traffic accident image to obtain the target ground lane line in each traffic accident image comprises:
acquiring a target area image of a target area of a traffic accident image;
identifying a target object in the target area image, which is different from the background of the target area image;
determining an expression of each target object according to the relative position of the target object in the target area image;
inputting the expression of each target object into a preset ground lane line identification database, and acquiring the expression corresponding to the ground lane line to obtain the target ground lane line; wherein the ground lane line identification database includes at least one expression corresponding to a ground lane line.
6. The artificial intelligence and computer vision based traffic accident handling method of claim 1, wherein the determining the traffic accident responsibility of each initial accident vehicle according to the initial accident vehicle and the target ground lane lines in each traffic accident image comprises:
calculating the line passing areas and the relative angles of the two initial accident vehicles according to the relative positions of the two initial accident vehicles and the target ground lane line in the traffic accident image, wherein the line passing areas are the areas of the initial accident vehicles exceeding the target ground lane line in the known advancing direction, and the relative angles are the included angles between the central axis of the initial accident vehicle and the target ground lane line in the known advancing direction;
determining the corresponding line passing areas and the corresponding traffic accident responsibility classes of the relative angles of the two initial accident vehicles in each traffic accident image according to a preset responsibility class database; the responsibility category database comprises corresponding relations of the line passing area interval, the relative angle interval and the traffic accident responsibility category.
7. The artificial intelligence and computer vision based traffic accident handling method according to claim 1, wherein after the data integration of the license plate number of each initial accident vehicle, the identity information and the confirmed traffic accident responsibility to obtain a traffic accident confirmation data packet, the traffic accident handling method further comprises the following steps:
detecting smart phones with started Bluetooth in a preset range, and performing Bluetooth pairing;
and after the Bluetooth pairing is completed, the traffic accident determination data packet is sent to the smart phone.
8. An artificial intelligence and computer vision based traffic accident handling system comprising a memory and a processor, and a computer program stored on the memory and run on the processor, characterized in that the processor, when executing the computer program, implements the artificial intelligence and computer vision based traffic accident handling method according to any of claims 1-7.
CN202110735377.1A 2021-06-30 2021-06-30 Traffic accident handling method and system based on artificial intelligence and computer vision Pending CN113538193A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110735377.1A CN113538193A (en) 2021-06-30 2021-06-30 Traffic accident handling method and system based on artificial intelligence and computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110735377.1A CN113538193A (en) 2021-06-30 2021-06-30 Traffic accident handling method and system based on artificial intelligence and computer vision

Publications (1)

Publication Number Publication Date
CN113538193A true CN113538193A (en) 2021-10-22

Family

ID=78097345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110735377.1A Pending CN113538193A (en) 2021-06-30 2021-06-30 Traffic accident handling method and system based on artificial intelligence and computer vision

Country Status (1)

Country Link
CN (1) CN113538193A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116189114A (en) * 2023-04-21 2023-05-30 西华大学 Method and device for identifying collision trace of vehicle

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04297823A (en) * 1991-03-13 1992-10-21 Mitsubishi Electric Corp Driving state memory device of vehicle
KR20070028863A (en) * 2005-09-08 2007-03-13 주식회사 피엘케이 테크놀로지 A device for recording the accident information of a vehicle
CN104463935A (en) * 2014-11-11 2015-03-25 中国电子科技集团公司第二十九研究所 Lane rebuilding method and system used for traffic accident restoring
KR101599628B1 (en) * 2014-08-29 2016-03-04 정유철 Reporting System For Traffic Accident Image
CN106021548A (en) * 2016-05-27 2016-10-12 大连楼兰科技股份有限公司 Remote damage assessment method and system based on distributed artificial intelligent image recognition
CN106157386A (en) * 2015-04-23 2016-11-23 中国电信股份有限公司 Vehicular video filming control method and device
CN108154696A (en) * 2017-12-25 2018-06-12 重庆冀繁科技发展有限公司 Car accident manages system and method
CN109671006A (en) * 2018-11-22 2019-04-23 斑马网络技术有限公司 Traffic accident treatment method, apparatus and storage medium
CN109743673A (en) * 2018-12-17 2019-05-10 江苏云巅电子科技有限公司 Parking lot traffic accident traceability system and method based on high-precision indoor positioning technologies
US20190251395A1 (en) * 2018-02-13 2019-08-15 Alibaba Group Holding Limited Vehicle accident image processing method and apparatus
CN110135418A (en) * 2019-04-15 2019-08-16 深圳壹账通智能科技有限公司 Traffic accident fix duty method, apparatus, equipment and storage medium based on picture
CN110942623A (en) * 2018-09-21 2020-03-31 阿里巴巴集团控股有限公司 Auxiliary traffic accident handling method and system
CN111046212A (en) * 2019-12-04 2020-04-21 支付宝(杭州)信息技术有限公司 Traffic accident processing method and device and electronic equipment
CN111444808A (en) * 2020-03-20 2020-07-24 平安国际智慧城市科技股份有限公司 Image-based accident liability assignment method and device, computer equipment and storage medium
CN111681336A (en) * 2020-05-14 2020-09-18 李娜 Traffic accident traceability system
CN212220190U (en) * 2020-05-14 2020-12-25 李娜 Traffic accident traceability system
CN112487498A (en) * 2020-12-16 2021-03-12 京东数科海益信息科技有限公司 Traffic accident handling method, device and equipment based on block chain and storage medium
CN112784724A (en) * 2021-01-14 2021-05-11 上海眼控科技股份有限公司 Vehicle lane change detection method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109949578B (en) Vehicle line pressing violation automatic auditing method based on deep learning
KR102138082B1 (en) Method, system, device and readable storage medium to realize insurance claim fraud prevention based on multiple image consistency
WO2019169816A1 (en) Deep neural network for fine recognition of vehicle attributes, and training method thereof
TWI384408B (en) Method and system for identifying image and outputting identification result
CN106599792B (en) Method for detecting hand driving violation behavior
CN112633144A (en) Face occlusion detection method, system, device and storage medium
KR101834778B1 (en) Apparatus for recognizing traffic sign and method thereof
CN108268867B (en) License plate positioning method and device
CN111860274B (en) Traffic police command gesture recognition method based on head orientation and upper half skeleton characteristics
JP2018198053A (en) Information processor, information processing method, and program
CN103902970B (en) Automatic fingerprint Attitude estimation method and system
CN111401188B (en) Traffic police gesture recognition method based on human body key point characteristics
CN108764096B (en) Pedestrian re-identification system and method
CN109506628A (en) Object distance measuring method under a kind of truck environment based on deep learning
CN108304749A (en) The recognition methods of road speed line, device and vehicle
CN111767879A (en) Living body detection method
CN110543848A (en) Driver action recognition method and device based on three-dimensional convolutional neural network
CN114677754A (en) Behavior recognition method and device, electronic equipment and computer readable storage medium
CN113538193A (en) Traffic accident handling method and system based on artificial intelligence and computer vision
CN112115737B (en) Vehicle orientation determining method and device and vehicle-mounted terminal
CN110660187B (en) Forest fire alarm monitoring system based on edge calculation
CN112101260A (en) Method, device, equipment and storage medium for identifying safety belt of operator
CN112241695A (en) Method for recognizing portrait without safety helmet and with face recognition function
CN101882219A (en) Image identification and output method and system thereof
Amin et al. An automatic number plate recognition of Bangladeshi vehicles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination