CN116342651A - Intelligent driving safety detection system and method based on artificial intelligence - Google Patents

Intelligent driving safety detection system and method based on artificial intelligence

Info

Publication number
CN116342651A
Authority
CN
China
Prior art keywords
vehicle
track
image
characteristic point
vehicle information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310264929.4A
Other languages
Chinese (zh)
Inventor
朱弋平
张庆丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Shenyun Technology Development Co ltd
Original Assignee
Wuxi Shenyun Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Shenyun Technology Development Co ltd
Priority to CN202310264929.4A
Publication of CN116342651A
Legal status: Pending (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/467 Encoded features or binary features, e.g. local binary patterns [LBP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0108 Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0125 Traffic data processing
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of image detection, and in particular to an intelligent driving safety detection system and method based on artificial intelligence, comprising the following steps: S100: collecting the vehicle information of vehicles with violations and the corresponding vehicle track information, collecting the vehicle information of the concerned vehicle, and collecting the image data of the vehicle over a certain time period acquired by a wide-angle camera to form an image data set; S200: encrypting and storing the acquired data by using a digital signature algorithm; S300: analyzing, from each image in the image data set, the change rule of each image feature point at different times, and judging whether the state of each image feature point is static or dynamic; S400: matching the dynamic feature data with the history track set, and further analyzing the risk degree of the concerned vehicle in the current state; S500: issuing a reminder when the risk degree of the concerned vehicle exceeds a threshold value. Violations during people's travel are reduced, and the safety condition of people during travel is detected in a timely manner.

Description

Intelligent driving safety detection system and method based on artificial intelligence
Technical Field
The invention relates to the technical field of image detection, in particular to an intelligent driving safety detection system and method based on artificial intelligence.
Background
Transportation means are an indispensable part of modern social life; in the narrow sense, they refer to all man-made devices used for people to ride instead of walking or for transport. In an era that values efficiency and treats time as money, convenient transportation brings great benefits to human beings. It not only changes people's travel, lifestyles, concepts of life and quality of life, but also changes social relationships, modes of communication, the rhythm of activities, knowledge structures and cultural customs.
With the advancement of technology and the development of the economy, most families can afford a travel vehicle. However, as the number of vehicles increases, many shortcomings also appear frequently: when people use travel vehicles, the problem of travel safety is often ignored and violations have become commonplace, so vehicle collisions occur frequently, causing property losses and even endangering life. Therefore, how to reduce the violations people commit when travelling and how to detect the safety condition of people during travel has become a major current research problem.
Therefore, there is a need for an intelligent detection system and method for driving safety based on artificial intelligence to solve the above problems.
Disclosure of Invention
The invention aims to provide an intelligent driving safety detection system and method based on artificial intelligence, which are used for solving the problems in the background technology.
In order to solve the technical problems, the invention provides the following technical scheme: an intelligent driving safety detection method based on artificial intelligence, comprising the following steps:
S100: collecting, in a big data network, all vehicle information of vehicles with violations and the corresponding vehicle track information to form a vehicle information set and a history track set, collecting the vehicle information logged into the intelligent detection system in the current state, setting that vehicle as the concerned vehicle, and collecting the image data of the concerned vehicle over a certain time period acquired by a wide-angle camera to form an image data set;
S200: acquiring the vehicle information set and the history track set, acquiring the vehicle information logged into the cloud platform and its corresponding image data set, and encrypting and storing the data by using a digital signature algorithm;
S300: extracting the stored image data set, analyzing, from each image in the image data set, the change rule of each image feature point at different times, and further judging whether the state of each image feature point is static or dynamic;
S400: fitting the trajectories of the feature points confirmed as dynamic feature points, constructing a two-dimensional plane coordinate system, and extracting the feature points that aggregate toward the y-axis; according to the relationship between the concerned vehicle and the vehicle information set, either directly prompting on the basis of the roughly calculated higher risk without matching, when the concerned vehicle belongs to the vehicle information set, or matching the dynamic feature data with the history track set; analyzing the risk of each dynamic feature point by using the matching result, and further analyzing the risk degree of the concerned vehicle in the current state; autonomous screening is performed first, screening out the cases whose roughly calculated risk degree is low, for example dynamic feature points that are far away from the concerned vehicle, and data matching is then performed on the data with higher risk so that the risk degree is further analyzed accurately;
S500: issuing a reminder when the risk degree of the concerned vehicle exceeds a threshold value.
Further, the step S100 includes:
S110: collecting, in the big data network, the information of each vehicle with violations and the corresponding vehicle track information to form a vehicle information set A = {a1, a2, …, an} and a history track set B = {b1, b2, …, bn}, where a1, a2, …, an represent the information of the 1st, 2nd, …, n-th vehicles and b1, b2, …, bn represent all history track information of the 1st, 2nd, …, n-th vehicles; the violation vehicle information includes the information of vehicles that violated without an accident and of vehicles that violated and had an accident; the vehicle information set includes the vehicle type, the speed standard and the like; the history track set includes the vehicle speed and the safety condition when each track was travelled.
S120: the intelligent detection system recognizes that a certain concerned vehicle information av is registered, and then the wide-angle camera of the concerned vehicle av is utilized to collect image data shot under a certain time sequence to form an image data set: c= { C1, C2, …, cm }, where C1, C2, …, cm represent image data taken at the 1 st, 2 nd, … th, m th time points.
Further, the step S200 includes:
acquiring a vehicle information set and a historical track set, acquiring vehicle information logged in a cloud platform and a corresponding image data set thereof, and encrypting and storing data by using a digital signature algorithm; the digital signature algorithm belongs to a conventional technical means of a person skilled in the art, so that redundant description is not made in the present application.
Further, the step S300 includes:
S310: extracting the stored image data set C of the concerned vehicle aν, and characterizing the m images in the image data set C by using the LBP (local binary pattern) feature algorithm; obtaining the m characterized image feature values to form a pixel point feature set D = {d1, d2, …, dh}, where d1, d2, …, dh represent the 1st, 2nd, …, h-th pixel feature values in the m images; based on any pixel point di in the pixel feature set D, a pixel consistency value is obtained: α = 1 - !(d(i+k) - di), where k = 1, 2, …, h-i and ! denotes logical negation; if α = 0, the pixel feature values are consistent, so they are classified into one class and identified with the same color; otherwise, if α = 1, the pixel feature values are different and are identified with different colors; the LBP feature algorithm belongs to conventional technical means for a person skilled in the art, so it is not described in excessive detail in the present application;
S320: constructing a two-dimensional plane coordinate system for each piece of color-marked image feature data to respectively form m image coordinate sets: L(j) = {(x1, y1), (x2, y2), …, (xz, yz)}, where j = 1, 2, …, m and (x1, y1), (x2, y2), …, (xz, yz) represent the 1st, 2nd, …, z-th pixel coordinates of any j-th image; traversing the m image coordinate sets L(j), and judging the distance between any two feature points (xγ, yγ) and (xδ, yδ) of the same color: if √[(xγ - xδ)² + (yγ - yδ)²] < β, where β represents a distance threshold, the feature points of the same color are fused and displayed as a new feature point of the same color at the center of their area;
S330: based on the m image coordinate sets obtained after feature point fusion, extracting the image feature data of the m-th time point and, with respect to the color marks of its image feature points, eliminating from the previous m-1 images the feature points that do not carry the same color marks, which helps reduce subsequent system computation and improve computation efficiency; the image data are then superposed in sequence by using a superposition algorithm, finally forming a new image coordinate set Ls; the superposition algorithm belongs to conventional technical means for a person skilled in the art, so it is not described redundantly in the present application;
S340: in the process of image superposition, fitting the coordinates of the feature points with the same color mark in the different original images by using a fitting algorithm, respectively forming U feature point trajectories: Ls = {l1, l2, …, lU}, where l1, l2, …, lU represent the 1st, 2nd, …, U-th feature point trajectories; based on any one feature point trajectory Lp in the trajectory set Ls, obtaining the similarity between Lp and each other feature point trajectory: λp = |Lp ∩ L(p+e)| / |Lp ∪ L(p+e)|, e = 1, 2, …, U-p; the fitting algorithm can calculate a trajectory route and prepares for the subsequent dynamic/static analysis of objects;
S350: analyzing the trajectory similarity, and judging whether the state of each image feature point is static or dynamic: if there exists a similarity λp ≤ η, where η represents a similarity threshold, the feature point trajectory Lp is dissimilar from the other feature point trajectories and follows its own regular motion pattern, so the state of the feature point is judged to be dynamic and the point is marked as a dynamic feature point; otherwise, if the similarity λp > η, the feature point trajectory Lp is similar to the other feature point trajectories, so the state of the feature point is judged to be static and the point is marked as a static feature point.
Further, the step S400 includes:
S410: constructing a two-dimensional plane coordinate system with the center of the superposed image as the origin, and extracting the data confirmed as dynamic feature points in the image to obtain u feature point trajectories; judging, based on the u fitted feature point trajectories, whether the direction of each trajectory aggregates toward the y-axis; if no feature point trajectory aggregates toward the y-axis, the current risk degree of the concerned vehicle is confirmed to be 0; otherwise, if there are feature point trajectories aggregating toward the y-axis, the process proceeds to step S420 in order to further analyze the risk degree of the concerned vehicle;
S420: acquiring the feature point trajectories that aggregate toward the y-axis to obtain s feature point trajectories, respectively confirming the vehicle information corresponding to each feature point trajectory and setting it as a target vehicle, and confirming the running speed of each target vehicle as trajectory length / time sequence m;
S430: analyzing the relationship between the concerned vehicle aν and the information set A of all violation vehicles in the big data network: if aν ∈ A, indicating that the concerned vehicle has a record of violation events, the current risk degree of the concerned vehicle aν is confirmed to be s/u;
S440: if aν ∉ A, indicating that no violation event has occurred for the concerned vehicle, acquiring, for any one of the s feature point trajectories aggregating toward the y-axis, the feature point trajectory lg and the corresponding target vehicle information, and at the same time extracting the vehicle information set A = {a1, a2, …, an} and the history track set B = {b1, b2, …, bn} of the vehicles with violations in the big data network; matching the feature point trajectory lg with the history track set B, analyzing the risk of the target vehicle corresponding to each feature point trajectory by using the matching result, and further analyzing the current risk degree of the concerned vehicle.
Further, the step S440 includes:
S441: extracting any feature point trajectory lg aggregating toward the y-axis and the history track set B in the big data network, and, based on any history track bf in the history track set B, where bf ∈ B, comparing the similarity between the two trajectories: ρg = |lg ∩ bf| / |lg ∪ bf|; traversing the history track set B, and when the similarity ρg > δ, where δ is a data similarity threshold, the feature point trajectory lg is similar to the history track bf, and the number of history tracks in the history track set B similar to lg is recorded as εg; based on the number εg of history tracks similar to any one feature point trajectory lg, the corresponding target vehicle information is extracted and set as ag; if εg/n > φ, where φ represents a quantity proportion threshold, the target vehicle corresponding to the feature point trajectory lg has had a large number of violation events in the big data network and the risk is high; otherwise, if εg/n < φ, the target vehicle corresponding to the feature point trajectory lg has had a small number of violation events in the big data network and the risk is low;
S442: based on any target vehicle ag, screening the εg similar history tracks according to the vehicle information set A: obtaining the corresponding vehicle information set Ag = {a1, a2, …, aεg} in the εg history tracks, traversing Ag, and comparing the similarity μg between the target vehicle ag and any one piece of vehicle information aq in the corresponding vehicle information set Ag: μg = |ag ∩ aq| / |ag ∪ aq|; screening the vehicle information with similarity μg > σ, and recording the number of similar violation vehicles as ωg, where ωg ≤ εg; at this time, the risk of the current target vehicle is ωg/n;
S443: based on the risk condition ωg/n of any target vehicle ag, traversing the target vehicles corresponding to the s feature point trajectories aggregating toward the y-axis and analyzing their risk conditions in the same way as steps S441 and S442 to obtain a risk set: {ω1/n, ω2/n, …, ωs/n}, where ω1/n, ω2/n, …, ωs/n represent the risk values of the 1st, 2nd, …, s-th target vehicles;
S444: extracting the risk values of the 1st, 2nd, …, s-th target vehicles respectively, and confirming that the risk degree of the concerned vehicle aν in the current state is (s/u) × [Σ_{g=1}^{s} ωg / (s·n)]; if the risk degree > (s/u)/8, the safety of the concerned vehicle is threatened.
Further, the step S500 includes:
when the degree of risk of the concerned vehicle > (s/u)/8 is analyzed, reminding is carried out, and the target vehicle information with highest risk is displayed by utilizing intelligent voice.
Driving safety intelligent detection system, the system includes: the system comprises a data acquisition module, a database, an image analysis module, an autonomous learning module and an intelligent reminding module;
collecting all vehicle information with violations and corresponding vehicle track information in a big data network through the data collecting module to form a vehicle information set and a history track set, collecting vehicle information logged in an intelligent detection system in a current state, setting the vehicle as a concerned vehicle, and collecting image data of the concerned vehicle in a certain time period obtained by using a wide-angle camera to form an image data set;
Acquiring a vehicle information set and a historical track set through the database, acquiring vehicle information logged in a cloud platform and a corresponding image data set thereof, and encrypting and storing data by using a digital signature algorithm;
extracting a stored image data set through the image analysis module, analyzing the change rule of each image characteristic point at different time from each image in the image data set, and further judging whether the state of each image characteristic point is static or dynamic;
fitting the feature point tracks confirmed to be dynamic feature points through the autonomous learning module, constructing a two-dimensional plane coordinate system, extracting feature points aggregated towards a y axis, matching dynamic feature data with the historical track set according to the relation between the concerned vehicle and the vehicle information set, analyzing the dangerousness of each dynamic feature point by utilizing the matching result, and further analyzing the dangerousness degree of the concerned vehicle in the current state;
and reminding the situation that the dangerous degree of the concerned vehicle is high through the intelligent reminding module.
Further, the data acquisition module comprises a historical vehicle acquisition unit and a current vehicle acquisition unit;
The historical vehicle acquisition unit is used for acquiring vehicle information and vehicle track information of all the login cloud platforms in a historical state; the current vehicle acquisition unit is used for acquiring vehicle information using the detection system in the current state and image data in a certain period of time.
Further, the image analysis module comprises a characteristic point acquisition unit, a characteristic point analysis unit and a state discrimination unit;
the characteristic point acquisition unit is used for extracting characteristic information of each image acquired at different time by utilizing a characteristic extraction algorithm; the characteristic point analysis unit is used for analyzing the change rule of each image characteristic point under different time; the state judging unit is used for judging whether the state of the image characteristic points is static or dynamic according to the image change rule.
Further, the autonomous learning module comprises a dynamic feature extraction unit, a data matching unit and a risk assessment unit;
the dynamic characteristic extraction unit is used for extracting the data confirmed as dynamic characteristic points to obtain dynamic characteristic data; the data matching unit is used for matching the dynamic characteristic data with the vehicle track information of all the illegal vehicles in the big data network to obtain a similar track set; the risk assessment unit is used for carrying out risk assessment on the vehicle information sets in the similar track set combined historical state, and analyzing the risk degree of the concerned vehicle running in the current state.
Compared with the prior art, the invention has the following beneficial effects:
the invention characterizes the image by utilizing the LBP characteristic algorithm, analyzes the consistency of the image characteristic points, and carries out color identification on different image characteristic points by utilizing different colors, thereby being beneficial to the subsequent analysis and distinction of the image overlapping; through constructing a two-dimensional plane coordinate system, carrying out pixel point fusion on the characteristic points with the distance smaller than a threshold value according to a distance formula, removing the characteristic points which do not have the same color mark as the last image, and further carrying out image overlapping on the shot images by utilizing a coincidence algorithm, thereby being beneficial to reducing the calculation of a subsequent system and improving the calculation efficiency; the regularity of the characteristic point tracks formed by the colors is analyzed by utilizing the similarity, the characteristic points which do not have the regularity tracks are judged to be dynamic characteristic points, and the dynamic characteristic points are used as a mode for distinguishing the dynamic and static states of the object, so that the subsequent extraction and analysis of the dynamic characteristic points are facilitated, and the dangers are confirmed; the relation between the concerned vehicle and the vehicle information set is analyzed, the dangerous degree of the concerned vehicle is roughly analyzed, screening is carried out, and the calculation efficiency of the detection system is improved; and matching the dynamic characteristic data with the historical track set, analyzing the dangers of each dynamic characteristic point by using the matching result, further obtaining the dangers of the concerned vehicle in the current state, and facilitating the timely detection and early warning of the travel safety of the vehicle.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a block diagram of an intelligent detection system for traffic safety based on artificial intelligence of the present invention;
FIG. 2 is a flow chart of an intelligent detection method for driving safety based on artificial intelligence.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1-2, the present invention provides the following technical solutions: an intelligent detection method for driving safety based on artificial intelligence comprises the following steps:
s100: collecting all vehicle information and corresponding vehicle track information of violations in a big data network to form a vehicle information set and a history track set, collecting vehicle information logged in an intelligent detection system in a current state, setting the vehicle as a concerned vehicle, and collecting image data of the concerned vehicle in a certain time period acquired by using a wide-angle camera to form an image data set;
The step S100 includes:
S110: collecting, in the big data network, the information of each vehicle with violations and the corresponding vehicle track information to form a vehicle information set A = {a1, a2, …, an} and a history track set B = {b1, b2, …, bn}, where a1, a2, …, an represent the information of the 1st, 2nd, …, n-th vehicles and b1, b2, …, bn represent all history track information of the 1st, 2nd, …, n-th vehicles; the violation vehicle information includes the information of vehicles that violated without an accident and of vehicles that violated and had an accident; the vehicle information set includes the vehicle type, the speed standard and the like; the history track set includes the vehicle speed and the safety condition when each track was travelled.
S120: the intelligent detection system recognizes that a certain concerned vehicle information av is registered, and then the wide-angle camera of the concerned vehicle av is utilized to collect image data shot under a certain time sequence to form an image data set: c= { C1, C2, …, cm }, where C1, C2, …, cm represent image data taken at the 1 st, 2 nd, … th, m th time points.
S200: acquiring a vehicle information set and a historical track set, acquiring vehicle information logged in a cloud platform and a corresponding image data set thereof, and encrypting and storing data by using a digital signature algorithm;
The step S200 includes:
acquiring a vehicle information set and a historical track set, acquiring vehicle information logged in a cloud platform and a corresponding image data set thereof, and encrypting and storing data by using a digital signature algorithm; the digital signature algorithm belongs to a conventional technical means of a person skilled in the art, so that redundant description is not made in the present application.
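The digital signature algorithm is treated here as a conventional means and no particular scheme is named; purely as an illustrative sketch, a vehicle record could be signed before storage as below, assuming the third-party Python cryptography package and an Ed25519 key (the scheme, the record layout and the function name are assumptions, not part of the disclosure):
```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_record(private_key: Ed25519PrivateKey, record: dict) -> bytes:
    """Serialize a vehicle record deterministically and sign it before storage."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return private_key.sign(payload)

key = Ed25519PrivateKey.generate()
record = {"vehicle": "a_v", "image_set": "C", "logged_in": True}
signature = sign_record(key, record)
# Verification on read-back: key.public_key().verify(signature, payload) raises on tampering.
```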
S300: extracting a stored image data set, analyzing the change rule of each image characteristic point at different time from each image in the image data set, and further judging whether the state of each image characteristic point is static or dynamic;
the step S300 includes:
s310: extracting a stored image data set C of the vehicle of interest aν, and characterizing m images in the image data set C by using an LBP (location based protocol) characteristic algorithm; obtaining m characterized image feature values to form a pixel point feature set D= { D1, D2, …, dh }, wherein D1, D2, …, dh represents 1 st, 2 nd … th pixel feature values in m images; based on any pixel point di in the pixel characteristic set D, a pixel consistency formula is obtained: alpha = 1- +|! (d (i+k) -di), wherein k=1, 2, …, h-i, if α=0, indicates that the pixel eigenvalues are consistent, they are classified into one type, and are identified by the same color, such as "green, blue, red, etc., whereas if α=1, they indicate that the pixel eigenvalues are different, and are identified by different colors; the LBP feature algorithm belongs to a conventional technical means of a person skilled in the art, so that excessive details are not made in the present application;
The LBP characteristic algorithm is utilized to characterize the image, the consistency of the characteristic points of the image is analyzed, and the different colors are utilized to carry out color identification on the characteristic points of different images, so that the subsequent analysis and distinction of the image overlapping are facilitated.
S320: constructing a two-dimensional plane coordinate system for each piece of image characteristic data subjected to color marking, and respectively forming m image coordinate sets: l (j) = { (x 1, y 1), (x 2, y 2), …, (xz, yz) }, where j=1, 2, …, m,
(x 1, y 1), (x 2, y 2), …, (xz, yz) represents the 1,2, …, z pixel coordinates of any jth image; traversing m image coordinate sets L (j), and judging the distance between any two feature points (xgamma, ygamma) with the same color (xdelta, ydelta): if- 2 +(yγ-yδ) 2 ]<Beta; wherein, beta represents a distance threshold value, the characteristic points with the same color are fused and displayed as new characteristic points with the same color by the center of the area;
s330: extracting image feature data of an mth time point based on m image coordinate sets fused by feature points, and eliminating feature points which are not marked by the same color in other previous m-1 images aiming at the color marks of the image feature points, thereby being beneficial to reducing the calculation of a subsequent system and improving the calculation efficiency; further sequentially superposing the image data by utilizing a superposition algorithm to finally form a new image coordinate set Ls; wherein the color identification is selected according to the color library of blue, green, red, pink …; the coincidence algorithm belongs to the conventional technical means of the person skilled in the art, so that excessive redundant description is not made in the application;
And the two-dimensional plane coordinate system is constructed, the characteristic points with the distance smaller than the threshold value are fused according to the distance formula, meanwhile, the characteristic points with the same color mark as the last image are removed, and the shot images are further overlapped by utilizing the coincidence algorithm, so that the calculation of a subsequent system is reduced, and the calculation efficiency is improved.
S340: in the process of acquiring image superposition, fitting the characteristic point coordinates of the same color mark in different original images by using a fitting algorithm to respectively form U characteristic point track sets: ls= {11,12, …,1U }, wherein l1,12, …,1U represents the 1 st, 2 nd, … th, U-th feature point trajectory; based on any one characteristic point track Lp in the characteristic point track set Ls, obtaining the similarity between the Lp and other characteristic point tracks: λp= |lp n lp+e|/|lp++e|, e=1, 2, …, U-p; the fitting algorithm can calculate a track route and prepare for dynamic and static analysis of a subsequent object;
s350: analyzing the track similarity, and judging whether the state of each image characteristic point is static or dynamic: if the similarity lambdap is not larger than eta, wherein eta represents a similarity threshold value, and the characteristic point track Lp and other characteristic point tracks are dissimilar, and the characteristic point track has regularity, the state of the characteristic point is judged to be dynamic, and the characteristic point is marked as a dynamic characteristic point; otherwise, if the similarity lambdap > eta exists, the characteristic point track Lp is similar to other characteristic point tracks, the state of the characteristic point is judged to be static, and the characteristic point is marked as a static characteristic point.
The regularity of the characteristic point tracks formed by the colors is analyzed by utilizing the similarity, the characteristic points which do not have the regularity tracks are judged to be dynamic characteristic points, and the dynamic characteristic points are used as a mode for distinguishing the dynamic and static states of the object, so that the subsequent extraction and analysis of the dynamic characteristic points are facilitated, and the dangers are confirmed.
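As a minimal, illustrative sketch of the S310-S350 pipeline (not the patent's reference implementation), the following Python snippet computes basic 8-neighbour LBP codes, groups pixels by the stated consistency rule, fuses nearby same-colour feature points, and classifies trajectories as dynamic or static by an intersection-over-union similarity; NumPy, the set-based trajectory representation and all helper names are assumptions introduced only for illustration:
```python
import numpy as np

def lbp_codes(gray: np.ndarray) -> np.ndarray:
    """S310: basic 8-neighbour local binary pattern code for each interior pixel."""
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    return codes

def colour_labels(feature_values: np.ndarray) -> np.ndarray:
    """S310 consistency rule: pixels with equal feature values (alpha = 0) share a colour label."""
    _, labels = np.unique(feature_values, return_inverse=True)
    return labels.reshape(feature_values.shape)

def fuse_same_colour_points(points: np.ndarray, colours: np.ndarray, beta: float):
    """S320: merge same-colour points closer than beta into one point at the group centre."""
    fused_pts, fused_cols = [], []
    for c in np.unique(colours):
        pts = points[colours == c]
        used = np.zeros(len(pts), dtype=bool)
        for i in range(len(pts)):
            if used[i]:
                continue
            group = (np.linalg.norm(pts - pts[i], axis=1) < beta) & ~used
            used |= group
            fused_pts.append(pts[group].mean(axis=0))
            fused_cols.append(c)
    return np.array(fused_pts), np.array(fused_cols)

def trajectory_similarity(traj_a: set, traj_b: set) -> float:
    """S340: similarity lambda as intersection over union of trajectories given as cell sets."""
    union = traj_a | traj_b
    return len(traj_a & traj_b) / len(union) if union else 1.0

def classify_feature_points(trajectories: list, eta: float) -> list:
    """S350: 'dynamic' if a trajectory is dissimilar (lambda <= eta) to every other trajectory."""
    states = []
    for p, traj in enumerate(trajectories):
        sims = [trajectory_similarity(traj, other)
                for q, other in enumerate(trajectories) if q != p]
        dynamic = bool(sims) and all(s <= eta for s in sims)
        states.append("dynamic" if dynamic else "static")
    return states
```
Here beta and eta play the roles of the distance threshold β of S320 and the similarity threshold η of S350.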
S400: fitting the trajectories of the feature points confirmed as dynamic feature points, constructing a two-dimensional plane coordinate system, and extracting the feature points that aggregate toward the y-axis; according to the relationship between the concerned vehicle and the vehicle information set, either directly prompting on the basis of the roughly calculated higher risk without matching, when the concerned vehicle belongs to the vehicle information set, or matching the dynamic feature data with the history track set; analyzing the risk of each dynamic feature point by using the matching result, and further analyzing the risk degree of the concerned vehicle in the current state; autonomous screening is performed first, screening out the cases whose roughly calculated risk degree is low, for example dynamic feature points that are far away from the concerned vehicle, and data matching is then performed on the data with higher risk so that the risk degree is further analyzed accurately;
the step S400 includes:
s410: constructing a two-dimensional plane coordinate system by taking the center of the superimposed image as an origin, and extracting the data confirmed as dynamic characteristic points in the image to obtain u characteristic point tracks; judging whether the directions of all the tracks are aggregated towards the y axis based on the u characteristic point tracks after fitting, and if the characteristic point tracks aggregated towards the y axis do not exist, confirming that the current dangerous degree of the concerned vehicle is 0; otherwise, if there is a feature point track converging toward the y-axis, the process proceeds to step S420 in order to further analyze the risk level of the concerned vehicle;
S420: acquiring characteristic point tracks aggregated on a counter y axis to obtain s characteristic point tracks, respectively confirming corresponding vehicle information of each characteristic point track, setting the corresponding vehicle information as a target vehicle, and confirming the running speed of each target vehicle by utilizing a track length/time sequence m;
s430: analyzing the relationship between the concerned vehicle aν and all the vehicle information sets A with violations in the big data network: if av epsilon A shows that the violation event exists in the concerned vehicle, confirming that the current dangerous degree of the concerned vehicle av is s/u;
s440: if it is
Figure BDA0004132751570000091
And if the violation event does not occur in the concerned vehicle, acquiring a feature point track lg and corresponding target vehicle information which are aggregated in the y axis by any one of the s feature point tracks, extracting a vehicle information set A= { a1, a2, …, an } and a history track set B= { B1, B2, … and bn } which are subjected to the violation occurrence in the big data network, matching the feature point track 1g with the history track set B, and analyzing the risk of the target vehicle corresponding to each feature point track by utilizing the matched result to further analyze the current risk degree of the concerned vehicle.
The relation between the concerned vehicle and the vehicle information set is analyzed, the dangerous degree of the concerned vehicle is roughly analyzed, screening is carried out, and the calculation efficiency of the detection system is improved;
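Continuing the illustrative sketch above, the screening of S410 and the speed estimate of S420 might look as follows; the patent does not spell out the exact aggregation criterion, so treating "aggregating toward the y-axis" as a shrinking |x| distance over time, and the image-to-road scale factor, are assumptions of this sketch only:
```python
import numpy as np

def aggregates_toward_y_axis(track: np.ndarray, min_drop: float = 0.0) -> bool:
    """S410: with the superposed-image centre as origin, a trajectory is taken to aggregate
    toward the y-axis if its |x| distance from the y-axis shrinks from first to last point."""
    x = np.abs(track[:, 0])
    return (x[0] - x[-1]) > min_drop

def track_speed(track: np.ndarray, m: int, scale: float = 1.0) -> float:
    """S420: running speed approximated as trajectory length / time sequence m,
    times an assumed image-to-road conversion factor."""
    length = float(np.sum(np.linalg.norm(np.diff(track, axis=0), axis=1)))
    return scale * length / m

tracks = [np.array([[40.0, 10.0], [25.0, 12.0], [8.0, 15.0]]),   # approaching the y-axis
          np.array([[30.0, -5.0], [31.0, -4.0], [33.0, -2.0]])]  # moving away
converging = [t for t in tracks if aggregates_toward_y_axis(t)]   # the s trajectories of S420
risk_is_zero = (len(converging) == 0)                             # S410: no such trajectory, risk 0
speeds = [track_speed(t, m=10) for t in converging]
```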
The step S440 includes:
S441: extracting any feature point trajectory lg aggregating toward the y-axis and the history track set B in the big data network, and, based on any history track bf in the history track set B, where bf ∈ B, comparing the similarity between the two trajectories: ρg = |lg ∩ bf| / |lg ∪ bf|; traversing the history track set B, and when the similarity ρg > δ, where δ is a data similarity threshold, the feature point trajectory lg is similar to the history track bf, and the number of history tracks in the history track set B similar to lg is recorded as εg; based on the number εg of history tracks similar to any one feature point trajectory lg, the corresponding target vehicle information is extracted and set as ag; if εg/n > φ, where φ represents a quantity proportion threshold, the target vehicle corresponding to the feature point trajectory lg has had a large number of violation events in the big data network and the risk is high; otherwise, if εg/n < φ, the target vehicle corresponding to the feature point trajectory lg has had a small number of violation events in the big data network and the risk is low;
S442: based on any target vehicle ag, screening the εg similar history tracks according to the vehicle information set A: obtaining the corresponding vehicle information set Ag = {a1, a2, …, aεg} in the εg history tracks, traversing Ag, and comparing the similarity μg between the target vehicle ag and any one piece of vehicle information aq in the corresponding vehicle information set Ag: μg = |ag ∩ aq| / |ag ∪ aq|; screening the vehicle information with similarity μg > σ, and recording the number of similar violation vehicles as ωg, where ωg ≤ εg; at this time, the risk of the current target vehicle is ωg/n;
S443: based on the risk condition ωg/n of any target vehicle ag, traversing the target vehicles corresponding to the s feature point trajectories aggregating toward the y-axis and analyzing their risk conditions in the same way as steps S441 and S442 to obtain a risk set: {ω1/n, ω2/n, …, ωs/n}, where ω1/n, ω2/n, …, ωs/n represent the risk values of the 1st, 2nd, …, s-th target vehicles;
S444: extracting the risk values of the 1st, 2nd, …, s-th target vehicles respectively, and confirming that the risk degree of the concerned vehicle aν in the current state is (s/u) × [Σ_{g=1}^{s} ωg / (s·n)]; if the risk degree > (s/u)/8, the safety of the concerned vehicle is threatened.
S500: reminding the situation that the dangerous degree of the concerned vehicle exceeds a threshold value;
the step S500 includes:
when the degree of risk of the concerned vehicle > (s/u)/8 is analyzed, reminding is carried out, and the target vehicle information with highest risk is displayed by utilizing intelligent voice.
And matching the dynamic characteristic data with the historical track set, analyzing the dangers of each dynamic characteristic point by using the matching result, further obtaining the dangers of the concerned vehicle in the current state, and facilitating the timely detection and early warning of the travel safety of the vehicle.
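A compact sketch of the matching and risk aggregation described in S441-S444, again using the illustrative set-based representation (history tracks and vehicle information as sets, so that ρ and μ become intersection-over-union ratios); all names and data structures here are assumptions, not part of the disclosure:
```python
def jaccard(a: set, b: set) -> float:
    """Intersection-over-union similarity used for both rho_g (tracks) and mu_g (vehicle info)."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def target_vehicle_risk(track_lg: set, history_tracks: list, history_info: list,
                        target_info: set, delta: float, sigma: float, n: int):
    """S441/S442: epsilon_g history tracks with rho_g > delta, of which omega_g have vehicle
    information similar to the target vehicle (mu_g > sigma); target-vehicle risk = omega_g / n."""
    similar = [f for f, bf in enumerate(history_tracks) if jaccard(track_lg, bf) > delta]
    epsilon_g = len(similar)
    omega_g = sum(1 for f in similar if jaccard(target_info, history_info[f]) > sigma)
    return epsilon_g, omega_g, omega_g / n

def concerned_vehicle_risk(omegas: list, s: int, u: int, n: int):
    """S443/S444: risk degree = (s/u) * [sum of omega_g over the s target vehicles] / (s * n);
    the concerned vehicle's safety is considered threatened above (s/u)/8."""
    degree = (s / u) * sum(omegas) / (s * n)
    return degree, degree > (s / u) / 8
```
With this representation, ρg and μg reduce to the same Jaccard ratio, which keeps the sketch compact; the patent itself does not prescribe how the trajectories or the vehicle information are encoded.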
Driving safety intelligent detection system, the system includes: the system comprises a data acquisition module, a database, an image analysis module, an autonomous learning module and an intelligent reminding module;
collecting all vehicle information with violations and corresponding vehicle track information in a big data network through the data collecting module to form a vehicle information set and a history track set, collecting vehicle information logged in an intelligent detection system in a current state, setting the vehicle as a concerned vehicle, and collecting image data of the concerned vehicle in a certain time period obtained by using a wide-angle camera to form an image data set;
acquiring a vehicle information set and a historical track set through the database, acquiring vehicle information logged in a cloud platform and a corresponding image data set thereof, and encrypting and storing data by using a digital signature algorithm;
extracting a stored image data set through the image analysis module, analyzing the change rule of each image characteristic point at different time from each image in the image data set, and further judging whether the state of each image characteristic point is static or dynamic;
fitting the feature point tracks confirmed to be dynamic feature points through the autonomous learning module, constructing a two-dimensional plane coordinate system, extracting feature points aggregated towards a y axis, matching dynamic feature data with the historical track set according to the relation between the concerned vehicle and the vehicle information set, analyzing the dangerousness of each dynamic feature point by utilizing the matching result, and further analyzing the dangerousness degree of the concerned vehicle in the current state;
And reminding the situation that the dangerous degree of the concerned vehicle exceeds a threshold value through the intelligent reminding module.
The data acquisition module comprises a historical vehicle acquisition unit and a current vehicle acquisition unit;
the historical vehicle acquisition unit is used for acquiring vehicle information and vehicle track information of all the login cloud platforms in a historical state; the current vehicle acquisition unit is used for acquiring vehicle information using the detection system in the current state and image data in a certain period of time.
The image analysis module comprises a characteristic point acquisition unit, a characteristic point analysis unit and a state discrimination unit;
the characteristic point acquisition unit is used for extracting characteristic information of each image acquired at different time by utilizing a characteristic extraction algorithm; the characteristic point analysis unit is used for analyzing the change rule of each image characteristic point under different time; the state judging unit is used for judging whether the state of the image characteristic points is static or dynamic according to the image change rule.
The autonomous learning module comprises a dynamic characteristic extraction unit, a data matching unit and a risk assessment unit;
the dynamic characteristic extraction unit is used for extracting the data confirmed as dynamic characteristic points to obtain dynamic characteristic data; the data matching unit is used for matching the dynamic characteristic data with the vehicle track information of all the illegal vehicles in the big data network to obtain a similar track set; the risk assessment unit is used for carrying out risk assessment on the vehicle information sets in the similar track set combined historical state, and analyzing the risk degree of the concerned vehicle running in the current state.
Embodiment one:
in step S100:
s110: collecting the vehicle information and corresponding vehicle track information of each violation occurrence in a big data network to respectively form a vehicle information set A= { a1, a2, …, a5000} and a history track set B= { B1, B2, …, B5000}, wherein a1, a2, …, a5000 represents information of the 1 st, 2, … and 5000 vehicles, B1, B2, … and B5000 represents all history track information of the 1 st, 2, … and 5000 vehicles; the vehicle information of the violation comprises vehicle information of an accident which does not occur in the violation and vehicle information of the violation and the accident which occurs; the vehicle information set comprises vehicle types, speed standards and the like; the set of historical trajectories includes vehicle speed as each trajectory is traveled, and safety conditions.
S120: the intelligent detection system recognizes that a certain concerned vehicle information av is registered, and then the wide-angle camera of the concerned vehicle av is utilized to collect image data shot under a certain time sequence to form an image data set: c= { C1, C2, …, C10}, where C1, C2, …, C10 represent image data captured at the 1 st, 2 nd, … th, 10 th time points.
In step S200:
and acquiring a vehicle information set and a historical track set, acquiring vehicle information logged in a cloud platform and a corresponding image data set thereof, and encrypting and storing the data by using a digital signature algorithm.
In step S300:
S310: extracting the stored image data set C of the concerned vehicle aν, and characterizing the 10 images in the image data set C by using the LBP (local binary pattern) feature algorithm; acquiring the 10 characterized image feature values to form a pixel point feature set D = {d1, d2, …, d1000}, where d1, d2, …, d1000 represent the 1st, 2nd, …, 1000th pixel feature values in the 10 images; based on any pixel point di in the pixel feature set D, a pixel consistency value is obtained: α = 1 - !(d(i+k) - di), where k = 1, 2, …, 1000-i; if α = 0, the pixel feature values are consistent, so they are classified into one class and identified with the same color; otherwise, if α = 1, the pixel feature values are different and are identified with different colors; the color identification is selected from a color library of blue, green, red, pink and so on;
S320: constructing a two-dimensional plane coordinate system for each piece of color-marked image feature data to respectively form 10 image coordinate sets: L(j) = {(x1, y1), (x2, y2), …, (x100, y100)}, where j = 1, 2, …, 10 and (x1, y1), (x2, y2), …, (x100, y100) represent the 1st, 2nd, …, 100th pixel coordinates of any j-th image; traversing the 10 image coordinate sets L(j), and judging the distance between any two feature points (xγ, yγ) and (xδ, yδ) of the same color: if √[(xγ - xδ)² + (yγ - yδ)²] < 0.5 cm, the feature points of the same color are fused and displayed as a new feature point of the same color at the center of their area;
S330: based on the 10 image coordinate sets obtained after feature point fusion, extracting the image feature data of the 10th time point and, with respect to the color marks of its image feature points, eliminating from the previous 9 images the feature points that do not carry the same color marks; the image data are then superposed in sequence by using a superposition algorithm, finally forming a new image coordinate set Ls;
S340: in the process of image superposition, fitting the coordinates of the feature points with the same color mark in the different original images by using a fitting algorithm, respectively forming 5 feature point trajectories: Ls = {l1, l2, …, l5}, where l1, l2, …, l5 represent the 1st, 2nd, …, 5th feature point trajectories; based on any one feature point trajectory Lp in the trajectory set Ls, obtaining the similarity between Lp and each other feature point trajectory: λp = |Lp ∩ L(p+e)| / |Lp ∪ L(p+e)|, e = 1, 2, …, 5-p;
S350: analyzing the trajectory similarity, and judging whether the state of each image feature point is static or dynamic: traversing the 5 feature point trajectories; if the similarity λp ≤ 0.85, the feature point trajectory Lp is dissimilar from the other feature point trajectories and follows its own regular motion pattern, so the state of the feature point is judged to be dynamic and the point is marked as a dynamic feature point; otherwise, if the similarity λp > 0.85, the feature point trajectory Lp is similar to the other feature point trajectories, so the state of the feature point is judged to be static and the point is marked as a static feature point.
In step S400:
S410: constructing a two-dimensional plane coordinate system with the center of the superposed image as the origin, and extracting the data confirmed as dynamic feature points in the image to obtain 3 feature point trajectories; judging, based on the 3 fitted feature point trajectories, whether the direction of each trajectory aggregates toward the y-axis; since feature point trajectories aggregating toward the y-axis exist, the process proceeds to step S420 in order to further analyze the risk degree of the concerned vehicle;
S420: acquiring the feature point trajectories that aggregate toward the y-axis to obtain 1 feature point trajectory l2, confirming the vehicle information corresponding to the feature point trajectory and setting it as a target vehicle, and confirming the running speed of the target vehicle as trajectory length / time sequence = 5/10 = 0.5 cm/ms, which is further converted into the actual vehicle speed;
S430: analyzing the relationship between the concerned vehicle aν and the information set A of all violation vehicles in the big data network: since aν ∉ A, no violation event has occurred for the concerned vehicle; the feature point trajectory l2 aggregating toward the y-axis and the corresponding target vehicle information are therefore acquired, and at the same time the vehicle information set A = {a1, a2, …, a5000} and the history track set B = {b1, b2, …, b5000} of the vehicles with violations in the big data network are extracted; the feature point trajectory l2 is matched with the history track set B, the risk of the target vehicle corresponding to each feature point trajectory is analyzed by using the matching result, and the current risk degree of the concerned vehicle is further analyzed.
In step S440:
S441: extracting the feature point trajectory l2 and the history track set B in the big data network, and, based on any history track bf in the history track set B, where bf ∈ B, comparing the similarity between the two trajectories: ρg = |l2 ∩ bf| / |l2 ∪ bf|; traversing the history track set B, and when the similarity ρg > 0.7, the feature point trajectory l2 is similar to the history track bf; the number of history tracks in the history track set B similar to l2 is recorded as 40; based on this number of similar history tracks, the corresponding target vehicle information is extracted and set as ag; since 40/5000 < 0.02, the target vehicle corresponding to the feature point trajectory l2 has had a small number of violation events in the big data network and the risk is low;
S442: based on the target vehicle ag, the 40 similar history tracks are screened according to the vehicle information set A: the corresponding vehicle information set Ag = {a1, a2, …, a40} in the 40 history tracks is acquired, Ag is traversed, and the similarity μg between the target vehicle ag and any one piece of vehicle information aq in the corresponding vehicle information set Ag is compared: μg = |ag ∩ aq| / |ag ∪ aq|; the vehicle information with similarity μg > σ is screened, and the number of similar violation vehicles is recorded as 10; the risk of the current target vehicle is therefore 10/5000 = 0.002;
S444: extracting a risk value of a target vehicle, and confirming that the risk degree of the concerned vehicle aν in the current state is (1/3) x 0.002; if the risk level is >1/24, the safety of the concerned vehicle is threatened.
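Plugging the embodiment's numbers into the formula of S444 gives a quick sanity check (s = 1 converging trajectory out of u = 3 dynamic trajectories, ω = 10 similar violation vehicles, n = 5000 history vehicles); for these particular values the computed degree stays below the 1/24 threshold:
```python
s, u, n = 1, 3, 5000                        # embodiment values
omega = [10]                                # similar violation vehicles for the single target vehicle
degree = (s / u) * sum(omega) / (s * n)     # (1/3) * 0.002 ≈ 6.7e-4
threshold = (s / u) / 8                     # 1/24 ≈ 0.0417
alert = degree > threshold                  # False: no voice reminder for this example
```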
In step S500:
and when the dangerous degree of the concerned vehicle is analyzed to be more than 1/24, reminding is carried out, and the target vehicle information with highest danger is displayed by utilizing intelligent voice.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that the foregoing description is only a preferred embodiment of the present invention and is not intended to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or make equivalent replacements of some of the technical features. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (9)

1. An intelligent detection method for driving safety based on artificial intelligence is characterized in that: the method comprises the following steps:
s100: collecting all vehicle information and corresponding vehicle track information of violations in a big data network to form a vehicle information set and a history track set, collecting vehicle information logged in an intelligent detection system in a current state, setting the vehicle as a concerned vehicle, and collecting image data of the concerned vehicle in a certain time period acquired by using a wide-angle camera to form an image data set;
s200: acquiring a vehicle information set and a historical track set, acquiring vehicle information logged in a cloud platform and a corresponding image data set thereof, and encrypting and storing data by using a digital signature algorithm;
s300: extracting a stored image data set, analyzing the change rule of each image characteristic point at different time from each image in the image data set, and further judging whether the state of each image characteristic point is static or dynamic;
s400: fitting the feature point tracks confirmed to be dynamic feature points, constructing a two-dimensional plane coordinate system, extracting feature points aggregated towards a y axis, matching dynamic feature data with the historical track set according to the relation between the concerned vehicle and the vehicle information set, analyzing the dangerousness of each dynamic feature point by using the matching result, and further analyzing the dangerousness degree of the concerned vehicle in the current state;
S500: and reminding the situation that the dangerous degree of the concerned vehicle exceeds a threshold value.
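Step S200 of claim 1 only names a "digital signature algorithm"; a minimal storage sketch, assuming HMAC-SHA256 as an illustrative stand-in for that algorithm, could be:

```python
# Hedged sketch of S200: store each collected record together with an integrity tag.
# HMAC-SHA256 is an illustrative stand-in for the claim's "digital signature algorithm".
import hashlib
import hmac
import json
from typing import Dict

def sign_and_store(record: dict, key: bytes, store: Dict[str, bytes]) -> str:
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    store[tag] = payload            # the record is stored keyed by its signature tag
    return tag

def verify(tag: str, payload: bytes, key: bytes) -> bool:
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```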
2. The intelligent detection method for driving safety based on artificial intelligence according to claim 1, wherein the step S100 includes:
S110: collecting the information of each vehicle with violations and its corresponding track information in a big data network to form a vehicle information set A = {a1, a2, …, an} and a history track set B = {b1, b2, …, bn}, wherein a1, a2, …, an represent the information of the 1st, 2nd, …, nth vehicles, and b1, b2, …, bn represent all the history track information of the 1st, 2nd, …, nth vehicles;
S120: when the intelligent detection system recognizes that a certain concerned vehicle aν has logged in, the wide-angle camera of the concerned vehicle aν is used to collect the image data shot over a certain time sequence to form an image data set C = {c1, c2, …, cm}, where c1, c2, …, cm represent the image data taken at the 1st, 2nd, …, mth time points.
3. The intelligent detection method for driving safety based on artificial intelligence according to claim 2, wherein the step S300 includes:
S310: extracting the stored image data set C of the concerned vehicle aν, and characterizing the m images in the image data set C by using an LBP (local binary pattern) feature algorithm; obtaining m characterized image feature values to form a pixel point feature set D = {d1, d2, …, dh}, wherein d1, d2, …, dh represent the 1st, 2nd, …, hth pixel feature values in the m images; based on any pixel point di in the pixel feature set D, a pixel consistency indicator is obtained: α = 0 if d(i+k) = di and α = 1 otherwise, wherein k = 1, 2, …, h−i; if α = 0, the pixel eigenvalues are consistent, the pixels are classified into one class and identified with the same color; otherwise, if α = 1, the pixel eigenvalues differ and the pixels are identified with different colors;
S320: constructing a two-dimensional plane coordinate system for each piece of color-marked image feature data to form m image coordinate sets respectively: L(j) = {(x1, y1), (x2, y2), …, (xz, yz)}, where j = 1, 2, …, m and (x1, y1), (x2, y2), …, (xz, yz) represent the 1st, 2nd, …, zth pixel coordinates of the jth image; traversing the m image coordinate sets L(j), and judging the distance between any two feature points (xγ, yγ) and (xδ, yδ) of the same color: if √[(xγ−xδ)² + (yγ−yδ)²] < β, wherein β represents a distance threshold, the feature points of the same color are fused and displayed as a new feature point of the same color at the center of their region;
S330: extracting the image feature data of the mth time point based on the m image coordinate sets obtained after feature point fusion, and, with respect to the color marks of the image feature points, eliminating the feature points that are not marked with the same color in the previous m−1 images; the image data are then superimposed in sequence by a superposition algorithm to finally form a new image coordinate set Ls;
S340: in the process of acquiring the image superposition, fitting the feature point coordinates of the same color mark in the different original images by using a fitting algorithm to form a set of U feature point tracks: Ls = {l1, l2, …, lU}, where l1, l2, …, lU represent the 1st, 2nd, …, Uth feature point tracks; based on any one feature point track lp in the feature point track set Ls, the similarity between lp and the other feature point tracks is obtained: λp = |lp ∩ l(p+e)| / |lp ∪ l(p+e)|, e = 1, 2, …, U−p;
S350: analyzing the track similarity, and judging whether the state of each image feature point is static or dynamic: if no similarity λp is larger than η, wherein η represents a similarity threshold, the feature point track lp is dissimilar to the other feature point tracks, the state of the feature point is judged as dynamic, and it is marked as a dynamic feature point; otherwise, if a similarity λp > η exists, it is marked as a static feature point.
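A minimal sketch of the two ends of claim 3, steps S310 and S350: per-pixel LBP codes (via scikit-image, assumed available) and the static/dynamic decision over fitted tracks modelled as coordinate sets (the data model and library choice are assumptions):

```python
# Hedged sketch of claim 3: per-pixel LBP codes for one frame (S310) and the
# static/dynamic classification of fitted feature-point tracks (S350).
from typing import List, Set, Tuple
import numpy as np
from skimage.feature import local_binary_pattern   # assumed dependency

def lbp_feature_map(gray_image: np.ndarray, points: int = 8, radius: float = 1.0) -> np.ndarray:
    """LBP code for every pixel of a grayscale frame."""
    return local_binary_pattern(gray_image, points, radius, method="uniform")

def classify_tracks(tracks: List[Set[Tuple[int, int]]], eta: float) -> List[str]:
    """A track similar (Jaccard > eta) to some other track is static, otherwise dynamic."""
    labels = []
    for p, lp in enumerate(tracks):
        similar = any(
            len(lp & lq) / len(lp | lq) > eta
            for q, lq in enumerate(tracks)
            if q != p and (lp | lq)
        )
        labels.append("static" if similar else "dynamic")
    return labels
```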
4. The intelligent detection method for driving safety based on artificial intelligence according to claim 3, wherein the step S400 includes:
s410: constructing a two-dimensional plane coordinate system by taking the center of the superimposed image as an origin, and extracting the data confirmed as dynamic characteristic points in the image to obtain u characteristic point tracks; judging whether the directions of all the tracks are aggregated towards the y axis based on the u characteristic point tracks after fitting, and if the characteristic point tracks aggregated towards the y axis do not exist, confirming that the current dangerous degree of the concerned vehicle is 0; otherwise, if there is a feature point track converging toward the y-axis, the process proceeds to step S420 in order to further analyze the risk level of the concerned vehicle;
S420: acquiring the characteristic point tracks aggregated toward the y axis to obtain s characteristic point tracks, respectively confirming the vehicle information corresponding to each characteristic point track and setting those vehicles as target vehicles, and confirming the running speed of each target vehicle by dividing the track length by the time sequence m;
S430: analyzing the relationship between the concerned vehicle aν and the set A of all vehicle information with violations in the big data network: if aν ∈ A, indicating that the concerned vehicle has its own violation event, the current risk degree of the concerned vehicle aν is confirmed as s/u;
S440: if aν ∉ A, indicating that the concerned vehicle has no violation event of its own, acquiring, for any one of the s characteristic point tracks, the characteristic point track lg aggregated toward the y axis and its corresponding target vehicle information, extracting the vehicle information set A = {a1, a2, …, an} and the history track set B = {b1, b2, …, bn} of the vehicles with violations in the big data network, matching the characteristic point track lg against the history track set B, analyzing the risk of the target vehicle corresponding to each characteristic point track by using the matching result, and further analyzing the current risk degree of the concerned vehicle.
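A minimal sketch of the geometry test behind steps S410 and S430 of claim 4, assuming that "aggregating toward the y axis" can be approximated by a monotonically shrinking |x| along the fitted track (an assumption; the claim does not fix the criterion):

```python
# Hedged sketch of S410/S430: detect tracks that aggregate toward the y axis and
# compute the s/u risk degree used when the concerned vehicle is itself in set A.
from typing import List, Tuple

Track = List[Tuple[float, float]]   # time-ordered (x, y) points of a fitted track

def converges_to_y_axis(track: Track) -> bool:
    xs = [abs(x) for x, _ in track]
    return (len(xs) >= 2 and xs[-1] < xs[0]
            and all(later <= earlier for earlier, later in zip(xs, xs[1:])))

def own_violation_risk_degree(dynamic_tracks: List[Track]) -> float:
    """s/u: share of the u dynamic tracks that aggregate toward the y axis."""
    u = len(dynamic_tracks)
    s = sum(1 for t in dynamic_tracks if converges_to_y_axis(t))
    return s / u if u else 0.0
```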
5. The intelligent detection method for driving safety based on artificial intelligence according to claim 4, wherein the step S440 includes:
S441: extracting any characteristic point track lg aggregated toward the y axis and the history track set B in the big data network, and, for any history track bf in the history track set B (bf ∈ B), comparing the similarity between the two tracks: ρg = |lg ∩ bf| / |lg ∪ bf|; traversing the history track set B, when the similarity ρg > δ, wherein δ is a data similarity threshold, the characteristic point track lg is similar to the history track bf, and the number of history tracks in B similar to lg is recorded as εg; based on the similar number εg of any one characteristic point track lg, the corresponding target vehicle information is extracted and set as ag; if εg/n > φ, wherein φ represents a quantity proportion threshold, the target vehicle corresponding to the characteristic point track lg has many violation events in the big data network and its risk is high; otherwise, if εg/n < φ, the target vehicle corresponding to the characteristic point track lg has few violation events in the big data network and its risk is low;
S442: based on any target vehicle ag, the εg similar history tracks are screened against the vehicle information set A: the corresponding vehicle information set Ag = {a1, a2, …, aεg} in the εg history tracks is acquired, Ag is traversed, and the similarity μg between the target vehicle ag and any one vehicle information aq in the corresponding vehicle information set Ag is compared: μg = |ag ∩ aq| / |ag ∪ aq|; the vehicle information with similarity μg > σ is screened out, and the number of similar violation vehicles is recorded as ωg, wherein ωg ≤ εg; at this time, the risk of the current target vehicle is ωg/n;
S443: based on the risk ωg/n of any target vehicle ag, traversing the target vehicles corresponding to the s characteristic point tracks aggregated toward the y axis, and analyzing their risks by the same steps S441 and S442 to obtain a risk set {ω1/n, ω2/n, …, ωs/n}, wherein ω1/n, ω2/n, …, ωs/n represent the risk values of the 1st, 2nd, …, sth target vehicles;
S444: extracting the risk values of the 1st, 2nd, …, sth target vehicles respectively, and confirming that the risk degree of the concerned vehicle aν in the current state is (s/u)·[Σ_{g=1}^{s} ωg/(s·n)]; if the risk degree is greater than (s/u)/8, the safety of the concerned vehicle is threatened.
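An end-to-end, self-contained sketch of claim 5 (S441-S444), modelling tracks and vehicle information as sets so that both |X ∩ Y| / |X ∪ Y| similarities become set operations; the thresholds δ and σ and the counts s, u, n are passed in as parameters:

```python
# Hedged, self-contained sketch of claim 5 (S441-S444).
from typing import List, Set, Tuple

def claim5_risk_degree(y_axis_tracks: List[Set[Tuple[int, int]]],
                       target_infos: List[Set[str]],
                       history_tracks: List[Set[Tuple[int, int]]],
                       history_infos: List[Set[str]],
                       delta: float, sigma: float,
                       s: int, u: int, n: int) -> Tuple[float, bool]:
    def sim(a, b):
        union = a | b
        return len(a & b) / len(union) if union else 0.0

    if not (s and u and n):
        return 0.0, False
    per_target = []
    for lg, ag in zip(y_axis_tracks, target_infos):
        similar = [i for i, bf in enumerate(history_tracks) if sim(lg, bf) > delta]  # S441
        omega_g = sum(1 for i in similar if sim(ag, history_infos[i]) > sigma)       # S442
        per_target.append(omega_g / n)                                               # S443
    degree = (s / u) * (sum(per_target) / s)                                         # S444
    return degree, degree > (s / u) / 8
```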
6. A driving safety intelligent detection system for implementing the driving safety intelligent detection method based on artificial intelligence according to any one of claims 1 to 5, characterized in that: the system comprises: the system comprises a data acquisition module, a database, an image analysis module, an autonomous learning module and an intelligent reminding module;
collecting all vehicle information with violations and corresponding vehicle track information in a big data network through the data collecting module to form a vehicle information set and a history track set, collecting vehicle information logged in an intelligent detection system in a current state, setting the vehicle as a concerned vehicle, and collecting image data of the concerned vehicle in a certain time period obtained by using a wide-angle camera to form an image data set;
acquiring a vehicle information set and a historical track set through the database, acquiring vehicle information logged in a cloud platform and a corresponding image data set thereof, and encrypting and storing data by using a digital signature algorithm;
extracting a stored image data set through the image analysis module, analyzing the change rule of each image characteristic point at different time from each image in the image data set, and further judging whether the state of each image characteristic point is static or dynamic;
Fitting the feature point tracks confirmed to be dynamic feature points through the autonomous learning module, constructing a two-dimensional plane coordinate system, extracting feature points aggregated towards a y axis, matching dynamic feature data with the historical track set according to the relation between the concerned vehicle and the vehicle information set, analyzing the dangerousness of each dynamic feature point by utilizing the matching result, and further analyzing the dangerousness degree of the concerned vehicle in the current state;
and reminding the situation that the dangerous degree of the concerned vehicle exceeds a threshold value through the intelligent reminding module.
7. The intelligent detection system for driving safety according to claim 6, wherein: the data acquisition module comprises a historical vehicle acquisition unit and a current vehicle acquisition unit;
the historical vehicle acquisition unit is used for acquiring vehicle information and vehicle track information of all the login cloud platforms in a historical state; the current vehicle acquisition unit is used for acquiring vehicle information using the detection system in the current state and image data in a certain period of time.
8. The intelligent detection system for driving safety according to claim 6, wherein: the image analysis module comprises a characteristic point acquisition unit, a characteristic point analysis unit and a state discrimination unit;
The characteristic point acquisition unit is used for extracting characteristic information of each image acquired at different time by utilizing a characteristic extraction algorithm; the characteristic point analysis unit is used for analyzing the change rule of each image characteristic point under different time; the state judging unit is used for judging whether the state of the image characteristic points is static or dynamic.
9. The intelligent detection system for driving safety according to claim 6, wherein: the autonomous learning module comprises a dynamic characteristic extraction unit, a data matching unit and a risk assessment unit;
the dynamic characteristic extraction unit is used for extracting the data confirmed as dynamic characteristic points to obtain dynamic characteristic data; the data matching unit is used for matching the dynamic characteristic data with the vehicle track information of all the illegal vehicles in the big data network to obtain a similar track set; the risk assessment unit is used for carrying out risk assessment on the vehicle information sets in the similar track set combined historical state, and analyzing the risk degree of the concerned vehicle running in the current state.
CN202310264929.4A 2023-03-17 2023-03-17 Intelligent driving safety detection system and method based on artificial intelligence Pending CN116342651A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310264929.4A CN116342651A (en) 2023-03-17 2023-03-17 Intelligent driving safety detection system and method based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310264929.4A CN116342651A (en) 2023-03-17 2023-03-17 Intelligent driving safety detection system and method based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN116342651A true CN116342651A (en) 2023-06-27

Family

ID=86885168

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310264929.4A Pending CN116342651A (en) 2023-03-17 2023-03-17 Intelligent driving safety detection system and method based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN116342651A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116958084A (en) * 2023-07-20 2023-10-27 上海韦地科技集团有限公司 Intelligent image sensing system and method based on large nuclear industry data



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination