CN106373430B - Intersection traffic early warning method based on computer vision - Google Patents


Info

Publication number
CN106373430B
CN106373430B (application CN201610735587.XA)
Authority
CN
China
Prior art keywords
vehicle
early warning
image
moving target
vehicles
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610735587.XA
Other languages
Chinese (zh)
Other versions
CN106373430A (en
Inventor
杜娟
徐晟
李彤彤
刘凌菁
张晓荣
朱殿臣
王珺
邓逸川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201610735587.XA priority Critical patent/CN106373430B/en
Publication of CN106373430A publication Critical patent/CN106373430A/en
Application granted granted Critical
Publication of CN106373430B publication Critical patent/CN106373430B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/166 Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses a computer-vision-based intersection traffic early warning method, which comprises the following steps: S1, acquiring a video image of an intersection in real time and capturing a close-up image of each vehicle; S2, extracting moving targets from the video image, generating traveling information for each moving target, and extracting complete vehicle license plate information from the close-up image; S3, classifying the moving targets to obtain a classification result; S4, calculating the traveling speed of each vehicle according to the classification result and the traveling information; S5, storing the information of vehicles passing through the intersection, and predicting the centroid coordinate of the position of each moving target at the next moment from the traveling speed and traveling information of the vehicle; and S6, generating an early warning signal and displaying early warning information when the traveling information, the traveling speed and the predicted centroid coordinate meet an abnormal condition. Abnormal traffic conditions at a construction intersection are analyzed with video image analysis technology and an early warning signal is output, offering strong stability and high accuracy.

Description

Intersection traffic early warning method based on computer vision
Technical Field
The invention relates to the field of traffic safety early warning, in particular to a computer vision-based intersection traffic early warning method.
Background
Roads in construction zones, and especially their intersections, are among the most dangerous areas in a road network: because the construction site is opened up temporarily, signal lights and road markings are scarce, pedestrians are dispersed, most construction vehicles are oversized, and the entry of external vehicles has become a frequent source of traffic accidents. When pedestrians and vehicles pass through such an intersection, buildings under construction often create visual blind areas between drivers and pedestrians; external vehicles may drive onto restricted construction road sections in violation of site management regulations; and vehicle speeds may exceed the safe value for the construction section. Any of these can cause a traffic accident. Traditional traffic early warning systems are not fully suited to the geographical conditions of construction sites, their warning coverage is not wide enough, and their technical means are limited; a computer-vision-based traffic early warning system and method for construction-section intersections is therefore proposed.
Disclosure of Invention
In order to overcome the defects and shortcomings of the prior art, the invention provides an intersection traffic early warning method based on computer vision, which is particularly applied to early warning when abnormal conditions occur to vehicles and pedestrians passing through an intersection at a construction section, so as to prevent traffic accidents.
The invention adopts the following technical scheme:
a crossroad traffic early warning method based on computer vision comprises the following steps:
s1, collecting video images of a crossroad in real time, and capturing close-up images of vehicles;
s2, extracting a moving target according to the video image, generating moving target traveling information and extracting complete vehicle license plate information according to the close-up image;
s3, classifying the moving target to obtain a classification result;
s4, calculating the traveling speed of the vehicle according to the classification result and the traveling information;
s5, storing vehicle information passing through the intersection, and predicting the centroid coordinate of the position of the moving object at the next moment according to the traveling speed and the traveling information of the vehicle;
s6, generating an early warning signal and displaying early warning information when the traveling information, the traveling speed and the centroid coordinate of the position of the moving object at the next moment meet an abnormal condition.
The moving objects comprise vehicles and pedestrians, and the traveling information comprises the time period when the vehicles or the pedestrians pass through the intersection, the mass center coordinates of the vehicles or the pedestrians and the traveling direction.
The method for extracting the moving object according to the video image specifically comprises the following steps:
s2.1, establishing a mixed Gaussian model for the video image of the intersection scene, extracting a foreground target, and generating a binary moving target foreground image;
s2.2, counting the number of pixel points of each moving target and the image coordinates of those pixel points, and calculating the image coordinates of the centroid of the moving target:

$$\bar{x} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} x_{ij}, \qquad \bar{y} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} y_{ij}$$

where M and N respectively represent the maximum width and height of the moving target in the image, i and j represent the horizontal and vertical indices of a pixel point in the image, and x_ij and y_ij represent the horizontal and vertical coordinate values of a pixel point of the moving target;
s2.3, calculating a motion vector of the moving object according to the image coordinates of the mass center of the moving object in the adjacent 20 frames of images, determining the traveling direction of the moving object according to the motion vector, and uniformly dividing the traveling direction according to different intersections.
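The centroid computation of S2.2 and the direction estimate of S2.3 can be sketched minimally in numpy. Note this is an illustrative assumption, not the patent's implementation: in the full system the binary mask would come from the Gaussian-mixture foreground model of S2.1 (e.g. OpenCV's `createBackgroundSubtractorMOG2`), and all function names here are invented.

```python
import numpy as np

def centroid(mask):
    """Image coordinates (x, y) of the centroid of a binary foreground mask."""
    ys, xs = np.nonzero(mask)              # pixel coordinates of the target
    return xs.mean(), ys.mean()            # mean of x_ij and y_ij

def travel_direction(c_prev, c_curr):
    """S2.3 sketch: motion vector and heading between centroids 20 frames apart."""
    dx, dy = c_curr[0] - c_prev[0], c_curr[1] - c_prev[1]
    return (dx, dy), np.degrees(np.arctan2(dy, dx))

# Example: a 4x4 foreground block; its centroid sits at the block centre
mask = np.zeros((20, 20), dtype=np.uint8)
mask[8:12, 4:8] = 1
cx, cy = centroid(mask)                    # -> (5.5, 9.5)
```

In practice the heading would then be quantized into the travel directions defined for the particular intersection, as S2.3 describes.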
S3, classifying the moving target to obtain a classification result, wherein the classification result comprises an engineering vehicle, a non-engineering vehicle and a pedestrian, and the specific steps comprise:
s3.1, extracting the contour of the moving target, drawing a circumscribed rectangle surrounding the contour according to the contour, and preliminarily distinguishing a vehicle target and a pedestrian target according to the aspect ratio of the circumscribed rectangle of the moving target;
if the aspect ratio of the circumscribed rectangle of the moving object is smaller than a set experience threshold, the moving object at the moment is considered to be a pedestrian;
if the aspect ratio of the circumscribed rectangle of the moving target is larger than a set experience threshold value, the moving target at the moment is considered to be an engineering vehicle, a non-engineering vehicle or an adhesion pedestrian target formed by adhesion of a plurality of pedestrians;
and S3.2, establishing a classifier model by utilizing an algorithm of a support vector machine according to the directional gradient histogram characteristics of the moving target to distinguish the engineering vehicle, the non-engineering vehicle and the adhered pedestrian target.
The S3.2 specifically comprises the following steps:
s3.2.1 collecting images of engineering vehicles, non-engineering vehicles and adhered pedestrians, unifying the sizes of the images, respectively establishing sample sets, and setting the engineering vehicle sample set as a set A, the non-engineering vehicle sample set as a set B and the adhered pedestrian sample set as a set C;
s3.2.2, taking set A and set B from S3.2.1 as the positive set and set C as the negative set, extracting the directional gradient histogram features of the positive and negative sets as the input of the support vector machine algorithm, and generating a first classifier model, which performs binary classification between vehicles and pedestrians;
s3.2.3, taking set A from S3.2.1 as the positive set and set B as the negative set, extracting the directional gradient histogram features of the positive and negative sets as the input of the support vector machine algorithm, and generating a second classifier model, which performs binary classification between engineering vehicles and non-engineering vehicles;
s3.2.4 extracting corresponding original RGB images of the moving objects according to the extracted circumscribed rectangle of the moving objects, and extracting directional gradient histogram features of the original RGB images of the moving objects;
s3.2.5, sequentially taking the directional gradient histogram characteristics of the original RGB images of each moving target as the input of a classifier model I, if the output result is positive, considering the moving target at the moment as a vehicle, and if the output result is negative, considering the moving target at the moment as an adhered pedestrian;
s3.2.6 uses the histogram feature of the directional gradient of the moving object of S3.2.5 whose classification result is vehicle as the input of classifier model two, and if the output result is positive, the moving target at the moment is considered as the engineering vehicle, and if the output result is negative, the moving target at the moment is considered as the non-engineering vehicle.
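The two-stage cascade of S3.1 and S3.2.5-S3.2.6 can be sketched as follows. The HOG feature extractor and the two SVM decision functions are passed in as callables (in a real system, e.g., an OpenCV `HOGDescriptor` and trained support vector machines); all names and the threshold value are illustrative assumptions.

```python
def classify_target(roi, hog, svm_vehicle_vs_ped, svm_eng_vs_noneng,
                    aspect_ratio, ar_threshold=1.2):
    """Cascade classification of one moving target.

    roi: the target's original RGB image patch (S3.2.4); hog: feature
    extractor; each svm_* callable returns > 0 for its positive class.
    The aspect-ratio threshold 1.2 is a made-up empirical value.
    """
    # S3.1: aspect-ratio pre-filter on the circumscribed rectangle;
    # a lone pedestrian is taller than wide, so its width/height is small
    if aspect_ratio < ar_threshold:
        return "pedestrian"
    feat = hog(roi)
    # S3.2.5: classifier one -- vehicle (positive) vs merged pedestrians
    if svm_vehicle_vs_ped(feat) <= 0:
        return "merged pedestrians"
    # S3.2.6: classifier two -- engineering (positive) vs non-engineering
    if svm_eng_vs_noneng(feat) > 0:
        return "engineering vehicle"
    return "non-engineering vehicle"
```

The cascade only runs the second, finer classifier on targets the first classifier has already accepted as vehicles, which mirrors the order of S3.2.5 and S3.2.6.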
In S4, the traveling speed of the vehicle is calculated according to the classification result and the traveling information; the specific steps are as follows:
s4.1, selecting a vehicle target as an object for calculating the speed according to the classification result, and not calculating the advancing speed of the pedestrian;
s4.2: in a video image, setting two virtual detection lines in a direction perpendicular to the advancing direction of a vehicle target; then measuring the distance delta D of the two virtual detection lines on the corresponding actual road; calculating the frame number F of the vehicles in the real-time video image which successively reach the two virtual detection lines;
s4.3: the traveling speed V of the vehicle is calculated from the sampling frequency f of the video image, the distance ΔD between the two virtual detection lines on the actual road, and the number of frames F taken by the vehicle to pass from one virtual detection line to the other:

$$V = \frac{f \cdot \Delta D}{F}$$
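The speed formula of S4.3 can be written as a small helper (hypothetical names; ΔD assumed in metres and f in frames per second, so V comes out in m/s):

```python
def vehicle_speed(delta_d_m, frames_between_lines, fps):
    """S4.3 sketch: V = f * ΔD / F.

    delta_d_m: real-road distance ΔD between the two virtual detection
    lines; frames_between_lines: frame count F between the vehicle
    crossing line 1 and line 2; fps: video sampling frequency f.
    """
    return fps * delta_d_m / frames_between_lines

# Example: lines 10 m apart, 30 fps video, 36 frames between crossings
v = vehicle_speed(10.0, 36, 30)    # 8.33 m/s, i.e. about 30 km/h
```

The two crossings F/f seconds apart span ΔD metres, so the quotient is the average speed over that stretch.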
and S5, specifically, predicting the centroid coordinate of the next moment position of the moving target by adopting a Kalman filtering algorithm.
The exception condition includes the following:
generating an early warning signal when the vehicle traveling speed exceeds the speed value specified for the construction site;
Judging whether the vehicle is in a special area of a construction site or drives into the special area according to the position coordinates of the vehicle and the centroid coordinates of the predicted position of the vehicle at the next moment so as to generate an early warning signal, wherein the special area refers to a road section which is regulated by construction site management and is forbidden to drive by a non-engineering vehicle;
and judging whether the traffic of the vehicles and the pedestrians and the vehicles and the pedestrians forms a vision blind area or not according to the traveling information of the vehicles and the pedestrians so as to generate an early warning signal.
The early warning information includes: displaying a monitoring image of an overspeed vehicle and issuing an early warning voice; displaying a monitoring image of a non-engineering vehicle driving into a special area of the construction site and issuing an early warning voice; and displaying a scene image when there is a possibility of collision between vehicles or between a vehicle and a pedestrian, together with an early warning voice and a prompt slogan.
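As a hedged sketch, the first two abnormal conditions above can be expressed as a simple rule check. The data layout, names and region test are assumptions, not the patent's implementation, and the blind-spot condition is omitted because it needs a pairwise geometric test between targets:

```python
def warnings(target, speed_limit, in_restricted_region):
    """Return the early-warning signals raised for one tracked target.

    target: dict with keys 'kind', 'speed', 'pos', 'pred_pos' (the
    predicted next-moment centroid from S5) -- an assumed data layout.
    in_restricted_region: callable testing whether a point lies in a
    special area of the construction site.
    """
    alerts = []
    # condition 1: vehicle exceeds the site speed limit
    if target["kind"] != "pedestrian" and target["speed"] > speed_limit:
        alerts.append("overspeed")
    # condition 2: a non-engineering vehicle is in, or predicted to
    # enter, a special area of the construction site
    inside = (in_restricted_region(target["pos"])
              or in_restricted_region(target["pred_pos"]))
    if target["kind"] == "non-engineering vehicle" and inside:
        alerts.append("restricted-area intrusion")
    return alerts
```

Using the predicted centroid as well as the current position lets the check fire before the vehicle actually enters the forbidden area, which is the point of step S5.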
The panoramic camera collects video images of the intersection, and the close-up camera captures close-up images of the vehicle.
The invention has the beneficial effects that:
(1) The engineering vehicle and the non-engineering vehicle can be effectively distinguished, so that automatic and efficient supervision is realized, different control measures can be conveniently taken for different vehicles, and convenience is provided for safety management of a construction site;
(2) The position of the moving target at the next moment can be estimated, so that the vehicle with the possibility of breaking into the forbidden area of the construction site is alarmed, and the dangerous condition is avoided;
(3) The system can detect vehicles and pedestrians of which the passing direction forms a vision blind area, and display safety prompt information to enable a driver to have sufficient time to take speed reduction or parking measures;
(4) Information management is applied to every vehicle passing through the construction site, whether an engineering vehicle or an external vehicle: a database is established storing the time the vehicle passed the site, the vehicle type, the license plate and other information;
(5) The high-definition cameras used to observe the construction-zone scene are erected above the road surface, so no equipment needs to be buried underground, and installation and maintenance costs are low.
Drawings
FIG. 1 is a schematic structural view of the present invention;
FIG. 2 is a field installation view of a structure of an embodiment of the present invention;
FIG. 3 is a schematic diagram of a display content of an LED display screen according to an embodiment of the present invention;
fig. 4 is a flow chart of the operation of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited to these examples.
Examples
As shown in fig. 1, 2 and 3, a computer vision-based intersection traffic early warning system for implementing the present invention includes a front-end acquisition module, a video information processing module, a network transmission module, a monitoring module and an early warning module;
the front end acquisition module comprises a panoramic camera 2, a close-up camera 1 and video coding equipment, the panoramic camera and the close-up camera are respectively connected with the video coding equipment, a virtual coil is arranged in a specific area of the intersection, the view field of the close-up camera is aligned to the virtual coil 7, and the view field of the panoramic camera comprises view angles in all directions of the intersection;
the video information processing module is specifically an industrial personal computer 3, the industrial personal computer is respectively connected with the monitoring module and the early warning module through the network transmission module, and the video coding device is connected with the industrial personal computer.
The monitoring module comprises a database server 5 and an information terminal display device 6;
The early warning module comprises an LED display screen 8.
The model of the industrial personal computer is IPC610.
The model of the LED display screen is CSD-P6-SMD3535, with dual backup power supply lines and a resolution above 720P.
The network transmission module is a wireless network transmission device 4.
The panoramic camera and the close-up camera are erected above the road surface at the corners of the crossroad or T-shaped intersection.
In this embodiment, the virtual coil is rectangular.
This embodiment includes two close-up cameras, one panoramic camera, and two LED display screens, so that the displayed information can be conveniently observed by the vehicles and pedestrians on each road, as shown in fig. 3.
The panoramic camera collects scene information of the intersection over a wide area and outputs it to the industrial personal computer for video image processing. A virtual coil is arranged at a specific area of the intersection; the field of view of the close-up camera is aligned with this area, and high-definition images of the vehicles are captured and output to the industrial personal computer for image processing. Software installed on the industrial personal computer uses video processing and pattern recognition techniques to perform detection, tracking, classification, position prediction, license plate recognition, vehicle information storage, early warning signal output, and so on, for vehicles and pedestrians. The signal output end is provided with an LED display screen, a database server and an information terminal display device: the LED display screen displays the early warning information, the database server stores the information of vehicles passing through the construction-section intersection, and the information terminal display device displays the real-time monitoring picture of the intersection together with synchronized visualizations of the detection, tracking, classification, position prediction and early warning functions.
As shown in fig. 4, a computer vision-based intersection traffic early warning method includes the following steps:
s1, a panoramic camera collects video images of vehicles and pedestrians passing through an intersection in real time, and a close-up camera captures a close-up image of each vehicle in a virtual coil;
s2, extracting a moving target according to the video image, generating moving target traveling information and extracting complete vehicle license plate information according to the close-up image, wherein the moving target comprises a vehicle and a pedestrian, and the traveling information comprises a time period when the vehicle or the pedestrian passes through the intersection, and a centroid coordinate and a traveling direction of the vehicle or the pedestrian.
The vehicle license plate information is obtained by license plate positioning, character segmentation, character recognition and color recognition of the close-up image.
S2.1, establishing a mixed Gaussian model for the video image of the intersection scene, extracting a foreground target, and generating a binary moving target foreground image;
s2.2, counting the number of pixel points of each moving target and the image coordinates of those pixel points, and calculating the image coordinates of the centroid of the moving target:

$$\bar{x} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} x_{ij}, \qquad \bar{y} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} y_{ij}$$

where M and N respectively represent the maximum width and height of the moving target in the image, i and j represent the horizontal and vertical indices of a pixel point in the image, and x_ij and y_ij represent the horizontal and vertical coordinate values of a pixel point of the moving target;
and S2.3, calculating a motion vector of the moving target according to the image coordinates of the centroid of the moving target in the adjacent 20 frames of images, determining the advancing direction of the moving target according to the motion vector, and uniformly dividing the advancing direction according to different intersections.
S3, classifying the moving target to obtain a classification result;
the classification result comprises an engineering vehicle, a non-engineering vehicle and a pedestrian, and the method comprises the following specific steps:
s3.1, extracting the contour of the moving target, drawing a circumscribed rectangle surrounding the contour according to the contour, preliminarily distinguishing a vehicle target and a pedestrian target according to the aspect ratio of a circumscribed rectangle of the moving target;
if the aspect ratio of the circumscribed rectangle of the moving object is smaller than a set experience threshold, the moving object at the moment is considered to be a pedestrian;
if the aspect ratio of the circumscribed rectangle of the moving target is larger than a set experience threshold value, the moving target at the moment is considered to be an engineering vehicle, a non-engineering vehicle or an adhesion pedestrian target formed by adhesion of a plurality of pedestrians;
and S3.2, establishing a classifier model by utilizing an algorithm of a support vector machine according to the directional gradient histogram characteristics of the moving target to distinguish the engineering vehicle, the non-engineering vehicle and the adhered pedestrian target.
S3.2.1 collecting images of engineering vehicles, non-engineering vehicles and adhered pedestrians, unifying the sizes of the images, respectively establishing sample sets, and setting the engineering vehicle sample set as a set A, the non-engineering vehicle sample set as a set B and the adhered pedestrian sample set as a set C;
s3.2.2, taking set A and set B from S3.2.1 as the positive set and set C as the negative set, extracting the directional gradient histogram features of the positive and negative sets as the input of the support vector machine algorithm, and generating a first classifier model, which performs binary classification between vehicles and pedestrians;
s3.2.3, taking set A from S3.2.1 as the positive set and set B as the negative set, extracting the directional gradient histogram features of the positive and negative sets as the input of the support vector machine algorithm, and generating a second classifier model, which performs binary classification between engineering vehicles and non-engineering vehicles;
s3.2.4, extracting the corresponding original RGB image of each moving target according to its extracted circumscribed rectangle, and extracting the directional gradient histogram features of the original RGB images of all moving targets;
s3.2.5, sequentially taking the directional gradient histogram characteristics of the original RGB images of each moving target as the input of a classifier model I, if the output result is positive, considering the moving target at the moment as a vehicle, and if the output result is negative, considering the moving target at the moment as an adhered pedestrian;
s3.2.6 uses the histogram feature of the directional gradient of the moving object of S3.2.5 whose classification result is the vehicle as the input of the classifier model two, and if the output result is positive, the moving object at that time is considered as a working vehicle, and if the output result is negative, the moving object at that time is considered as a non-working vehicle.
S4, calculating the traveling speed of the vehicle according to the classification result and the traveling information;
s4.1, selecting vehicle targets as the objects for speed calculation according to the classification result; the traveling speed of pedestrians is not calculated;
s4.2: in the video image, two virtual detection lines are set perpendicular to the traveling direction of the vehicle target; the distance ΔD between the two virtual detection lines on the corresponding actual road is then measured; and the number of frames F between the vehicle reaching the first and the second virtual detection line is counted in the real-time video image;
s4.3: the traveling speed V of the vehicle is calculated from the sampling frequency f of the video image, the distance ΔD between the two virtual detection lines on the actual road, and the frame count F:

$$V = \frac{f \cdot \Delta D}{F}$$
s5, storing vehicle information passing through the intersection, and predicting the centroid coordinate of the position of the moving object at the next moment according to the traveling speed and the traveling information of the vehicle;
and S5, specifically, predicting the centroid coordinate of the position of the moving target at the next moment by adopting a Kalman filtering algorithm. Kalman filtering is a recursive estimation and is divided into a prediction stage and an update stage, and in the prediction stage, a Kalman filtering algorithm uses the estimation of the state at the current moment to estimate the state at the next moment; in the updating stage, the Kalman filtering algorithm optimizes the predicted value obtained in the prediction stage by using the observed value of the next moment state so as to obtain a more accurate new estimation value.
The method specifically comprises the following steps:
s5.1, obtaining the image coordinates of the centroid of the moving object at the previous moment in the video image and the moving speed of the centroid on the image, and establishing a prediction equation of the position of the moving object, namely:
X(t+1|t)=AX(t|t)+w(t+1)
wherein: X(t+1|t) is the state vector of the moving target at the next moment predicted from the current moment; X(t|t) is the optimal state estimate at the current moment; A is the state transition matrix; w(t+1) is the process noise, assumed to be zero-mean white noise with covariance matrix Q(t+1);
the moving speed v of the centroid of the moving target on the image is calculated by:

$$v = \frac{\Delta d}{\Delta t}$$

where Δt is the time interval between the two moments and Δd is the distance traveled by the centroid of the moving target during Δt.
S5.2: update covariance matrix for state X (t + 1|t):
P(t+1|t)=AP(t|t)A T +Q(t+1)
wherein: p (t + 1|t) represents the covariance of X (t + 1|t), P (t | t) represents the covariance of X (t | t), A T The transpose matrix of a is represented,
s5.3: calculating the optimized estimate X(t+1|t+1) of the moving target state at the next moment from the measured value of the state at the next moment and the predicted state vector of the moving target at the next moment:
X(t+1|t+1)=X(t+1|t)+Kg(t+1)(Z(t+1)-HX(t+1|t))
wherein Z(t+1) is the measured value of the moving target state at the next moment, H is the measurement matrix, and Kg(t+1) is the Kalman gain:
Kg(t+1)=P(t+1|t)H T /(HP(t+1|t)H T +R(t+1))
where R(t+1) is the measurement-noise covariance matrix and H^T is the transpose of H;
s5.4: updating the covariance matrix P(t+1|t+1) of the moving target state X(t+1|t+1) at the next moment:
P(t+1|t+1)=(I-Kg(t+1)H)P(t+1|t)
where I is the identity matrix.
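The four-step recursion S5.1-S5.4 can be sketched in numpy for a constant-velocity centroid model with state X = [x, y, vx, vy]. The concrete values of dt, Q and R are illustrative assumptions; the patent does not specify them.

```python
import numpy as np

dt = 1.0
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition matrix
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only the centroid is observed
Q = np.eye(4) * 1e-2                        # process-noise covariance (assumed)
R = np.eye(2) * 1e-1                        # measurement-noise covariance (assumed)

def kalman_step(X, P, z):
    """One predict-and-update cycle for state X, covariance P, measurement z."""
    Xp = A @ X                                             # S5.1: X(t+1|t) = A X(t|t)
    Pp = A @ P @ A.T + Q                                   # S5.2: P(t+1|t) = A P A^T + Q
    Kg = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)        # S5.3: Kalman gain
    Xn = Xp + Kg @ (z - H @ Xp)                            # S5.3: corrected state
    Pn = (np.eye(4) - Kg @ H) @ Pp                         # S5.4: updated covariance
    return Xn, Pn

# One tracking step from [0, 0, 1, 0.5] with a consistent measurement
X0, P0 = np.array([0.0, 0.0, 1.0, 0.5]), np.eye(4)
X1, P1 = kalman_step(X0, P0, np.array([1.0, 0.5]))   # X1 = [1, 0.5, 1, 0.5]
```

The predicted position H·X(t+1|t) is exactly the "next-moment centroid coordinate" used by the abnormal-condition checks of S6.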
The abnormal conditions mainly include: the vehicle speed exceeding the speed specified for the construction site; a non-engineering vehicle driving into a special area of the construction site; and the traveling directions of vehicles and pedestrians passing through the intersection forming a visual blind area, so that there is a possibility of collision between vehicles or between a vehicle and a pedestrian;
the special area of the construction site refers to a road section which is forbidden to be driven by a non-engineering vehicle specified by construction site management;
the early warning module generates corresponding warning information according to an early warning signal output by the industrial personal computer, and comprises the following conditions: (1) Displaying a monitoring image of the overspeed vehicle on information terminal display equipment and sending out early warning voice; (2) Displaying a monitoring image of a non-engineering vehicle driving into a special area of a construction site on information terminal display equipment and sending out early warning voice; (3) The scene images of the vehicles and pedestrians and the scene images of the vehicles and the pedestrians when collision possibility exists are displayed on the information terminal equipment, early warning voice is sent out, and prompt slogans are displayed on the LED screen.
Fig. 2 is a schematic diagram of the installation of the LED information screen of the warning module in the embodiment of the present invention. As shown in the road scene of fig. 1, a visual blind area arises between the traveling directions of a vehicle and a pedestrian because of a building in the construction section; at this point the warning module outputs warning information to the LED display screen to remind the vehicle and the pedestrian to take care.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (8)

1. A computer-vision-based intersection traffic early warning method, characterized by comprising the following steps:
s1, acquiring a video image of an intersection in real time, and capturing a close-up image of a vehicle;
s2, extracting the moving target from the video image, generating traveling information of the moving target, and extracting complete license plate information of the vehicle from the close-up image;
s3, classifying the moving target to obtain a classification result;
s4, calculating the traveling speed of the vehicle according to the classification result and the traveling information;
s5, storing vehicle information passing through the intersection, and predicting the centroid coordinate of the position of the moving object at the next moment according to the traveling speed and the traveling information of the vehicle;
s6, when an abnormal condition is met, generating an early warning signal and displaying early warning information according to the traveling information, the traveling speed, and the predicted centroid coordinate of the moving target at the next moment;
wherein the classification result comprises engineering vehicles, non-engineering vehicles and pedestrians, and S3 specifically comprises the following steps:
s3.1, extracting the contour of the moving target, drawing a circumscribed rectangle around the contour, and preliminarily distinguishing vehicle targets from pedestrian targets according to the aspect ratio of the circumscribed rectangle of the moving target;
if the aspect ratio of the circumscribed rectangle of the moving object is smaller than a set empirical threshold, the moving object at the moment is considered as a pedestrian;
if the aspect ratio of the circumscribed rectangle of the moving target is larger than the set empirical threshold, the moving target is considered to be an engineering vehicle, a non-engineering vehicle, or an adhered pedestrian target formed by a plurality of adhered pedestrians;
s3.2, establishing a classifier model using a support vector machine algorithm on the directional gradient histogram features of the moving target, so as to distinguish engineering vehicles, non-engineering vehicles and adhered pedestrian targets;
wherein in S4 the traveling speed of the vehicle is calculated according to the classification result and the traveling information, specifically comprising the following steps:
s4.1, selecting vehicle targets as the objects of speed calculation according to the classification result; the traveling speed of pedestrians is not calculated;
s4.2: in the video image, two virtual detection lines are arranged perpendicular to the traveling direction of the vehicle target; the distance ΔD between the two virtual detection lines on the corresponding actual road is then measured; and the number of frames F elapsed between the vehicle reaching the first virtual detection line and reaching the second is counted in the real-time video image;
s4.3: according to the sampling frequency f of the video image, the distance ΔD between the two virtual detection lines on the actual road, and the frame count F, the traveling speed V of the vehicle is calculated as:
V = ΔD · f / F
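As an illustrative, non-claimed sketch, the speed computation of steps S4.2–S4.3 can be expressed as follows; the function name and units are ours, and the symbol clash in the original text (sampling frequency and frame count both written F) is resolved by writing the sampling frequency as f:

```python
def vehicle_speed(delta_d_m, sample_rate_hz, frame_count):
    """Traveling speed V = ΔD · f / F.

    delta_d_m      -- distance ΔD between the two virtual detection
                      lines measured on the actual road (metres)
    sample_rate_hz -- video sampling frequency f (frames per second)
    frame_count    -- frames F between the vehicle crossing the first
                      and the second virtual detection line
    """
    if frame_count <= 0:
        raise ValueError("the vehicle must cross both detection lines")
    # the vehicle needs F / f seconds to cover ΔD metres
    return delta_d_m * sample_rate_hz / frame_count

# a vehicle covering 10 m in 15 frames of a 25 fps video
v = vehicle_speed(10.0, 25.0, 15)
print(round(v * 3.6, 1))  # 60.0 (km/h)
```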
2. the intersection traffic warning method according to claim 1, wherein the moving objects include vehicles and pedestrians, and the travel information includes a time period for the vehicles or the pedestrians to pass through the intersection, coordinates of the center of mass of the vehicles or the pedestrians, and a travel direction.
3. The intersection traffic early warning method according to claim 1, wherein the extracting of the moving object from the video image specifically comprises the steps of:
s2.1, establishing a Gaussian mixture model for the video image of the intersection scene, extracting the foreground target, and generating a binary foreground image of the moving target;
s2.2, counting the number of pixel points of each moving target and their image coordinates, and calculating the image coordinates of the centroid of the moving target:

x̄ = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} x_ij

ȳ = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} y_ij

wherein M and N respectively denote the maximum width and height of the moving target in the image, i denotes the abscissa index of a pixel point in the image, j denotes the ordinate index, x_ij denotes the abscissa value of a pixel point in the moving target, and y_ij denotes its ordinate value;
and S2.3, calculating the motion vector of the moving target from the image coordinates of its centroid in 20 adjacent frames, determining the traveling direction of the moving target from the motion vector, and uniformly dividing the traveling directions according to the layout of the particular intersection.
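A minimal sketch of the centroid and traveling-direction computation of S2.2–S2.3 follows; the foreground mask would in practice come from the Gaussian mixture background model of S2.1 (e.g. OpenCV's createBackgroundSubtractorMOG2), a hand-made binary mask stands in here, and the four-way direction quantisation is a simplifying assumption:

```python
import numpy as np

def centroid(mask):
    """Image coordinates (x̄, ȳ) of the centroid of a binary
    foreground mask (step S2.2); None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def travel_direction(c_old, c_new):
    """Traveling direction from the motion vector between two centroid
    positions (step S2.3), quantised here to four directions as a
    simplification; image y grows downward."""
    dx, dy = c_new[0] - c_old[0], c_new[1] - c_old[1]
    if abs(dx) >= abs(dy):
        return "east" if dx > 0 else "west"
    return "south" if dy > 0 else "north"

mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:4, 1:4] = 1                               # a 2x3 foreground blob
print(centroid(mask))                            # (2.0, 2.5)
print(travel_direction((2.0, 2.5), (5.0, 2.5)))  # east
```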
4. The intersection traffic early warning method according to claim 1, wherein S3.2 specifically is:
s3.2.1 collecting images of engineering vehicles, non-engineering vehicles and adhered pedestrians, unifying the sizes of the images, respectively establishing sample sets, and setting the engineering vehicle sample set as a set A, the non-engineering vehicle sample set as a set B and the adhered pedestrian sample set as a set C;
s3.2.2, taking set A and set B of S3.2.1 as the positive set and set C as the negative set, respectively extracting the directional gradient histogram features of the positive and negative sets as the input of the support vector machine algorithm, and generating a first classifier model, wherein the first classifier model performs binary classification between vehicles and adhered pedestrians;
s3.2.3, taking set A of S3.2.1 as the positive set and set B as the negative set, respectively extracting the directional gradient histogram features of the positive and negative sets as the input of the support vector machine algorithm, and generating a second classifier model, wherein the second classifier model performs binary classification between engineering vehicles and non-engineering vehicles;
s3.2.4 extracting corresponding original RGB images of the moving objects according to the extracted circumscribed rectangles of the moving objects, and extracting directional gradient histogram features of the original RGB images of the moving objects;
s3.2.5, sequentially taking the directional gradient histogram features of the original RGB image of each moving target as the input of the first classifier model; if the output is positive, the moving target is considered to be a vehicle, and if the output is negative, it is considered to be an adhered pedestrian target;
s3.2.6, taking the directional gradient histogram features of the moving targets classified as vehicles in S3.2.5 as the input of the second classifier model; if the output is positive, the moving target is considered to be an engineering vehicle, and if the output is negative, a non-engineering vehicle.
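The two-stage cascade of S3.2.5–S3.2.6 can be sketched as below; the decision functions stand in for trained support vector machines over directional gradient histogram features (in a real system e.g. cv2.HOGDescriptor plus a linear SVM), and all names and thresholds are illustrative:

```python
def classify(features, vehicle_vs_pedestrian, engineering_vs_other):
    """Two-stage cascade of S3.2.5-S3.2.6: the first decision function
    separates vehicles (positive) from adhered-pedestrian targets
    (negative); the second splits vehicles into engineering (positive)
    and non-engineering (negative)."""
    if vehicle_vs_pedestrian(features) <= 0:
        return "adhered pedestrians"
    if engineering_vs_other(features) > 0:
        return "engineering vehicle"
    return "non-engineering vehicle"

# stand-in linear decision functions; trained SVMs over directional
# gradient histogram features would supply these in the real system
f1 = lambda x: x[0] - 0.5   # crude "vehicle-ness" score
f2 = lambda x: x[1] - 0.5   # crude "engineering-ness" score

print(classify([0.9, 0.8], f1, f2))  # engineering vehicle
print(classify([0.9, 0.1], f1, f2))  # non-engineering vehicle
print(classify([0.2, 0.9], f1, f2))  # adhered pedestrians
```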
5. The intersection traffic early warning method according to claim 1, wherein in S5 a Kalman filter algorithm is used to predict the centroid coordinate of the position of the moving target at the next moment.
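One plausible reading of claim 5 is a constant-velocity Kalman filter over the centroid; the sketch below runs one predict/correct cycle and then projects the centroid one frame ahead (the noise covariances q and r and the unit time step are illustrative assumptions):

```python
import numpy as np

# constant-velocity model: state [x, y, vx, vy], one video frame per step
dt = 1.0
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

def kalman_step(x, P, z, q=1e-2, r=1.0):
    """One predict/correct cycle; returns the corrected state, its
    covariance, and the predicted centroid one frame ahead."""
    # predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + q * np.eye(4)
    # correct with the measured centroid z = (cx, cy)
    innov = np.asarray(z, float) - H @ x_pred
    S = H @ P_pred @ H.T + r * np.eye(2)
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ innov
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new, (A @ x_new)[:2]

x = np.array([0.0, 0.0, 2.0, 1.0])   # 2 px/frame right, 1 px/frame down
P = np.eye(4)
x, P, nxt = kalman_step(x, P, z=(2.0, 1.0))  # measurement on the true path
print(nxt)  # [4. 2.]
```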
6. The intersection traffic warning method according to claim 1, wherein the abnormal condition includes the following:
generating early warning signal when vehicle running speed exceeds speed value specified by construction site
Judging whether the vehicle is located in a special area of a construction site or enters the special area according to the position coordinates of the vehicle and the position coordinates of the predicted vehicle at the next moment, and generating an early warning signal, wherein the special area refers to a road section which is specified by construction site management and is forbidden to enter by a non-engineering vehicle;
and judging whether the traffic of the vehicles and the pedestrians and the vehicles and the pedestrians is a blind visual area or not according to the traveling information of the vehicles and the pedestrians so as to generate an early warning signal.
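The first two abnormal conditions of claim 6 can be sketched as follows, with a rectangle standing in for the site-defined special area (all names, the region shape, and the signal strings are illustrative assumptions):

```python
def warning_signals(speed_mps, speed_limit_mps, vehicle_class,
                    pos, predicted_pos, special_area):
    """Evaluate the first two abnormal conditions of claim 6.

    special_area -- (x_min, y_min, x_max, y_max) rectangle standing in
                    for the road section forbidden to non-engineering
                    vehicles (the real region is site-defined)
    """
    def inside(p):
        x, y = p
        x0, y0, x1, y1 = special_area
        return x0 <= x <= x1 and y0 <= y <= y1

    signals = []
    if speed_mps > speed_limit_mps:
        signals.append("overspeed")
    # the vehicle is in, or predicted to enter, the special area
    if vehicle_class == "non-engineering" and (inside(pos) or inside(predicted_pos)):
        signals.append("special-area entry")
    return signals

print(warning_signals(9.0, 8.0, "non-engineering",
                      pos=(5, 5), predicted_pos=(12, 5),
                      special_area=(10, 0, 20, 10)))
# ['overspeed', 'special-area entry']
```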
7. The intersection traffic early warning method according to claim 1, wherein the early warning information comprises: displaying a monitoring image of an overspeeding vehicle and playing an early warning voice; displaying a monitoring image of a non-engineering vehicle entering a special area of the construction site and playing an early warning voice; and, when there is a possibility of collision between vehicles and pedestrians or between vehicles, displaying the scene image, playing an early warning voice, and displaying a prompt slogan.
8. The intersection traffic early warning method according to claim 1, characterized in that a panoramic camera captures the video images of the intersection and a close-up camera captures the close-up images of the vehicles.
CN201610735587.XA 2016-08-26 2016-08-26 Intersection traffic early warning method based on computer vision Active CN106373430B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610735587.XA CN106373430B (en) 2016-08-26 2016-08-26 Intersection traffic early warning method based on computer vision


Publications (2)

Publication Number Publication Date
CN106373430A CN106373430A (en) 2017-02-01
CN106373430B true CN106373430B (en) 2023-03-31

Family

ID=57904183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610735587.XA Active CN106373430B (en) 2016-08-26 2016-08-26 Intersection traffic early warning method based on computer vision

Country Status (1)

Country Link
CN (1) CN106373430B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108200552B (en) * 2017-12-14 2020-08-25 华为技术有限公司 V2X communication method and device
US10950130B2 (en) 2018-03-19 2021-03-16 Derq Inc. Early warning and collision avoidance
CN108492567A (en) * 2018-04-24 2018-09-04 汪宇明 Monitor terminal, road traffic method for early warning and system
CN110610118A (en) * 2018-06-15 2019-12-24 杭州海康威视数字技术股份有限公司 Traffic parameter acquisition method and device
CN108877269B (en) * 2018-08-20 2020-10-27 清华大学 Intersection vehicle state detection and V2X broadcasting method
CN109147326A (en) * 2018-09-06 2019-01-04 北京理工大学 A kind of Campus transport safety warning system
CN109191852B (en) * 2018-10-25 2021-07-06 西北工业大学 Vehicle-road-cloud cooperative traffic flow situation prediction method
CN111260928B (en) * 2018-11-30 2021-07-20 浙江宇视科技有限公司 Method and device for detecting pedestrian without giving way to vehicle
CN109830123B (en) * 2019-03-22 2022-01-14 大陆投资(中国)有限公司 Crossing collision early warning method and system
CN112216097A (en) * 2019-07-09 2021-01-12 华为技术有限公司 Method and device for detecting blind area of vehicle
CN110443161B (en) * 2019-07-19 2023-08-29 宁波工程学院 Monitoring method based on artificial intelligence in banking scene
CN114586082A (en) 2019-08-29 2022-06-03 德尔克股份有限公司 Enhanced on-board equipment
CN111462501B (en) * 2020-05-21 2021-08-17 山东师范大学 Super-view area passing system based on 5G network and implementation method thereof
CN113793514A (en) * 2021-08-30 2021-12-14 中冶南方城市建设工程技术有限公司 Traffic safety warning system for entrances and exits of surrounding plots of construction roads
CN115240471B (en) * 2022-08-09 2024-03-01 东揽(南京)智能科技有限公司 Intelligent factory collision avoidance early warning method and system based on image acquisition
CN115273479B (en) * 2022-09-19 2023-03-31 深圳市博科思智能股份有限公司 Operation and maintenance management method, device and equipment based on image processing and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5590217A (en) * 1991-04-08 1996-12-31 Matsushita Electric Industrial Co., Ltd. Vehicle activity measuring apparatus
CN101179710A (en) * 2007-11-30 2008-05-14 浙江工业大学 Intelligent video monitoring apparatus of railway crossing
CN101388145A (en) * 2008-11-06 2009-03-18 北京汇大通业科技有限公司 Auto alarming method and device for traffic safety
CN103345840A (en) * 2013-05-28 2013-10-09 南京正保通信网络技术有限公司 Video detection method of road crossing event at cross road
CN103971521A (en) * 2014-05-19 2014-08-06 清华大学 Method and device for detecting road traffic abnormal events in real time
CN104050684A (en) * 2014-05-27 2014-09-17 华中科技大学 Video moving object classification method and system based on on-line training


Also Published As

Publication number Publication date
CN106373430A (en) 2017-02-01

Similar Documents

Publication Publication Date Title
CN106373430B (en) Intersection traffic early warning method based on computer vision
US8284996B2 (en) Multiple object speed tracking system
CN105825696B (en) Drive assist system based on signal information prompting
CN105825185B (en) Vehicle collision avoidance method for early warning and device
Yu et al. Traffic light detection during day and night conditions by a camera
CN111800507A (en) Traffic monitoring method and traffic monitoring system
CN111833598B (en) Automatic traffic incident monitoring method and system for unmanned aerial vehicle on highway
CN103279756A (en) Vehicle detecting analysis system and detecting analysis method thereof based on integrated classifier
CN110619279A (en) Road traffic sign instance segmentation method based on tracking
CN113593250A (en) Illegal parking detection system based on visual identification
CN110929676A (en) Deep learning-based real-time detection method for illegal turning around
CN114898296A (en) Bus lane occupation detection method based on millimeter wave radar and vision fusion
CN113903008A (en) Ramp exit vehicle violation identification method based on deep learning and trajectory tracking
KR102200204B1 (en) 3-D Image Analyzing System Using CCTV Image
CN116434159A (en) Traffic flow statistics method based on improved YOLO V7 and Deep-Sort
Lin et al. Airborne moving vehicle detection for urban traffic surveillance
CN110210324B (en) Road target rapid detection early warning method and system
CN103942541A (en) Electric vehicle automatic detection method based on vehicle-mounted vision within blind zone
Ardestani et al. Signal timing detection based on spatial–temporal map generated from CCTV surveillance video
Abdagic et al. Counting traffic using optical flow algorithm on video footage of a complex crossroad
Hermawati et al. A real-time license plate detection system for parking access
US20230126957A1 (en) Systems and methods for determining fault for a vehicle accident
KR102492290B1 (en) Drone image analysis system based on deep learning for traffic measurement
Lin et al. Adaptive speed bump with vehicle identification for intelligent traffic flow control
Yu et al. A traffic light detection method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant