CN113140110B - Intelligent traffic control method, lighting device and monitoring device - Google Patents
- Publication number
- CN113140110B (application number CN202110450307.1A)
- Authority
- CN
- China
- Prior art keywords
- traffic
- image
- pixel
- current
- pixel point
- Prior art date
- Legal status: Active
Classifications
- G08G1/0133 — Traffic control systems for road vehicles; traffic data processing for classifying traffic situation
- G06F18/22 — Pattern recognition; matching criteria, e.g. proximity measures
- G06F18/23 — Pattern recognition; clustering techniques
- G06F18/25 — Pattern recognition; fusion techniques
- G06V10/28 — Image preprocessing; quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
- G06V20/41 — Scenes and scene-specific elements in video content; higher-level, semantic clustering, classification or understanding of video scenes
- G08G1/09 — Traffic control systems for road vehicles; arrangements for giving variable traffic instructions
- G06V2201/07 — Indexing scheme relating to image or video recognition or understanding; target detection
- Y02T10/40 — Climate change mitigation technologies related to transportation; engine management systems
Abstract
The invention discloses an intelligent traffic control method, a light-emitting device and a monitoring device. The method comprises the following steps: acquiring vehicle images and pedestrian images from real-time traffic information through a target detection algorithm; obtaining a road image through multi-frame fusion, and obtaining a spatial occupancy parameter from the ratio of the pixel points of the pedestrian and vehicle images to the pixel points of the road image; counting the traffic flow and pedestrian flow of the road in unit time to obtain a traffic flow parameter; calculating the global vector motion speed by combining vector motion with corner detection; identifying the traffic state parameter of the current traffic with a clustering discrimination model; and issuing a traffic state indication. The invention also provides a light-emitting device and a monitoring device for executing the method. The method can effectively guide traffic groups in advance to divert traffic, improving the capacity and efficiency of roads.
Description
This application is a divisional application of invention patent application No. 202010367593.0, filed on April 30, 2020 and entitled "Intelligent traffic indication lighting device, monitoring device, system and method".
Technical Field
The invention relates to the field of intelligent traffic, in particular to an intelligent traffic control method, a light-emitting device and a monitoring device.
Background
In recent years, China's economy has developed rapidly, towns have continuously expanded, road mileage and the traffic network have kept growing, and the number of vehicles has increased sharply, greatly facilitating people's lives. The rapid development of urban traffic brings a rapid increase in traffic flow, and congestion during rush hours and holidays has become a common phenomenon. To solve this problem, traffic flow needs to be pre-warned, managed and even guided, and the basis for achieving these goals is judging the road traffic congestion condition.
For urban traffic, overall quality basically depends on whether intersections operate effectively: the intersection is the most important collecting and distributing point in urban traffic and a major cause of congestion in many cities. Taking more effective measures to reasonably analyze and control intersections is therefore critical; solving this problem would relieve traffic congestion to a great extent.
In addition, to improve current road conditions and strengthen the monitoring and management of road traffic, road monitoring cameras have been installed on a large scale in many cities. Traffic management departments and citizens can promptly and intuitively obtain real-time video of the current road. The acquired real-time video contains a large amount of traffic information, which greatly facilitates judging the current road congestion condition. At the same time, road monitoring video systems inevitably suffer from inaccurate acquired data and data loss, mainly caused by system faults or insufficient detection precision, so repairing lost traffic flow data is of great practical significance.
Disclosure of Invention
The invention aims to solve the above problems by providing an intelligent traffic control method, a light-emitting device and a monitoring device. The intelligent traffic control method can accurately and rapidly analyze the traffic state of the current road; the intelligent traffic indication lighting device can display the congestion state of each intersection in real time; and the intelligent traffic indication system is easy to install and makes automatic, intelligent judgments. The road congestion judging method greatly improves the accuracy of current road monitoring systems and provides an effective congestion state assessment.
In order to achieve the above purpose, the present invention provides the following technical solutions:
The invention provides an intelligent traffic control method, which comprises the following steps:
S1: acquiring real-time traffic information;
S2: according to the traffic information, obtaining a vehicle image and a pedestrian image in the traffic information through a target detection algorithm; obtaining a road image through multi-frame fusion, and obtaining a spatial occupancy parameter from the ratio of the pixel points of the pedestrian image and the vehicle image to the pixel points of the road image;
S3: according to the spatial occupancy parameter and the target detection algorithm, combined with a virtual detection coil, counting the road traffic flow and pedestrian flow in unit time to obtain a traffic flow parameter;
S4: obtaining key feature points of moving vehicles and pedestrians according to the traffic flow parameters by combining vector motion with corner detection, and obtaining the global vector motion speed;
S5: establishing a clustering discrimination model from the global vector motion speed to obtain traffic state parameters;
S6: obtaining a road-section indication signal, comprising the traffic groups and the traffic state of each traffic group, according to the traffic state parameters;
wherein the target detection algorithm detects the vehicle images or pedestrian images currently moving in the traffic video sequence; the spatial occupancy parameter is the ratio of the width of a vehicle or pedestrian to the width of the road where it is located; a virtual detection coil is arranged in the road monitoring video system, perpendicular to the lane and close to the camera, and the vehicle images passing through the detection coil are counted by the target detection algorithm; the global vector motion speed is the average motion speed, in the horizontal and vertical directions, of the targets represented by the target vectors; and the clustering discrimination model outputs the current traffic state, which comprises at least one of: congested, smooth or slow.
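For illustration, the following is a minimal Python sketch of how the S1–S6 loop could be wired together. The function names, the fixed-threshold placeholder detector and the parameter values are assumptions made for this sketch, not the patent's reference implementation; the individual steps are detailed in the sub-step sketches further below.

```python
import numpy as np

def detect_targets(frame):
    # S2 placeholder: a fixed-threshold stand-in for the background-model
    # detector described in S211-S218.
    return (frame > 128).astype(np.uint8)

def spatial_occupancy(masks, road_pixels):
    # S2: c = mean over frames of (target pixels / road pixels).
    return float(np.mean([m.sum() / road_pixels for m in masks]))

def count_with_virtual_coil(masks, coil_row, a_threshold=50):
    # S3: rising-edge count of virtual-coil activations (see S33-S37).
    n_vehicle, prev = 0, 0
    for m in masks:
        cur = 1 if int(m[coil_row, :].sum()) > a_threshold else 0
        if cur == 1 and prev == 0:
            n_vehicle += 1
        prev = cur
    return n_vehicle

def classify(c):
    # S5: the direct occupancy thresholds of S52; the clustering fallback
    # of S53-S58 is sketched separately below.
    return "congested" if c > 0.8 else "smooth" if c < 0.1 else "slow"

frames = [np.random.randint(0, 256, (120, 160), dtype=np.uint8) for _ in range(40)]
masks = [detect_targets(f) for f in frames]          # S1-S2
c = spatial_occupancy(masks, road_pixels=120 * 160)  # S2
d = count_with_virtual_coil(masks, coil_row=100)     # S3
print(c, d, classify(c))                             # S5-S6
```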
Preferably, said step S2 comprises the following sub-steps:
S21: converting the vehicle image and the pedestrian image into multi-frame binary images through the target detection algorithm, fusing the binary images with the motion trajectories, and performing denoising, hole filling and opening/closing operations to obtain a complete road image;
S22: obtaining the spatial occupancy parameter by the formula c = (1/N) · Σ_{i=1}^{N} (A_vehicle_i / A_road_i), wherein c is the spatial occupancy parameter, N is the number of video frames in unit time T, A_vehicle_i is the number of pixels of the vehicle image or pedestrian image in the i-th frame, and A_road_i is the number of pixels of the lane image in the i-th frame.
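As a check on the S22 formula, here is a direct one-to-one Python transcription; the per-frame pixel counts A_vehicle_i and A_road_i are assumed to have been produced already by the target detection of S21:

```python
def spatial_occupancy(a_vehicle, a_road):
    # c = (1/N) * sum_i (A_vehicle_i / A_road_i) over the N frames in unit time T
    assert len(a_vehicle) == len(a_road)
    n = len(a_vehicle)
    return sum(v / r for v, r in zip(a_vehicle, a_road)) / n

# Example with 4 frames; in practice N = T * f (Embodiment 4 uses N = 40).
print(spatial_occupancy([1200, 1350, 900, 1100], [9000, 9000, 9000, 9000]))
```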
Preferably, the target detection algorithm in S21 includes:
S211: background modeling from the 1st frame image of the video sequence:
a background sample set P{x} containing M gray values is created for each pixel point of the image:
P{x} = {x_1, x_2, x_3, ..., x_M},
where x_i is the i-th background gray value in the sample set; each background gray value x_i in the background sample set P{x} corresponding to a pixel point is randomly drawn from the gray values of the current pixel point and of the 8 pixel points in its neighborhood; this random drawing is repeated M times, completing the initialization of the background sample set corresponding to the current pixel point;
S212: foreground detection:
according to the i-th frame image (i > 1) of the video sequence, the similarity between the current pixel point and its corresponding background sample set P{x} is measured: define the sphere space S_R(x) centered at the current gray value x with radius R, and let C_# be the number of samples in the intersection of the sphere space S_R(x) with the background sample set P{x}:
C_# = |S_R(x) ∩ P{x}|;
S213: preset an intersection threshold C_#min; when C_# > C_#min, the current pixel point is judged to be a background point, otherwise it is judged to be a foreground point;
S214: calculate the optimal segmentation threshold of the i-th frame image;
S215: secondary discrimination:
randomly select K_random pixel points from the current image and calculate the gray-scale average value of the K_random pixel points;
S216: perform an OR operation on the background pixel points determined in steps S213 and S215 to obtain an accurate foreground target image;
S217: binarize the foreground target image obtained in step S216;
S218: fill the holes of the binarized foreground target image to obtain the foreground image.
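A compressed Python sketch of the S211–S213 background model follows, in the style of sample-based detectors such as ViBe. The vectorized neighborhood sampling, the parameter values M, R and C_#min, and the omission of the secondary discrimination of S215 are simplifying assumptions of this sketch:

```python
import numpy as np

M, R, C_MIN = 20, 20, 2   # sample count, sphere radius, intersection threshold C_#min

def init_background(first_frame):
    # S211: for each pixel, draw M gray values at random from the pixel
    # itself and its 8-neighborhood (edge pixels reuse their border values).
    h, w = first_frame.shape
    padded = np.pad(first_frame, 1, mode="edge")
    samples = np.empty((M, h, w), dtype=first_frame.dtype)
    for m in range(M):
        dy, dx = np.random.randint(0, 3, size=2)   # offset within the 3x3 window
        samples[m] = padded[dy:dy + h, dx:dx + w]
    return samples

def foreground_mask(frame, samples):
    # S212-S213: count samples inside the sphere S_R(x) around the current
    # gray value; fewer than C_MIN hits means the pixel is foreground.
    close = np.abs(samples.astype(int) - frame.astype(int)) < R
    return (close.sum(axis=0) < C_MIN).astype(np.uint8)   # 1 = foreground

first = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
bg = init_background(first)
print(foreground_mask(first, bg).mean())   # mostly background for the same frame
```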
Preferably, step S214 includes:
S2141: assume the number of gray levels of the current video image is L, so the corresponding gray range is [0, L−1]; the number of pixel points of the whole video frame is K, and the number of pixel points with gray level i is k_i. The probability that a pixel point has gray level i is then P_i = k_i / K.
The foreground region probability ω_0 is ω_0 = Σ_{i=0}^{L_0} P_i, and the gray mean of the foreground region is μ_0 = Σ_{i=0}^{L_0} i·P_i / ω_0;
the background region probability ω_1 is ω_1 = Σ_{i=L_0+1}^{L−1} P_i, and the gray mean of the background region is μ_1 = Σ_{i=L_0+1}^{L−1} i·P_i / ω_1;
where L_0 is the segmentation threshold between foreground and background, the gray range of the foreground region is [0, L_0], the gray range of the background region is [L_0+1, L−1], and ω_0 + ω_1 = 1;
S2143: calculate the between-class variance σ² of the foreground region and the background region:
σ² = ω_0 · ω_1 · (μ_0 − μ_1)²;
The larger the value of the between-class variance σ², the greater the difference between the two regions and the better foreground and background can be distinguished; to achieve the optimal segmentation effect, it suffices to find its maximum, and the corresponding gray value is the optimal threshold.
S2144: determine the optimal segmentation threshold: traverse L_0 over [0, L−1]; when σ² reaches its maximum value, the corresponding L_0 is the optimal segmentation threshold L_0*.
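S2141–S2144 restate Otsu's method; a compact Python sketch, assuming an 8-bit grayscale image held as a NumPy array, follows:

```python
import numpy as np

def otsu_threshold(img, levels=256):
    hist = np.bincount(img.ravel(), minlength=levels)
    p = hist / hist.sum()                      # P_i = k_i / K
    best_t, best_var = 0, 0.0
    for t in range(levels - 1):                # traverse L_0 over [0, L-1]
        w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()   # omega_0, omega_1
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t + 1) * p[:t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, levels) * p[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2       # sigma^2 = w0*w1*(mu0 - mu1)^2
        if var > best_var:
            best_t, best_var = t, var
    return best_t                              # the optimal threshold L_0*

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(otsu_threshold(img))
```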
Preferably, step S218 includes:
S2181: establish an integer marking matrix D corresponding to all pixel points in the foreground target image, initialize all its elements to 0, and establish an all-zero linear sequence G for storing the seed point and the points in the connected domain;
S2182: scan the pixels of the binarized whole-frame image line by line and find the first pixel with gray value 255 appearing in the whole frame, taking it as the initial pixel point S of the moving target region to be processed;
S2183: use the initial pixel point S obtained by the previous scan as the growing seed and perform region growing to complete the search of a connected region; the initial pixel point S cannot be on the edge of the detected target, otherwise it is replaced by a non-edge pixel point in its eight-neighborhood; store the initial pixel point S in the linear sequence G, and set the value at its corresponding position in the integer marking matrix D to 1;
S2184: scan the value of each pixel point of the linear sequence G; if a value of 0 exists in the eight-neighborhood of a pixel point of the linear sequence G, set the corresponding position in the integer marking matrix D to 2, determining the peripheral contour of the current region;
S2185: search for the j-th eight-neighborhood pixel point S_j with mark value 2 corresponding to pixel point S on the peripheral contour of the target region; use S_j to update the linear sequence G and clear the other values; perform region growing with S_j as the seed, where the growing rule is: take pixel point S_j out of the linear sequence G, scan its four-neighborhood pixel points S_i (i = 1, 2, 3, 4), and look up the gray values of the corresponding eight-neighborhood pixel points, where L_8 denotes the value of pixel point S_i at the corresponding position in the integer marking matrix D.
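The net effect of S2181–S2185 is to fill the interior holes of each connected foreground region. A simplified Python sketch follows; instead of the patent's marking matrix D and contour re-seeding, it flood-fills the background from the image border and treats everything not reached as (filled) foreground, which is an assumption about the intended result rather than a transcription of the exact procedure:

```python
from collections import deque
import numpy as np

def fill_holes(binary):
    # binary: uint8 mask with foreground = 255, background = 0.
    h, w = binary.shape
    outside = np.zeros((h, w), dtype=bool)
    seeds = [(y, x) for y in range(h) for x in (0, w - 1)]
    seeds += [(y, x) for y in (0, h - 1) for x in range(w)]
    queue = deque(seeds)
    while queue:                               # 4-neighborhood region growing
        y, x = queue.popleft()
        if 0 <= y < h and 0 <= x < w and not outside[y, x] and binary[y, x] == 0:
            outside[y, x] = True
            queue.extend(((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)))
    return np.where(outside, 0, 255).astype(np.uint8)

mask = np.zeros((9, 9), dtype=np.uint8)
mask[2:7, 2:7] = 255
mask[4, 4] = 0                      # an interior hole
print(fill_holes(mask)[4, 4])       # -> 255: the hole is filled
```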
Preferably, step S3 includes:
S31: set a virtual detection coil perpendicular to the road in the road monitoring system, and count the vehicle images and/or pedestrian images passing through the virtual detection coil using the target detection algorithm;
S32: data initialization: determine the unit time T and obtain the video frame number N = T × f, where f denotes the video frame rate; the vehicle and/or pedestrian count N_vehicle has initial value 0; the decision result J_vehicle_i on whether a vehicle and/or pedestrian is present in the i-th frame has initial value 0; i = 0;
S33: calculate the decision result J_vehicle_i for the vehicle and/or pedestrian in the current virtual detection coil in the i-th frame:
J_vehicle_i = 1 if A_refresh_i > A_threshold, and J_vehicle_i = 0 otherwise,
where A_refresh_i is the number of updated pixel points in the detection coil area of the i-th frame and A_threshold is the updated-pixel threshold;
S34: if J_vehicle_i = 0 for the i-th frame, do not count: N_vehicle = N_vehicle, and proceed to step S37;
S35: if J_vehicle_i = 1 for the i-th frame and J_vehicle_{i−1} = 0 for the (i−1)-th frame, count: N_vehicle = N_vehicle + 1, and proceed to step S37;
S36: if J_vehicle_i = 1 for the i-th frame and J_vehicle_{i−1} = 1 for the (i−1)-th frame, do not count: N_vehicle = N_vehicle, and proceed to step S37;
S37: if i > N, end the detection count, output the vehicle and/or pedestrian count N_vehicle, and proceed to step S38; otherwise set i = i + 1 and return to step S33.
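The S33–S37 loop is an edge-triggered counter: a target spanning several consecutive frames inside the coil is counted exactly once, on the 0→1 transition of J_vehicle. A small Python sketch, assuming the per-frame updated-pixel counts A_refresh_i are already available:

```python
def count_targets(a_refresh, a_threshold):
    n_vehicle, prev_j = 0, 0
    for refresh in a_refresh:                  # frames i = 1 .. N
        j = 1 if refresh > a_threshold else 0  # S33
        if j == 1 and prev_j == 0:             # S35: rising edge -> count
            n_vehicle += 1
        prev_j = j                             # S34/S36: otherwise keep the count
    return n_vehicle

# One target occupying the coil in frames 3-6 and another in frames 9-10:
print(count_targets([0, 0, 80, 90, 85, 70, 0, 0, 95, 88], a_threshold=50))  # -> 2
```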
preferably, the S4 includes:
s41: the motion characteristic points are obtained specifically as follows:
s411: selecting pixel points at (x, y), and calculating x and y direction movement speeds at (x, y) as follows:
wherein u is i And u i-1 The x-direction movement speeds of the ith frame and the i-1 th frame are respectively v i And v i-1 The motion speeds in the y direction of the ith frame and the ith-1 frame are respectively I x I is the change rate of the gray scale of the image along the x direction y I is the change rate of the gray scale of the image along with the y direction t The change rate of the gray scale of the image along with time t is shown, and lambda is Lagrangian constant;
s412: if it isAnd i is less than or equal to N_iteration, i=i+1, returning to step S411 to continue iterating the current pixel point, wherein G_threshold is a difference threshold, and N_iteration is an iteration number threshold;
s413: if it isAnd i is less than or equal to N_iteration, selecting the current (x, y) as a motion characteristic point, ending iteration, returning to the step S411, selecting other pixel points for calculation, and judging whether the current (x, y) is the motion characteristic point;
S414: if i > n_iteration, the current (x, y) is not the motion feature point, the iteration is ended, i=0, the step S411 is returned, other pixel points are selected for calculation, and whether the motion feature point is determined;
s415: repeating the steps S411 to S414 until all the motion feature points are acquired;
s42: detecting local extreme points by adopting angular point detection;
s421: processing pixel points in the image, calculating horizontal and vertical gradients, and calculating the product of the horizontal and vertical gradients;
s422: filtering the image by adopting a Gaussian filter and smoothing noise interference;
s423: calculating an interest value for each pixel point in the image;
s424: repeating the steps S421 to S423 until all the local extreme points are obtained;
s43: according to the motion feature points and the local extreme points, overlapping pixel points are obtained to form key feature points (x key ,y key );
S44: the global vector motion velocity is calculated and,
s441: the direction of motion of the principal vector is determined,
s442: according to the key feature points (x key ,y key ) And the main vector motion direction, and obtaining a feature point (x 'of the main vector motion direction' key ,y' key );
S443: the global vector motion speed e is calculated and,
wherein e is the global vector motion speed, And->The average value of vector motion speeds in the horizontal direction and the vertical direction respectively, N_key is the total number of feature points in the motion direction of the main vector, j is the number of feature points in the motion direction of the main vector, and u j (x' key ,y' key ) And v j (x' key ,y' key ) Is (x' key ,y' key ) At x and y direction movement speeds.
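A minimal Python sketch of S43–S44 follows: the key feature points are assumed to be the already-computed intersection of the optical-flow feature points and the corner points, and a simple angle test (an assumption of this sketch) keeps the points aligned with the main vector motion direction before averaging:

```python
import math

def global_vector_speed(velocities, main_angle, tol=math.pi / 6):
    # velocities: (u, v) motion speeds at the key feature points (x_key, y_key).
    aligned = [(u, v) for u, v in velocities
               if abs(math.atan2(v, u) - main_angle) < tol]   # S442
    if not aligned:
        return 0.0
    u_mean = sum(u for u, _ in aligned) / len(aligned)   # horizontal average
    v_mean = sum(v for _, v in aligned) / len(aligned)   # vertical average
    return math.hypot(u_mean, v_mean)                    # S443: e = sqrt(u^2 + v^2)

# Two points moving with the main direction (angle 0) and one outlier:
print(global_vector_speed([(2.0, 0.1), (1.8, -0.2), (0.1, 3.0)], main_angle=0.0))
```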
Preferably, step S5 includes:
S51: from the spatial occupancy parameter c, the traffic flow parameter d and the global vector motion speed e, form the current traffic feature vector V_traffic_current = [c, d, e]^T and the historical traffic feature vector V_traffic_history = [c, d, e]^T;
S52: in the current traffic feature vector V_traffic_current, if c > 0.8, judge congestion and end the judgment; if c < 0.1, judge smooth and end the judgment; otherwise proceed to step S53;
S53: by clustering the historical traffic feature vectors V_traffic_history, obtain the discrimination centers V_traffic_smooth, V_traffic_normal and V_traffic_jam for the three traffic states smooth, slow and congested;
S54: calculate the Euclidean distances of V_traffic_current to V_traffic_smooth, V_traffic_normal and V_traffic_jam, denoted D_current_euler_smooth, D_current_euler_normal and D_current_euler_jam respectively;
S55: calculate the Euclidean distances of V_traffic_history to V_traffic_smooth, V_traffic_normal and V_traffic_jam, denoted D_history_euler_smooth, D_history_euler_normal and D_history_euler_jam respectively;
S56: if D_current_euler_smooth < D_history_euler_smooth, judge smooth and end the judgment; otherwise proceed to step S57;
S57: if D_current_euler_normal < D_history_euler_normal, judge slow and end the judgment; otherwise proceed to step S58;
S58: if D_current_euler_jam < D_history_euler_jam, judge congestion and end the judgment; otherwise return to step S51 to reacquire the current traffic feature vector V_traffic_current.
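A Python sketch of the S51–S58 decision flow follows: the direct occupancy thresholds of S52 are tried first, then the current and historical feature vectors are compared against the cluster centers. The fixed example centers stand in for the clustering result of S53 and are assumptions of this sketch:

```python
import math

def classify(v_current, v_history, centers):
    c = v_current[0]                           # spatial occupancy parameter
    if c > 0.8:                                # S52: direct congestion decision
        return "congested"
    if c < 0.1:                                # S52: direct smooth decision
        return "smooth"
    labels = {"smooth": "smooth", "normal": "slow", "jam": "congested"}
    for state in ("smooth", "normal", "jam"):  # S56 -> S57 -> S58
        if math.dist(v_current, centers[state]) < math.dist(v_history, centers[state]):
            return labels[state]
    return None                                # S58 fallback: reacquire V_traffic_current

centers = {"smooth": [0.15, 5, 8.0], "normal": [0.45, 12, 3.0], "jam": [0.75, 20, 0.5]}
print(classify([0.5, 14, 2.5], [0.4, 10, 4.0], centers))   # -> "slow" here
```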
The beneficial effects are as follows:
1. The intelligent traffic indication lighting device receives the traffic congestion information transmitted from the video monitoring detection system and can display three colors, red, green and yellow, representing congested, smooth and slow, providing drivers with clear road conditions in real time;
2. The luminous signpost is assembled from detachable units processed from aluminum profile; it has a simple structure, reasonable strength, greatly reduced weight and uniform size and specification, which is convenient for large-scale production. The luminous light source of the intelligent traffic indication lighting device adopts a dedicated design with main and auxiliary light sources, ensuring a product service life of more than 5 years and avoiding the non-luminous and uneven-luminance failures caused by circuit and light-source damage that are common on the market. All control units are installed at ground level, so after a luminous sign fails, every problem can be resolved on the ground without disassembly for maintenance;
3. The controller of the intelligent traffic indication lighting device adjusts the current through a background control chip to automatically adjust the brightness of the luminous sign, and can detect brightness automatically at working times set in the background to realize an energy-saving function. A light-transmitting film is attached to the light-emitting part so that, while brightness is ensured, it reflects light passively during a power failure; on the premise of ensuring brightness, the unit power is greatly reduced;
4. The road congestion judging method adopted by the intelligent traffic indicating system effectively judges the traffic congestion condition of the lane and realizes intelligent congestion judgment;
5. The road congestion judging method adopts a target detection algorithm with a secondary-superposition method to achieve accurate segmentation of the foreground image, providing an accurate basis for acquiring the relevant parameters of moving vehicles;
6. The road congestion judging method adopts a vehicle-trajectory fusion algorithm and the target detection algorithm to calculate an accurate spatial occupancy, adopts a virtual detection coil and the target detection algorithm to rapidly acquire the traffic flow parameter, and calculates the global vector motion speed by combining the vector motion method with corner detection, reflecting the overall speed of the traffic flow;
7. Direct judgment based on spatial occupancy is combined with cluster-center judgment, realizing rapid and accurate congestion assessment.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a general assembly structure diagram of an intelligent traffic indication lighting apparatus of the present invention;
FIG. 2 is a block diagram of a cell plate of the present invention;
FIG. 3 is a schematic illustration of a landmark indication area in accordance with the present invention;
FIG. 4 is a block diagram of an installation card slot of the present invention;
FIG. 5 is a schematic illustration of an intermediate traffic indication zone of the present invention;
FIG. 6 is a block diagram of an inner seal plate of the present invention;
FIG. 7 is a block diagram of an LED light bar of the present invention;
fig. 8 is a structural view of a light guide plate of the present invention;
FIG. 9 is a schematic diagram of a display of an intelligent traffic indication lighting apparatus of the present invention;
FIG. 10 is a schematic diagram of the controller composition of the present invention;
FIG. 11 is a flow chart of a road congestion determination method of the present invention;
FIG. 12 is a flow chart of the object detection algorithm of the present invention;
Part numbers in the drawings:
1. outer sealing plate; 2. controller; 3. unit plate; 31. LED light bar; 311. main light source lamp bead; 312. auxiliary light source lamp bead; 314. edge pressing groove; 32. LED light bar wire; 33. mounting clamping groove; 331. screw hole; 34. light bar clamping groove; 35. inner sealing plate; 351. threading hole; 36. pressurizing back plate; 37. light guide plate; 372. waterproof plate; 373. light-transmitting film; 374. light-shielding film; 375. reflective film; 4. aluminum plate; 5. angle aluminum; 6. light guide plate; 7. traffic indication area; 71. red; 72. yellow; 73. green; 8. group indication area.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is apparent that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the protection scope of the present invention as defined by the claims.
Example 1
An intelligent traffic indication lighting device, as shown in figs. 1 to 9, comprises a controller 2, an outer sealing plate 1, angle aluminum 5, an aluminum plate 4 and a plurality of unit plates 3. The unit plates 3 are assembled and fixed to each other by bolts, and the angle aluminum 5 is assembled and fixed to the unit plates 3 by bolts. The outer sealing plate 1 is fixed outside the angle aluminum 5 and encloses the controller 2, the angle aluminum 5 and the unit plates 3; the aluminum plate 4 covers the junction between the angle aluminum 5 and the outer sealing plate 1.
The controller 2 controls the light emission of the unit boards 3, and a plurality of unit boards 3 constitute a light emitting surface;
the unit plate 3 includes: a plurality of first cell plates for road sign display, a plurality of second cell plates for road sign indication, a plurality of third cell plates for displaying traffic groups, and a fourth cell plate disposed between the first cell plates, the second cell plates, and the third cell plates;
The first unit board is internally provided with a monochromatic light source, the second unit board is internally provided with a polychromatic light source for displaying different traffic flow, the third unit board is provided with a polychromatic light source for displaying different traffic groups and different traffic group flow, and the fourth unit board is not provided with a light source and/or is provided with a monochromatic light source different from the light sources of the first unit board, the second unit board and the third unit board.
Each unit board comprises a light guide plate 37, a pressurizing back plate 36, an inner sealing plate 35, LED light bars 31 and mounting clamping grooves 33. The inner sealing plate 35 is arranged on the middle buckle between the two mounting clamping grooves 33, and the top openings of the two mounting clamping grooves 33 are covered by the light guide plate 37. Pressurizing back plates 36 are arranged between the edge pressing grooves 314 on both sides of the light guide plate 37 and the sealing plate 35. The LED light bars 31 are arranged in the light bar clamping grooves 34 of the light guide plate 37; they are arranged opposite each other in the light guide plate 37, their irradiation directions intersect, and light is emitted outward through the light guide plate 37. Assembly screw holes 331 are formed in the lower portion of the mounting clamping grooves 33, and bolts pass through the assembly screw holes 331 to fix the unit plates 3 to each other and the angle aluminum 5 to the unit plates 3. Threading holes 351 are formed in the bottom of the inner sealing plate 35, and the LED light bars are connected with the controller 2 through the threading holes 351.
The brightness of the LED light bar 31 of the light emitting device can be adjusted by the mobile phone, and the LED light bar 31 automatically adjusts the brightness according to the ambient light brightness or time.
The LED light bar 31 consists of main light source lamp beads 311 and auxiliary light source lamp beads 312, which adopt monochromatic and tricolor light sources. Specifically, the LED light bar of the first unit board is a monochromatic light source; the LED light bars of the second and third unit boards are tricolor light sources different from the monochromatic light source of the first unit board, preferably red, yellow and green, the light source of the second unit board being the same as that of the third; and the LED light bar of the fourth unit board is a monochromatic light source and/or no light source. The lighting mode of the third unit board's light source is the same as that of the first and second unit boards. For example: a children's group is displayed with a child pattern whose color follows the crowd density, red if crowded, yellow if moderately busy and green if sparse; vehicles, adults, the elderly and so on are displayed in the same way.
The light guide plate comprises a traffic indication area, a landmark indication area, a non-display area and a group indication area. Tricolor light sources displaying three colors are arranged in the middle traffic indication area, the landmark indication area adopts a monochromatic light source, and the non-display area has no light source. The middle traffic indication area and the landmark indication area comprise a surface layer formed by a light-transmitting film and a film waterproof layer, and a bottom layer formed by the LED light bars; the non-display area comprises a surface layer formed by a reflective film and a bottom layer formed by a light-shielding film. The controller receives the traffic congestion information transmitted by the intelligent video monitoring device and generates the corresponding control signals, and the arrows in the corresponding directions of the middle traffic indication area (including ahead, left and right) are displayed in three colors, red 71, green 73 and yellow 72, representing congested, smooth and slow respectively.
Example 2
The invention also provides an intelligent traffic indication monitoring device. As shown in fig. 10, the controller comprises a core processor (DSP), a clock unit, a programmable logic device, a liquid crystal display, a keyboard, a memory and a driving circuit for the LED light bars 31.
The core processor DSP controls and manages the operation of the whole controller; through the programmable logic device it reads the congestion signals of the intelligent video monitoring device and the serial numbers of the camera modules in each direction, and controls the corresponding LED light bars 31 of the middle traffic indication area to display the corresponding colors, indicating the congestion condition in each direction. The clock unit contains a crystal oscillator and a battery port and provides a synchronous clock for all units of the controller. The programmable logic device is a large-scale integrated programmable logic device with digital logic functions based on SRAM real-time programming technology, using the SRAM to form look-up tables; it receives the congestion signals of the intelligent video monitoring device and performs timing conversion and chip-select decoding. The timing conversion includes generating the SPI bus interface of the LED light bar 31 driving circuit and generating the memory read/write timing; the chip-select decoding mainly provides chip-select addresses for the memory, the clock unit and the LED light bar 31 driving circuit. The keyboard and the liquid crystal display serve as the man-machine interface; because of the controller's large amount of timing information, the liquid crystal display adopts a 240×128 dot-matrix liquid crystal screen, which the core processor DSP accesses through IO ports. The keyboard circuit adopts a 25-key touch keyboard, and the core processor DSP reads the key values by scanning. The memory is a nonvolatile memory storing the various parameters of the controller; its read/write timing is completed by the programmable logic device in cooperation with the core processor DSP. The LED light bar 31 driving circuit uses a serial shift chip with a latch function as the driving chip, and the programmable logic device communicates with the serial shift chip over the SPI bus; the driving circuit first isolates the strong and weak electricity of the programmable logic device's output signals through an optocoupler, then amplifies them through a transistor to drive the thyristor, thereby controlling the LED light bars 31.
This hardware structure avoids controller resets caused by mains-supply interference and ensures that the annunciator works more stably and reliably.
Example 3
The invention also provides an intelligent traffic indication system, which comprises the intelligent traffic indication lighting device and the intelligent traffic indication monitoring device; the monitoring device comprises an image acquisition module, an image processing module, an image storage module and a control module of the light-emitting device.
The image acquisition module acquires traffic information of the light-emitting device's area through an image shooting device, converts the acquired traffic information into a traffic video sequence and sends it to the image processing module. The image processing module judges the traffic state of the road section according to the traffic information to obtain road-section traffic information; it comprises a processor chip, a watchdog module, a power supply module, a memory module and a clock circuit module. The image storage module stores the original images and the data processed by the image processing module. The control module sends the road-section indication information to the controller of the light-emitting device.
The image acquisition part acquires traffic images of the current lane with the camera module, converts the acquired analog images into a traffic video sequence and sends it to the image processing part. The image processing part judges the congestion state of the road section by applying the road congestion judging method to the digital images; it consists of a processor chip, a watchdog module, a power supply module, a memory module and a clock circuit module. The image storage part stores the original images and the processed data results, ensuring data safety and convenient data viewing. According to the congestion state obtained by the image processing part, the control part of the light-emitting device generates the corresponding congestion signal together with the serial number of the camera module that produced it, and sends them to the intelligent traffic indication lighting device.
Example 4
The invention also provides an intelligent traffic indication method, as shown in fig. 11 and fig. 12, comprising the following steps:
S1: acquiring real-time traffic information;
specifically, a real-time traffic video sequence is obtained;
S2: according to the traffic information, obtaining a vehicle image and a pedestrian image in the traffic information through a target detection algorithm; obtaining a road image through multi-frame fusion, and obtaining a spatial occupancy parameter from the ratio of the pixel points of the pedestrian image and the vehicle image to the pixel points of the road image;
S3: according to the spatial occupancy parameter and the target detection algorithm, combined with a virtual detection coil, counting the road traffic flow and pedestrian flow in unit time to obtain a traffic flow parameter;
S4: obtaining key feature points of moving vehicles and pedestrians according to the traffic flow parameters by combining vector motion with corner detection, and obtaining the global vector motion speed;
S5: establishing a clustering discrimination model from the global vector motion speed to obtain traffic state parameters (i.e., discriminating the traffic congestion condition based on the clustering discrimination model);
S6: obtaining a road-section indication signal, comprising the traffic groups and the traffic state of each traffic group, according to the traffic state parameters (i.e., outputting the judgment result: smooth, slow or congested).
Preferably, said step S2 comprises the following sub-steps:
S21: the target detection algorithm detects the vehicle images currently moving in the traffic video sequence, and lane detection is performed according to the moving-vehicle trajectory fusion method: the vehicle images in the multi-frame traffic video sequence are detected by the target detection algorithm to generate multi-frame binary images of moving vehicles; the multi-frame binary images are OR-ed to complete the trajectory fusion; and denoising, hole filling and opening/closing operations are performed to obtain a complete lane image;
S22: obtaining the spatial occupancy parameter by the formula c = (1/N) · Σ_{i=1}^{N} (A_vehicle_i / A_road_i), wherein c is the spatial occupancy parameter, N is the number of video frames in unit time T, A_vehicle_i is the number of pixels of the vehicle image or pedestrian image in the i-th frame, and A_road_i is the number of pixels of the lane image in the i-th frame.
Here N is 40 frames.
In an actual road monitoring system, because of the camera's shooting perspective, the far end of the road appears narrower and the near end wider. With this method the lane can be detected completely and the side lanes are more accurate, so the ratio of the vehicle width to the road width at the vehicle's actual position is described truly and accurately, and the resulting traffic parameter is stable.
Preferably, said step S3 comprises the following sub-steps:
S31: set a virtual detection coil perpendicular to the road in the road monitoring system, and count the vehicle images and/or pedestrian images passing through the virtual detection coil using the target detection algorithm;
specifically, a virtual detection coil is arranged in the road monitoring video system, perpendicular to the lane and close to the camera, and the vehicle images passing through the detection coil are counted by the target detection algorithm;
S32: data initialization: determine the unit time T and obtain the video frame number N = T × f, where f denotes the video frame rate; the vehicle and/or pedestrian count N_vehicle has initial value 0; the decision result J_vehicle_i on whether a vehicle and/or pedestrian is present in the i-th frame has initial value 0; i = 0;
S33: calculate the decision result J_vehicle_i for the vehicle and/or pedestrian in the current virtual detection coil in the i-th frame:
J_vehicle_i = 1 if A_refresh_i > A_threshold, and J_vehicle_i = 0 otherwise,
where A_refresh_i is the number of updated pixel points in the detection coil area of the i-th frame and A_threshold is the updated-pixel threshold;
S34: if J_vehicle_i = 0 for the i-th frame, do not count: N_vehicle = N_vehicle, and proceed to step S37;
S35: if J_vehicle_i = 1 for the i-th frame and J_vehicle_{i−1} = 0 for the (i−1)-th frame, count: N_vehicle = N_vehicle + 1, and proceed to step S37;
S36: if J_vehicle_i = 1 for the i-th frame and J_vehicle_{i−1} = 1 for the (i−1)-th frame, do not count: N_vehicle = N_vehicle, and proceed to step S37;
S37: if i > N, end the detection count, output the vehicle and/or pedestrian count N_vehicle, and proceed to step S38; otherwise set i = i + 1 and return to step S33.
preferably, said step S4 comprises the following sub-steps:
s41: the motion characteristic points are obtained specifically as follows:
s411: selecting pixel points at (x, y), and calculating x and y direction movement speeds at (x, y) as follows:
wherein u is i And u i-1 The x-direction movement speeds of the ith frame and the i-1 th frame are respectively v i And v i-1 The motion speeds in the y direction of the ith frame and the ith-1 frame are respectively I x I is the change rate of the gray scale of the image along the x direction y I is the change rate of the gray scale of the image along with the y direction t The change rate of the gray scale of the image along with time t is shown, and lambda is Lagrangian constant;
s412: if it isAnd i is less than or equal to N_iteration, i=i+1, returning to step S411 to continue iterating the current pixel point, wherein G_threshold is a difference threshold, and N_iteration is an iteration number threshold;
s413: if it isAnd i is less than or equal to N_item, selecting the current (x, y) as a motion characteristic pointEnding the iteration, i=0, returning to the step S411, selecting other pixel points for calculation, and judging whether the pixel points are motion feature points;
s414: if i > n_iteration, the current (x, y) is not the motion feature point, the iteration is ended, i=0, the step S411 is returned, other pixel points are selected for calculation, and whether the motion feature point is determined;
s415: repeating the steps S411 to S414 until all the motion feature points are acquired;
s42: detecting local extreme points by adopting angular point detection;
s421: processing pixel points in the image, calculating horizontal and vertical gradients, and calculating the product of the horizontal and vertical gradients;
s422: filtering the image by adopting a Gaussian filter and smoothing noise interference;
S423: calculating an interest value for each pixel point in the image;
s424: repeating the steps S421 to S423 until all the local extreme points are obtained;
s43: according to the motion feature points and the local extreme points, overlapping pixel points are obtained to form key feature points (x key ,y key );
S44: the global vector motion velocity is calculated and,
s441: the direction of motion of the principal vector is determined,
s442: according to the key feature points (x key ,y key ) And the main vector motion direction, and obtaining a feature point (x 'of the main vector motion direction' key ,y' key );
S443: the global vector motion speed e is calculated and,
wherein e is the global vector motion speed,and->The average value of vector motion speeds in the horizontal direction and the vertical direction respectively, N_key is the total number of feature points in the motion direction of the main vector, j is the number of feature points in the motion direction of the main vector, and u j (x' key ,y' key ) And v j (x' key ,y' key ) Is (x' key ,y' key ) At x and y direction movement speeds.
Preferably, said step S5 comprises the following sub-steps:
S51: from the spatial occupancy parameter c, the traffic flow parameter d and the global vector motion speed e, form the current traffic feature vector V_traffic_current = [c, d, e]^T and the historical traffic feature vector V_traffic_history = [c, d, e]^T;
S52: in the current traffic feature vector V_traffic_current, if c > 0.8, judge congestion and end the judgment; if c < 0.1, judge smooth and end the judgment; otherwise proceed to step S53;
S53: by clustering the historical traffic feature vectors V_traffic_history, obtain the discrimination centers V_traffic_smooth, V_traffic_normal and V_traffic_jam for the three traffic states smooth, slow and congested;
S54: calculate the Euclidean distances of V_traffic_current to V_traffic_smooth, V_traffic_normal and V_traffic_jam, denoted D_current_euler_smooth, D_current_euler_normal and D_current_euler_jam respectively;
S55: calculate the Euclidean distances of V_traffic_history to V_traffic_smooth, V_traffic_normal and V_traffic_jam, denoted D_history_euler_smooth, D_history_euler_normal and D_history_euler_jam respectively;
S56: if D_current_euler_smooth < D_history_euler_smooth, judge smooth and end the judgment; otherwise proceed to step S57;
S57: if D_current_euler_normal < D_history_euler_normal, judge slow and end the judgment; otherwise proceed to step S58;
S58: if D_current_euler_jam < D_history_euler_jam, judge congestion and end the judgment; otherwise return to step S51 to reacquire the current traffic feature vector V_traffic_current.
Preferably, said step S2 comprises the following sub-steps:
S21: converting the vehicle image and the pedestrian image into multi-frame binary images through the target detection algorithm, fusing the binary images with the motion trajectories, and performing denoising, hole filling and opening/closing operations to obtain a complete road image;
wherein the target detection algorithm comprises the following steps:
211, background modeling from the 1 st frame image of the video sequence,
creating a background sample set P { x } containing M gray values for each pixel point of the image:
P{x}={x 1 ,x 2 ,x 3 ...x M }
wherein x is i As the background gray value of the ith pixel point, any one background gray value x in the background sample set P { x } corresponding to each pixel point i The gray values of the current pixel point and the gray values of 8 pixel points in the neighborhood are randomly generated, the random generation process is circularly carried out for M times, and the initialization process of the background sample set corresponding to the current pixel point is completed;
s212: the detection method has the advantages of good detection prospect,
according to the image of the ith frame (i > 1) of the video sequence, measuring the similarity between the current pixel point and the corresponding background sample set P { x }, and defining the similarity as x i Sphere space S with circle center and radius R R (x) Sphere space S R (x) Number of intersecting samples C with background sample set P { x } # :
C # =S R {x}∩P{x};
S213: presetting an intersection threshold C #min ,C # >C #min When the current pixel point is judged to be a background point, otherwise, the current pixel point is judged to be a foreground point;
s214: the optimal segmentation threshold of the ith frame image is calculated, specifically:
s2141: assuming that the gray level of the current video image is L, the corresponding gray range is [0, L-1 ]]The number of pixel points of the whole video frame is K, and the number of pixel points with gray level i is K i Then
Thereby obtaining the probability P that the gray level of a pixel point is i i ,
Foreground region probability omega 0 The method comprises the following steps:the gray average value of the foreground area is mu 0 :
Background region probability ω 1 The method comprises the following steps:the gray average value of the foreground area is mu 1 :
Wherein L is 0 For the segmentation threshold of foreground and background, the gray level of foreground region is [0, L 0 ]The gray level of the background area is [ L ] 0 +1,L-1],ω 0 +ω 1 =1;
S2143: calculating the variance between foreground region and background region as sigma 2 :
σ 2 =ω 0 ω 1 (μ 0 -μ 1 ) 2 ,
Calculated inter-class variance sigma 2 The larger the value of (c) is, the larger the difference between the two areas is, and the better the foreground and the background can be distinguished, the maximum value is only needed to be obtained for achieving the optimal segmentation effect, and the corresponding gray value is the optimal threshold value.
S2144: determining an optimal segmentation thresholdL 0 At [0, L-1 ]]Traversing, when sigma 2 At the maximum value, L at this time 0 For the best segmentation threshold->
S215: the secondary discrimination is carried out,
randomly selecting K from pixel points of the current image random Calculating K random Gray scale average value of each pixel point
s216: performing OR operation on the background pixel points determined in the step 3 and the step 5 to obtain an accurate foreground target image;
s217: step 6, binarizing the foreground target image obtained in the step;
s218: the binarized foreground target image is subjected to cavity filling to obtain a foreground image, and the method comprises the following specific steps:
S2181: establishing an integer marking matrix D corresponding to all pixel points in a foreground target image, initializing all elements to 0, and establishing a linear sequence G which is all 0 and is used for storing seed points and points in a communication domain;
s2182: scanning the pixels of the binarized whole frame image line by line, searching the first pixel with the gray value of 255 appearing in the whole frame image, and taking the first pixel as the initial pixel S of the moving target area to be processed;
s2183: the initial pixel point S obtained by previous scanning is used as a growing seed, region growth is carried out to complete the searching process of a connected region, wherein the initial pixel point S cannot be the edge of a detection target, otherwise, the initial pixel point S is replaced by a pixel point which is not at the edge in eight adjacent regions, the initial pixel point S is stored in a linear sequence G, and the value of the corresponding position of the initial pixel point S in an integer marking matrix D is reset to be 1;
s2184: the value of each pixel point of the linear sequence G is comprehensively scanned, if data with the value of 0 exists in eight adjacent areas of the pixel points of the linear sequence G, the corresponding position in the integer marking matrix D is modified to be 2, and the peripheral outline of the current area is determined;
S2185: searching, on the peripheral contour of the target area, for the j-th eight-neighborhood pixel point S_j with mark value 2 corresponding to the pixel point S; updating the linear sequence G with the pixel point S_j and clearing the other values; performing region growing with the pixel point S_j as the seed, where the growth rule is: take the pixel point S_j out of the linear sequence G, scan the corresponding four-neighborhood pixel points S_i (i = 1, 2, 3, 4), and search the gray values L_8 of the corresponding eight-neighborhood pixel points, where L_8 represents the value at the position corresponding to pixel point S_i in the integer marking matrix D.
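For illustration only, the hole filling of S218 can be approximated by a standard border flood fill: background pixels reachable from the image border stay background, and every enclosed background pixel becomes foreground. This is an equivalent reformulation, not the exact seed-queue bookkeeping of S2181–S2185:

```python
from collections import deque
import numpy as np

def fill_holes(binary: np.ndarray) -> np.ndarray:
    """Fill interior holes of a binarized foreground image (cf. S218)."""
    h, w = binary.shape
    outside = np.zeros((h, w), dtype=bool)
    queue = deque()
    # Seed the growth from every background pixel on the image border.
    for y in range(h):
        for x in range(w):
            if (y in (0, h - 1) or x in (0, w - 1)) and binary[y, x] == 0:
                outside[y, x] = True
                queue.append((y, x))
    # Region growing over the 4-neighborhood, as in the growth rule of S2185.
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] == 0 \
                    and not outside[ny, nx]:
                outside[ny, nx] = True
                queue.append((ny, nx))
    filled = binary.copy()
    filled[(binary == 0) & ~outside] = 255   # enclosed holes become foreground
    return filled
```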
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (7)
1. An intelligent traffic control method is characterized by comprising the following steps:
S1: acquiring real-time traffic information;
S2: obtaining, according to the traffic information, a vehicle image and a pedestrian image in the traffic information through a target detection algorithm; obtaining a road image through multi-frame fusion, and obtaining a space duty ratio parameter from the ratio of the pixel points of the pedestrian image and the vehicle image to the pixel points of the road image;
S3: according to the space duty ratio parameter and the target detection algorithm, in combination with a virtual detection coil, counting the road traffic flow and pedestrian flow in unit time to obtain a traffic flow parameter;
S4: obtaining key feature points of moving vehicles and pedestrians from the traffic flow parameters by combining vector motion and corner detection, and obtaining a global vector motion speed;
S5: establishing a clustering discrimination model from the global vector motion speed to obtain traffic state parameters, comprising the following steps:
S51: forming, from the space duty ratio parameter c, the traffic flow parameter d and the global vector motion speed e, the current traffic feature vector V_traffic_current = [c, d, e]^T and the historical traffic feature vector V_traffic_history = [c, d, e]^T;
S52: in the current traffic feature vector V_traffic_current, if c > 0.8, the traffic is judged to be congested and the judgment ends; if c < 0.1, the traffic is judged to be smooth and the judgment ends; otherwise, go to step S53;
S53: clustering the historical traffic feature vectors V_traffic_history to obtain the discrimination centers V_traffic_smooth, V_traffic_normal and V_traffic_jam for the three traffic states of smooth, slow and congested;
S54: calculating the Euclidean distances between V_traffic_current and V_traffic_smooth, V_traffic_normal and V_traffic_jam, denoted D_current_euler_smooth, D_current_euler_normal and D_current_euler_jam respectively;
S55: calculating the Euclidean distances between V_traffic_history and V_traffic_smooth, V_traffic_normal and V_traffic_jam, denoted D_history_euler_smooth, D_history_euler_normal and D_history_euler_jam respectively;
S56: if D_current_euler_smooth < D_history_euler_smooth, the traffic is judged to be smooth and the judgment ends; otherwise, go to step S57;
S57: if D_current_euler_normal < D_history_euler_normal, the traffic is judged to be slow and the judgment ends; otherwise, go to step S58;
S58: if D_current_euler_jam < D_history_euler_jam, the traffic is judged to be congested and the judgment ends; otherwise, return to step S51 to acquire the current traffic feature vector V_traffic_current again;
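For illustration only, a minimal sketch of the S52–S58 decision cascade; the claim does not specify the clustering algorithm, so the discrimination centers are simply passed in as inputs, and all names are assumptions of this sketch:

```python
import numpy as np

def classify_traffic(v_current, v_history, centers):
    """S52-S58 cascade. v_current and v_history are [c, d, e] feature
    vectors; centers maps 'smooth'/'normal'/'jam' to the discrimination
    centers obtained by clustering historical data (S53)."""
    c = v_current[0]                         # space duty ratio parameter
    if c > 0.8:                              # S52: near-saturated occupancy
        return "jam"
    if c < 0.1:
        return "smooth"
    for state in ("smooth", "normal", "jam"):    # S56 -> S57 -> S58
        d_cur = np.linalg.norm(np.asarray(v_current) - centers[state])
        d_his = np.linalg.norm(np.asarray(v_history) - centers[state])
        if d_cur < d_his:
            return state
    return None       # fall through: re-acquire V_traffic_current (S58)
```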
S6: obtaining a road section indication signal comprising traffic groups and traffic states of the traffic groups according to the traffic state parameters;
wherein the target detection algorithm is used for detecting the currently moving vehicle image or pedestrian image in the traffic video sequence; the space duty ratio parameter is the ratio of the width of a vehicle or pedestrian to the width of the road where it is located; a virtual detection coil is arranged in the road monitoring video system, perpendicular to the lane and close to the camera, and the vehicle images passing through the detection coil are counted by the target detection algorithm; the global vector motion speed is the average motion speed, in the horizontal and vertical directions, of the target represented by the target vector; the clustering discrimination model outputs the current traffic state, and the traffic state comprises at least one of: congested, smooth or slow;
further comprising: an outer sealing plate, and a controller and a plurality of unit plates arranged in the groove of the outer sealing plate, wherein the controller controls the unit plates to emit light and the unit plates form a light-emitting surface;
the unit plates include: a plurality of first unit plates for road sign display, a plurality of second unit plates for road sign indication, a plurality of third unit plates for displaying traffic groups, and fourth unit plates disposed between the first unit plates, the second unit plates and the third unit plates;
the first unit plates are internally provided with a monochromatic light source, the second unit plates are internally provided with a polychromatic light source for displaying different traffic flows, the third unit plates are provided with a polychromatic light source for displaying different traffic groups and their different traffic states, and the fourth unit plates are provided with no light source and/or with a monochromatic light source different from the light sources of the first, second and third unit plates;
the controller includes: the device comprises a processor, a clock unit, an image shooting device, a programmable logic device, a memory and a driving circuit of a light source;
the image shooting device is used for acquiring traffic information of the light-emitting device area;
The clock unit is internally provided with a crystal oscillator and a battery port and provides a synchronous clock for the controller;
the programmable logic device is used for acquiring traffic information shot by the image shooting device and sending the traffic information to the processor;
the processor generates a corresponding control signal according to the traffic information and sends the control signal to the light-emitting device;
the driving circuit is used for controlling the light emission of the light source.
2. The intelligent traffic control method according to claim 1, wherein the step S2 includes the sub-steps of:
S21: converting the vehicle image and the pedestrian image into multi-frame binary images through the target detection algorithm, fusing the binary images with the motion trail, and carrying out denoising, filling and opening/closing operations to obtain a complete road image;
S22: obtaining the space duty ratio parameter by the formula c = (1/N) · Σ_{i=1..N} (A_vehicle_i / A_road_i), where c is the space duty ratio parameter, N is the number of video frames in unit time T, A_vehicle is the number of pixels of the vehicle image or pedestrian image, A_road is the number of pixels of the lane image, and i denotes the i-th frame.
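For illustration only, the per-unit-time average of claim 2 in a few lines of NumPy; the inputs are hypothetical per-frame pixel counts:

```python
import numpy as np

def space_duty_ratio(a_vehicle: np.ndarray, a_road: np.ndarray) -> float:
    """c = (1/N) * sum_i (A_vehicle_i / A_road_i) over the N frames
    captured in the unit time T (claim 2, S22)."""
    return float(np.mean(a_vehicle / a_road))

# e.g. space_duty_ratio(np.array([1200, 1500]), np.array([9000, 9000]))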
3. The intelligent traffic control method according to claim 2, wherein the target detection algorithm in S21 includes:
S211: carrying out background modeling according to the 1st frame image of the video sequence,
creating a background sample set P{x} containing M gray values for each pixel point of the image:
P{x} = {x_1, x_2, x_3, ..., x_M},
where x_i is the i-th background gray value of the pixel point; each background gray value x_i in the background sample set P{x} corresponding to a pixel point is randomly generated from the gray value of the current pixel point and the gray values of the 8 pixel points in its neighborhood; the random generation process is repeated M times to complete the initialization of the background sample set corresponding to the current pixel point;
S212: foreground detection,
measuring, for the i-th frame (i > 1) of the video sequence, the similarity between the current pixel point and the corresponding background sample set P{x}: define a sphere space S_R(x) centered at x_i with radius R, and let C_# be the number of samples in the intersection of the sphere space S_R(x) and the background sample set P{x}:
C_# = #( S_R(x) ∩ P{x} );
S213: presetting an intersection threshold C_#min; when C_# > C_#min, the current pixel point is judged to be a background point; otherwise, it is judged to be a foreground point;
S214: calculating the optimal segmentation threshold of the i-th frame image;
S215: secondary discrimination: randomly select K_random pixel points from the current image and calculate the gray-scale average of these K_random pixel points;
S216: performing an OR operation on the background pixel points determined in step S213 and step S215 to obtain an accurate foreground target image;
S217: binarizing the foreground target image obtained in step S216;
S218: filling the holes of the binarized foreground target image to obtain the foreground image.
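For illustration only, a compact sketch of the sample-set background model of S211–S213. Real implementations draw a different random neighbor per pixel; this sketch shifts the whole frame per sample for brevity, and every name and default value is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_background_samples(first_frame: np.ndarray, m: int = 20) -> np.ndarray:
    """S211: for each pixel, draw M background gray values at random from
    the pixel itself and its 8-neighborhood (borders clipped here)."""
    h, w = first_frame.shape
    samples = np.empty((m, h, w), dtype=first_frame.dtype)
    for k in range(m):
        dy, dx = rng.integers(-1, 2, size=2)         # random neighbor offset
        ys = np.clip(np.arange(h) + dy, 0, h - 1)
        xs = np.clip(np.arange(w) + dx, 0, w - 1)
        samples[k] = first_frame[np.ix_(ys, xs)]
    return samples

def foreground_mask(frame: np.ndarray, samples: np.ndarray,
                    radius: int = 20, c_min: int = 2) -> np.ndarray:
    """S212-S213: C_# counts samples within radius R of the current gray
    value; background when C_# > C_#min, foreground otherwise."""
    diff = np.abs(samples.astype(np.int32) - frame.astype(np.int32))
    c_sharp = (diff < radius).sum(axis=0)            # C_# per pixel
    return c_sharp <= c_min                          # True marks foreground
```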
4. The intelligent traffic control method according to claim 3, wherein the S214 includes:
S2141: assume the gray level of the current video image is L and the corresponding gray range is [0, L−1]; the number of pixel points in the whole video frame is K and the number of pixel points with gray level i is K_i; the probability that a pixel point has gray level i is then P_i = K_i / K;
S2142: the foreground region probability ω_0 is ω_0 = Σ_{i=0..L_0} P_i, and the gray mean of the foreground region is μ_0 = (1/ω_0) · Σ_{i=0..L_0} i·P_i;
the background region probability ω_1 is ω_1 = Σ_{i=L_0+1..L−1} P_i, and the gray mean of the background region is μ_1 = (1/ω_1) · Σ_{i=L_0+1..L−1} i·P_i;
where L_0 is the segmentation threshold between foreground and background, the gray range of the foreground region is [0, L_0], the gray range of the background region is [L_0+1, L−1], and ω_0 + ω_1 = 1;
S2143: calculating the inter-class variance σ² between the foreground region and the background region:
σ² = ω_0 · ω_1 · (μ_0 − μ_1)²,
The larger the value of the inter-class variance σ², the larger the difference between the two regions and the better the foreground can be distinguished from the background; to achieve the optimal segmentation effect it suffices to maximize σ², and the corresponding gray value is the optimal threshold;
5. The intelligent traffic control method according to claim 3, wherein the S218 includes:
S2181: establishing an integer marking matrix D corresponding to all pixel points in the foreground target image, initializing all of its elements to 0, and establishing an all-zero linear sequence G used for storing seed points and points in the connected domain;
S2182: scanning the pixel points of the binarized whole-frame image line by line, searching for the first pixel point with gray value 255, and taking it as the initial pixel point S of the moving-target area to be processed;
S2183: using the initial pixel point S obtained by the previous scan as the growing seed, performing region growing to complete the search of a connected region; the initial pixel point S cannot lie on the edge of the detection target, otherwise it is replaced by a non-edge pixel point in its eight-neighborhood; S is stored in the linear sequence G, and the value at its corresponding position in the integer marking matrix D is set to 1;
S2184: scanning the value of every pixel point of the linear sequence G; if a value of 0 exists in the eight-neighborhood of a pixel point of the linear sequence G, the corresponding position in the integer marking matrix D is modified to 2, which determines the peripheral contour of the current area;
S2185: searching, on the peripheral contour of the target area, for the j-th eight-neighborhood pixel point S_j with mark value 2 corresponding to the pixel point S; updating the linear sequence G with the pixel point S_j and clearing the other values; performing region growing with the pixel point S_j as the seed, where the growth rule is: take the pixel point S_j out of the linear sequence G, scan the corresponding four-neighborhood pixel points S_i (i = 1, 2, 3, 4), and search the gray values L_8 of the corresponding eight-neighborhood pixel points, where L_8 represents the value at the position corresponding to pixel point S_i in the integer marking matrix D.
6. The intelligent traffic control method according to claim 1, wherein the S3 includes:
S31: setting a virtual detection coil perpendicular to the road in the road monitoring system, and counting the vehicle images and/or pedestrian images passing through the virtual detection coil using the target detection algorithm;
S32: data initialization: determine the unit time T and obtain the number of video frames N = T × f, where f denotes the video frame rate; the number of vehicles and/or pedestrians N_vehicle has initial value 0; the decision result J_vehicle_i on whether a vehicle and/or pedestrian is present in the i-th frame has initial value 0; i = 0;
S33: calculating the decision result J_vehicle_i for vehicles and/or pedestrians in the current virtual detection coil in the i-th frame: J_vehicle_i = 1 if A_refresh_i > A_threshold, and J_vehicle_i = 0 otherwise, where A_refresh_i is the number of updated pixel points in the detection coil area of the i-th frame and A_threshold is the threshold on the number of updated pixel points;
S34: if J_vehicle_i = 0 for the i-th frame, do not count: N_vehicle = N_vehicle, and go to step S37;
S35: if J_vehicle_i = 1 for the i-th frame and J_vehicle_{i−1} = 0 for the (i−1)-th frame, count: N_vehicle = N_vehicle + 1, and go to step S37;
S36: if J_vehicle_i = 1 for the i-th frame and J_vehicle_{i−1} = 1 for the (i−1)-th frame, the same target is still inside the coil and is not counted again: N_vehicle = N_vehicle, and go to step S37;
S37: if i > N, end the detection count and output the number of vehicles and/or pedestrians N_vehicle; otherwise, i = i + 1 and return to step S33.
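For illustration only, the counting loop of S33–S37 reduces to counting 0→1 transitions of the per-frame occupancy decision; a minimal sketch with assumed names:

```python
def count_passes(j_vehicle: list[int]) -> int:
    """S33-S37: one count per 0 -> 1 transition of J_vehicle_i; a run of
    consecutive 1s is a single vehicle or pedestrian inside the coil."""
    n_vehicle, prev = 0, 0
    for j in j_vehicle:              # j = 1 when A_refresh_i > A_threshold
        if j == 1 and prev == 0:     # S35: new target entering the coil
            n_vehicle += 1
        prev = j                     # S34/S36: otherwise count unchanged
    return n_vehicle

# e.g. count_passes([0, 1, 1, 0, 1]) == 2
```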
7. The intelligent traffic control method according to any one of claims 1 to 6, wherein S4 includes:
S41: obtaining the motion feature points, specifically:
S411: selecting the pixel point at (x, y) and iteratively calculating the x- and y-direction motion speeds at (x, y):
u_i = u_{i−1} − I_x · (I_x·u_{i−1} + I_y·v_{i−1} + I_t) / (λ + I_x² + I_y²),
v_i = v_{i−1} − I_y · (I_x·u_{i−1} + I_y·v_{i−1} + I_t) / (λ + I_x² + I_y²),
where u_i and u_{i−1} are the x-direction motion speeds of the i-th and (i−1)-th iterations respectively, v_i and v_{i−1} are the y-direction motion speeds of the i-th and (i−1)-th iterations respectively, I_x is the change rate of the image gray level along the x direction, I_y is the change rate of the image gray level along the y direction, I_t is the change rate of the image gray level over time t, and λ is the Lagrange constant;
S412: if |u_i − u_{i−1}| + |v_i − v_{i−1}| > G_threshold and i ≤ N_iteration, then i = i + 1 and return to step S411 to continue iterating the current pixel point, where G_threshold is the difference threshold and N_iteration is the iteration-count threshold;
S413: if |u_i − u_{i−1}| + |v_i − v_{i−1}| ≤ G_threshold and i ≤ N_iteration, select the current (x, y) as a motion feature point and end the iteration; then return to step S411 and select another pixel point to compute and judge whether it is a motion feature point;
S414: if i > N_iteration, the current (x, y) is not a motion feature point and the iteration ends; set i = 0, return to step S411, and select another pixel point to compute and judge whether it is a motion feature point;
S415: repeating steps S411 to S414 until all motion feature points are acquired;
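For illustration only, a single-pixel sketch of the S411–S414 iteration, under the assumption that the update takes the standard Horn–Schunck form consistent with the variables defined above; all names and defaults are assumptions of this sketch:

```python
def motion_feature_point(ix: float, iy: float, it: float, lam: float = 0.1,
                         g_threshold: float = 1e-3,
                         n_iteration: int = 100):
    """Iterate the speed update at one pixel until the change falls below
    G_threshold (motion feature point, S413) or the budget N_iteration is
    exhausted (not a feature point, S414). ix, iy, it are I_x, I_y, I_t."""
    u = v = 0.0
    for _ in range(n_iteration):
        common = (ix * u + iy * v + it) / (lam + ix * ix + iy * iy)
        u_new, v_new = u - ix * common, v - iy * common   # S411 update
        if abs(u_new - u) + abs(v_new - v) <= g_threshold:
            return True, (u_new, v_new)                   # S413
        u, v = u_new, v_new                               # S412: iterate
    return False, (u, v)                                  # S414
```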
S42: detecting local extreme points by corner detection:
S421: processing the pixel points in the image, calculating the horizontal and vertical gradients, and calculating the products of the horizontal and vertical gradients;
S422: filtering the image with a Gaussian filter to smooth noise interference;
S423: calculating an interest value for each pixel point in the image;
S424: repeating steps S421 to S423 until all local extreme points are obtained;
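For illustration only, S421–S423 match the shape of a Harris-style corner response; the claim does not name the interest-value formula, so the Harris response R = det(M) − k·trace(M)² is an assumption of this sketch:

```python
import numpy as np

def corner_interest(gray: np.ndarray, k: float = 0.04,
                    sigma: float = 1.0) -> np.ndarray:
    """S421-S423: gradients and their products (S421), Gaussian smoothing
    (S422), then a per-pixel interest value (S423)."""
    gy, gx = np.gradient(gray.astype(np.float64))    # S421: gradients
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy        # gradient products
    r = int(3 * sigma)                               # S422: Gaussian kernel
    t = np.arange(-r, r + 1)
    g = np.exp(-(t ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    def blur(a: np.ndarray) -> np.ndarray:           # separable filtering
        a = np.apply_along_axis(lambda row: np.convolve(row, g, "same"), 1, a)
        return np.apply_along_axis(lambda col: np.convolve(col, g, "same"), 0, a)
    sxx, syy, sxy = blur(ixx), blur(iyy), blur(ixy)
    # S423: interest value (assumed Harris form).
    return (sxx * syy - sxy ** 2) - k * (sxx + syy) ** 2
```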
S43: according to the motion feature points and the local extreme points, taking the overlapping pixel points to form the key feature points (x_key, y_key);
S44: calculating the global vector motion speed:
S441: determining the principal vector motion direction;
S442: according to the key feature points (x_key, y_key) and the principal vector motion direction, obtaining the feature points (x'_key, y'_key) in the principal vector motion direction;
S443: calculating the global vector motion speed e:
e = √(ū² + v̄²), with ū = (1/N_key) · Σ_{j=1..N_key} u_j(x'_key, y'_key) and v̄ = (1/N_key) · Σ_{j=1..N_key} v_j(x'_key, y'_key),
where e is the global vector motion speed, ū and v̄ are the average vector motion speeds in the horizontal and vertical directions respectively, N_key is the total number of feature points in the principal vector motion direction, j indexes the feature points in the principal vector motion direction, and u_j(x'_key, y'_key) and v_j(x'_key, y'_key) are the x- and y-direction motion speeds at (x'_key, y'_key).
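For illustration only, the S443 aggregation under the assumed combination e = √(ū² + v̄²); inputs are hypothetical per-feature-point speed arrays:

```python
import numpy as np

def global_vector_speed(u_key: np.ndarray, v_key: np.ndarray) -> float:
    """S443: mean horizontal and vertical speeds over the N_key feature
    points in the principal vector motion direction, combined into e."""
    u_bar = u_key.mean()             # average horizontal speed
    v_bar = v_key.mean()             # average vertical speed
    return float(np.hypot(u_bar, v_bar))
```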
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110450307.1A CN113140110B (en) | 2020-04-30 | 2020-04-30 | Intelligent traffic control method, lighting device and monitoring device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010367593.0A CN111524376B (en) | 2020-04-30 | 2020-04-30 | Intelligent traffic indication light-emitting device, monitoring device, system and method |
CN202110450307.1A CN113140110B (en) | 2020-04-30 | 2020-04-30 | Intelligent traffic control method, lighting device and monitoring device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010367593.0A Division CN111524376B (en) | 2020-04-30 | 2020-04-30 | Intelligent traffic indication light-emitting device, monitoring device, system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113140110A CN113140110A (en) | 2021-07-20 |
CN113140110B true CN113140110B (en) | 2023-06-09 |
Family
ID=71906756
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010367593.0A Active CN111524376B (en) | 2020-04-30 | 2020-04-30 | Intelligent traffic indication light-emitting device, monitoring device, system and method |
CN202110450307.1A Active CN113140110B (en) | 2020-04-30 | 2020-04-30 | Intelligent traffic control method, lighting device and monitoring device |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010367593.0A Active CN111524376B (en) | 2020-04-30 | 2020-04-30 | Intelligent traffic indication light-emitting device, monitoring device, system and method |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN111524376B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113017687A (en) * | 2021-02-19 | 2021-06-25 | 上海长征医院 | Automatic identification method for B-ultrasonic image of abdominal dropsy |
CN114143940B (en) * | 2022-01-30 | 2022-09-16 | 深圳市奥新科技有限公司 | Tunnel illumination control method, device, equipment and storage medium |
CN114820931B (en) * | 2022-04-24 | 2023-03-24 | 江苏鼎集智能科技股份有限公司 | Virtual reality-based CIM (common information model) visual real-time imaging method for smart city |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004030484A (en) * | 2002-06-28 | 2004-01-29 | Mitsubishi Heavy Ind Ltd | Traffic information providing system |
CN102136194A (en) * | 2011-03-22 | 2011-07-27 | 浙江工业大学 | Road traffic condition detection device based on panorama computer vision |
CN108417057A (en) * | 2018-05-15 | 2018-08-17 | 哈尔滨工业大学 | A kind of intelligent signal lamp timing system |
CN108961756A (en) * | 2018-07-26 | 2018-12-07 | 深圳市赛亿科技开发有限公司 | A kind of automatic real-time traffic vehicle flowrate, people flow rate statistical method and system |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07326210A (en) * | 1994-05-30 | 1995-12-12 | Matsushita Electric Works Ltd | Tunnel lamp control device |
CN101702263B (en) * | 2009-11-17 | 2011-04-06 | 重庆大学 | Pedestrian crosswalk signal lamp green wave self-adaption control system and method |
CN202473149U (en) * | 2012-03-09 | 2012-10-03 | 云南路翔市政工程有限公司 | Light-guide type active light emitting signboard |
CN103150915A (en) * | 2013-02-05 | 2013-06-12 | 林祥兴 | Integral traffic information display device |
CN103646241B (en) * | 2013-12-30 | 2017-01-18 | 中国科学院自动化研究所 | Real-time taxi identification method based on embedded system |
CN203673792U (en) * | 2014-01-03 | 2014-06-25 | 云南路翔市政工程有限公司 | Assembly-type LED variable information board |
CN105263026B (en) * | 2015-10-12 | 2018-04-17 | 西安电子科技大学 | Global vector acquisition methods based on probability statistics and image gradient information |
CN105809984A (en) * | 2016-06-02 | 2016-07-27 | 西安费斯达自动化工程有限公司 | Traffic signal control method based on image detection and optimal velocity model |
CN105809992A (en) * | 2016-06-02 | 2016-07-27 | 西安费斯达自动化工程有限公司 | Traffic signal control method based on image detection and full velocity difference model |
CN106710261A (en) * | 2017-03-07 | 2017-05-24 | 翁小翠 | Intelligent traffic indicating device |
CN107886739A (en) * | 2017-10-16 | 2018-04-06 | 王宁 | Traffic flow of the people automatic collecting analysis system |
CN108320540A (en) * | 2018-01-30 | 2018-07-24 | 江苏瑞沃建设集团有限公司 | A kind of intelligent city's traffic lights of annular |
CN108877234B (en) * | 2018-07-24 | 2021-03-26 | 河北德冠隆电子科技有限公司 | Four-dimensional real-scene traffic simulation vehicle illegal lane occupation tracking detection system and method |
CN108961782A (en) * | 2018-08-21 | 2018-12-07 | 北京深瞐科技有限公司 | Traffic intersection control method and device |
CN109359563B (en) * | 2018-09-29 | 2020-12-29 | 江南大学 | Real-time lane occupation phenomenon detection method based on digital image processing |
CN209260596U (en) * | 2018-11-21 | 2019-08-16 | 方显峰 | A kind of long-persistence luminous raised terrestrial reference |
CN109493616A (en) * | 2018-12-06 | 2019-03-19 | 江苏华体照明科技有限公司 | Intelligent traffic lamp |
Also Published As
Publication number | Publication date |
---|---|
CN113140110A (en) | 2021-07-20 |
CN111524376B (en) | 2021-05-14 |
CN111524376A (en) | 2020-08-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||