CN118155434B - Remote visual dynamic indication control method and device - Google Patents


Info

Publication number
CN118155434B
CN118155434B CN202410558345.2A
Authority
CN
China
Prior art keywords
vehicle
signal lamp
data
traffic
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410558345.2A
Other languages
Chinese (zh)
Other versions
CN118155434A (en)
Inventor
牛云玲
高立伟
刘军
邵全利
李静
纪飞
王振华
张志雁
荣庆宇
王盼
孙媛媛
马秋艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Leading Intelligent Transportation Technology Co ltd
Original Assignee
Shandong Leading Intelligent Transportation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Leading Intelligent Transportation Technology Co ltd filed Critical Shandong Leading Intelligent Transportation Technology Co ltd
Priority to CN202410558345.2A priority Critical patent/CN118155434B/en
Publication of CN118155434A publication Critical patent/CN118155434A/en
Application granted granted Critical
Publication of CN118155434B publication Critical patent/CN118155434B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 - Traffic data processing
    • G08G1/0133 - Traffic data processing for classifying traffic situation
    • G08G1/0137 - Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G08G1/0145 - Measuring and analyzing of parameters relative to traffic conditions for specific applications for active traffic flow control
    • G08G1/065 - Traffic control systems for road vehicles by counting the vehicles in a section of the road or in a parking area, i.e. comparing incoming count with outgoing count
    • G08G1/07 - Controlling traffic signals
    • G08G1/08 - Controlling traffic signals according to detected number or speed of vehicles

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a remote visual dynamic indication control method and a device, which relate to the technical field of data processing, and the method comprises the following steps: processing the working state data to obtain the running state of the current signal lamp; acquiring vehicle data and a vehicle target from a simulation environment; extracting feature vectors from the vehicle target using a feature extractor; according to the feature vector, calculating the similarity of the vehicle targets in adjacent time points in the simulation environment, and performing association matching to obtain vehicle tracking data; estimating traffic flow, driving modes and vehicle queuing distances of all intersections according to the vehicle tracking data and the running state of the current signal lamp; and dynamically adjusting the working parameters of the traffic signal lamp according to the queuing distance of the vehicle and the traffic flow parameters. The invention can estimate traffic flow and driving mode, thereby dynamically adjusting the working parameters of the traffic signal lamp and improving the fluency and safety of urban traffic.

Description

Remote visual dynamic indication control method and device
Technical Field
The invention relates to the technical field of data processing, in particular to a remote visual dynamic indication control method and device.
Background
With urban traffic becoming increasingly heavy and complex, the need for traffic management and control keeps growing. The traditional traffic signal lamp control system mainly relies on a preset fixed-time control mode and cannot be adjusted in real time according to changes in traffic flow, which greatly limits the fluency and efficiency of urban traffic.
To solve this problem, researchers have recently begun to explore intelligent traffic light control systems. However, although existing intelligent systems can collect and process traffic data in real time, their control effect is not ideal because they lack accurate vehicle tracking and traffic flow prediction capabilities, particularly during peak traffic periods, when the risk of vehicle congestion and traffic accidents increases significantly.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a remote visual dynamic indication control method and a device, which can estimate traffic flow and driving modes, thereby dynamically adjusting working parameters of traffic signals and improving fluency and safety of urban traffic.
In order to solve the technical problems, the technical scheme of the invention is as follows:
in a first aspect, a remote visual dynamic indication control method includes:
constructing a traffic signal lamp simulation environment;
Receiving working state data sent by a traffic signal lamp system;
processing the working state data to obtain the running state of the current signal lamp;
Acquiring vehicle data and a vehicle target from a simulation environment;
Extracting feature vectors from the vehicle target using a feature extractor;
According to the feature vector, calculating the similarity of the vehicle targets in adjacent time points in the simulation environment, and performing association matching to obtain vehicle tracking data;
Estimating traffic flow, driving modes and vehicle queuing distances of all intersections according to the vehicle tracking data and the running state of the current signal lamp;
and dynamically adjusting the working parameters of the traffic signal lamp according to the queuing distance of the vehicle and the traffic flow parameters.
Further, constructing a traffic signal lamp simulation environment, including:
Determining the requirements and targets of a simulation environment;
obtaining a topological graph of the traffic network according to the requirements and the targets of the simulation environment;
Initializing a particle swarm, wherein each particle represents a signal lamp timing scheme;
According to traffic flow, delay time and parking times, determining a fitness function for evaluating the signal lamp timing scheme, wherein the calculation formula of the fitness function is as follows:
F = w1·(1 − D/Dmax) + w2·(1 − S/Smax) + w3·(1 − Tc/Tsim) + w4·(Geff/Gtotal)
wherein F represents the value of the fitness function; D represents the total delay time of all vehicles during the simulation, the delay time being the difference between the actual time a vehicle waits at the signal lamp and the time required for an ideal unobstructed crossing; Dmax represents the maximum acceptable total delay time; S represents the total number of stops of all vehicles during the simulation; Smax represents the maximum acceptable total number of stops; Tc represents the total time vehicles are in a congested state in the traffic network, i.e. the sum of the times during which vehicles wait in line to pass a traffic light; Tsim represents the total simulation time; Geff represents the sum of the effective green time of all signal lamps during the simulation, i.e. the time during which a signal lamp is green and vehicles actually pass; Gtotal represents the sum of the green time of all signal lamps during the simulation; and w1, w2, w3 and w4 are the weight coefficients of the respective indexes, with w1 + w2 + w3 + w4 = 1;
continuously iterating, updating the speed and the position of the particles to obtain a final signal lamp timing scheme, stopping iterating when a termination condition is reached, and outputting the final signal lamp timing scheme;
and establishing a traffic signal lamp simulation environment according to the topology diagram of the traffic network and the final signal lamp timing scheme.
Further, the topological graph of the traffic network comprises intersections, road sections and connection relations among the road sections; the signal lamp timing scheme comprises signal period and green light time parameters of each intersection.
Further, the processing the working state data to obtain the current running state of the signal lamp includes:
Sorting the working state data according to the time stamp t_i, traversing the sorted working state data, and calculating, for any two adjacent data points, the difference of their time stamps, wherein the calculation formula of the difference of the time stamps of two adjacent data points is as follows:
Δt_i = α·(t_i − t_{i−1}) + (β/n)·Σ_{j=i−n}^{i−1} (t_j − t_{j−1}) + γ + (λ/m)·Σ_{k=i−m}^{i−1} c_k
wherein Δt_i represents the difference of the time stamps of the two adjacent data points; α represents the global weighting factor; β represents the local weighting factor; γ represents the time correction factor; n represents the average window size for computing the smoothing term of the time stamp difference; λ represents the weighting factor of the additional correction term; m represents the average window size used to calculate the additional correction term; c_k represents the additional correction term at time point k; i denotes the current data point; and j and k are indexes for traversing past data points in the smoothing term and the additional correction term, respectively;
If the difference of the time stamps of the two adjacent data points is smaller than a preset threshold, the two data points are considered too close, and either one of them is removed to obtain the processed working state data;
Analyzing the processed working state data according to a data protocol of the traffic signal lamp system, and extracting key information related to the running state of the signal lamp;
locating an identifier representing the current state of the signal lamp from the key information;
The identifier is mapped to a specific signal lamp state to obtain the current signal lamp operating state.
Further, obtaining vehicle data and vehicle targets from the simulation environment includes:
acquiring image data of a vehicle through a data capturing point in a simulation environment;
Constructing a background model through the previous N frames of images, calculating the average value of the position pixel values in the previous N frames of images for each pixel position, and taking the average value of the position pixel values as the value of the corresponding pixel in the background model;
for each newly captured image of each frame, traversing each pixel in the image, calculating the difference value between each pixel in the current frame and the corresponding pixel in the background model, and marking the pixel as a foreground pixel if the difference value is larger than a preset threshold value;
performing dilation on the foreground pixels with a 3×3 rectangular structuring element until the preset number of iterations is reached, so as to obtain the dilated foreground region;
and analyzing each independent connected region in the dilated foreground region to determine the centroid of each connected region, and taking the centroid as the position of the vehicle in the current frame.
Further, extracting feature vectors from the vehicle target using the feature extractor includes:
inputting a vehicle target image into the trained convolutional neural network model;
obtaining a characteristic diagram of a convolutional neural network model through forward propagation calculation;
The feature map is flattened to convert the multi-dimensional feature map into one-dimensional feature vectors.
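The flattening in the last step can be sketched without assuming any particular deep-learning framework; the nested-list layout and function name below are illustrative only, not the patent's implementation:

```python
def flatten_feature_map(feature_map):
    """Flatten a (channels x height x width) feature map, given as nested
    lists, into a single one-dimensional feature vector."""
    return [value
            for channel in feature_map
            for row in channel
            for value in row]

# A toy 2-channel 2x2 feature map.
fmap = [[[1.0, 2.0], [3.0, 4.0]],
        [[5.0, 6.0], [7.0, 8.0]]]
vec = flatten_feature_map(fmap)
```

In a real pipeline the same operation is performed by the framework's own reshape/flatten layer on the convolutional feature map.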
Further, the image data of the vehicle includes a current time stamp, a position of the vehicle in the simulation environment, and a speed of the vehicle.
In a second aspect, a remote visual dynamic indication control device includes:
The acquisition module is used for constructing a traffic signal lamp simulation environment; receiving working state data sent by a traffic signal lamp system; processing the working state data to obtain the running state of the current signal lamp; acquiring vehicle data and a vehicle target from a simulation environment; extracting feature vectors from the vehicle target using a feature extractor;
The processing module is used for calculating the similarity of the vehicle targets in adjacent time points in the simulation environment according to the feature vectors and carrying out association matching so as to obtain vehicle tracking data; estimating traffic flow, driving modes and vehicle queuing distances of all intersections according to the vehicle tracking data and the running state of the current signal lamp; and dynamically adjusting the working parameters of the traffic signal lamp according to the queuing distance of the vehicle and the traffic flow parameters.
In a third aspect, a computing device includes:
One or more processors;
and a storage means for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method.
In a fourth aspect, a computer readable storage medium has a program stored therein, which when executed by a processor, implements the method.
The scheme of the invention at least comprises the following beneficial effects:
By constructing the simulation environment, the traffic signal lamp control system can be tested and optimized under the condition that actual traffic is not affected, and experimental cost and risk are reduced. The working state data of the signal lamp is received in real time, so that the control system can be ensured to make decisions based on the latest information all the time, and the timeliness and the accuracy of traffic management are improved. By processing the working state data, the control system can accurately grasp the current running state of each signal lamp. The detailed vehicle data and the vehicle target information are acquired, so that the traffic condition can be analyzed more accurately, and the timing scheme of the signal lamp can be optimized. The feature extraction can help the system to more effectively identify and track the vehicle, continuous tracking of the vehicle can be realized through similarity calculation and association matching of the vehicle targets, and the prediction of traffic flow, driving mode and vehicle queuing distance is beneficial to timely finding out potential traffic jam points. The working parameters of the signal lamp are dynamically adjusted, so that traffic jam can be effectively relieved, traffic efficiency is improved, meanwhile, the occurrence of traffic accidents is reduced, and overall traffic safety is improved.
Drawings
Fig. 1 is a schematic flow chart of a remote visual dynamic indication control method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a remote visual dynamic indication control system according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in fig. 1, an embodiment of the present invention proposes a remote visual dynamic indication control method, which includes the following steps:
Step 11, constructing a traffic signal lamp simulation environment;
Step 12, receiving working state data sent by a traffic signal lamp system;
Step 13, processing the working state data to obtain the running state of the current signal lamp;
step 14, acquiring vehicle data and vehicle targets from the simulation environment;
Step 15, extracting feature vectors from the vehicle targets by using a feature extractor;
Step 16, calculating the similarity of the vehicle targets in adjacent time points in the simulation environment according to the feature vectors, and performing association matching to obtain vehicle tracking data;
Step 17, estimating traffic flow, driving mode and vehicle queuing distance of each intersection according to the vehicle tracking data and the running state of the current signal lamp;
And step 18, dynamically adjusting the working parameters of the traffic signal lamp according to the queuing distance of the vehicle and the traffic flow parameters.
In the embodiment of the invention, step 12, the working state data of the signal lamp is acquired in real time, so that the control system can make a decision based on the latest and accurate information, and the response speed and accuracy of the control system are improved. And 13, processing and analyzing the working state data of the signal lamp, so that the real-time running condition of the traffic signal lamp system can be known more accurately. And step 14, acquiring detailed vehicle data and target information, so as to be helpful for more comprehensively knowing traffic conditions, including vehicle flow, vehicle speed and the like. And step 15, feature extraction is helpful for simplifying data and highlighting key information, so that subsequent vehicle tracking and recognition are more accurate and efficient. In step 16, accurate tracking of the vehicle can be achieved by calculating the similarity and the associated matches. And step 17, potential traffic jam points can be found in advance by estimating traffic flow, running mode and vehicle queuing distance, so that important basis is provided for optimizing and adjusting traffic signals, and the traffic pressure can be relieved. And step 18, dynamically adjusting the working parameters of the signal lamp can respond to the change of traffic conditions in real time, effectively relieve traffic jam and improve road traffic efficiency.
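The association matching of steps 15 and 16 can be sketched as follows. This is a minimal illustration that assumes cosine similarity over the extracted feature vectors and a greedy one-to-one matching rule with a hypothetical threshold of 0.8; the patent does not fix a particular similarity measure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def associate(prev_targets, curr_targets, threshold=0.8):
    """Greedily match each current target to the most similar unused
    previous target whose similarity exceeds the threshold; the resulting
    correspondences form one step of the vehicle tracking data."""
    matches, used = {}, set()
    for cid, cvec in curr_targets.items():
        best_id, best_sim = None, threshold
        for pid, pvec in prev_targets.items():
            if pid in used:
                continue
            sim = cosine_similarity(cvec, pvec)
            if sim > best_sim:
                best_id, best_sim = pid, sim
        if best_id is not None:
            matches[cid] = best_id
            used.add(best_id)
    return matches
```

Chaining the per-frame matches over consecutive time points yields the track of each vehicle through the simulation environment.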
In a preferred embodiment of the present invention, the step 11 may include:
step 111, determining the requirements and targets of the simulation environment;
step 112, obtaining a topological graph of the traffic network according to the requirements and targets of the simulation environment;
Step 113, initializing a particle swarm, wherein each particle represents a signal lamp timing scheme;
Step 114, determining a fitness function for evaluating the signal lamp timing scheme according to the traffic flow, delay time and parking times, wherein the calculation formula of the fitness function is as follows:
F = w1·(1 − D/Dmax) + w2·(1 − S/Smax) + w3·(1 − Tc/Tsim) + w4·(Geff/Gtotal)
wherein F represents the value of the fitness function; D represents the total delay time of all vehicles during the simulation, the delay time being the difference between the actual time a vehicle waits at the signal lamp and the time required for an ideal unobstructed crossing; Dmax represents the maximum acceptable total delay time; S represents the total number of stops of all vehicles during the simulation; Smax represents the maximum acceptable total number of stops; Tc represents the total time vehicles are in a congested state in the traffic network, i.e. the sum of the times during which vehicles wait in line to pass a traffic light; Tsim represents the total simulation time; Geff represents the sum of the effective green time of all signal lamps during the simulation, i.e. the time during which a signal lamp is green and vehicles actually pass; Gtotal represents the sum of the green time of all signal lamps during the simulation; and w1, w2, w3 and w4 are the weight coefficients of the respective indexes, with w1 + w2 + w3 + w4 = 1;
Step 115, iterating continuously, updating the speed and the position of the particles to obtain a final signal lamp timing scheme, stopping iterating when the termination condition is reached, and outputting the final signal lamp timing scheme;
Step 116, establishing a traffic signal lamp simulation environment according to the topology diagram of the traffic network and the final signal lamp timing scheme; the topological graph of the traffic network comprises intersections, road sections and connection relations among the road sections; the signal lamp timing scheme comprises signal period and green light time parameters of each intersection.
In the embodiment of the invention, in step 111, the determination of the requirements and the targets can ensure the pertinence and the practicability of the construction of the simulation environment. Step 112, by obtaining the actual traffic network topology map, the traffic condition can be more truly simulated, and the accuracy and the reliability of the simulation result are improved. Step 113, initializing a signal lamp timing scheme by using a particle swarm optimization algorithm, which is helpful for searching an optimal solution in a global scope and improving the optimization efficiency of signal timing. Step 114, by comprehensively considering a plurality of traffic performance indexes (such as traffic flow, delay time, parking times and the like), a comprehensive fitness function is formulated, and the advantages and disadvantages of the signal lamp timing scheme can be evaluated more scientifically. Step 115, by continuously iterating and updating the speed and position of the particles, the optimal signal lamp timing scheme can be gradually approximated, and the operation efficiency of the traffic network is improved. Step 116, a simulation environment is established based on the real traffic network topology map and the optimized signal lamp timing scheme, so that powerful decision support can be provided for subsequent traffic management and planning.
For example, suppose traffic congestion occurs frequently at a busy intersection in a city, and traffic conditions need to be improved by optimizing the signal timing. This can be achieved through the following steps:
The requirements and targets are determined: the target is to reduce traffic congestion and improve vehicle passing efficiency, and the requirement is to establish a simulation environment that can simulate the traffic conditions of the intersection and optimize the signal lamp timing scheme. A network topological graph of the intersection and its surrounding roads, including the intersections, road sections and their connection relations, is obtained through field investigation or map data.
A certain number of particles (i.e. signal lamp timing schemes) are set, each particle containing parameters such as the green light time and signal period for each direction of the intersection. A fitness function is formulated from indexes such as traffic flow, delay time and parking times; for example, the weight coefficients may be set to w1 = 0.4, w2 = 0.3, w3 = 0.2, w4 = 0.1 to balance the importance of the various indexes. The particle swarm optimization algorithm then iteratively updates the speed and position of the particles, i.e. adjusts the parameters of each signal lamp timing scheme to maximize the value of the fitness function; the fitness value of each particle is recorded and compared during the iteration, and the best particle is selected as the final signal lamp timing scheme. Based on the optimized signal lamp timing scheme and the traffic network topological graph, a traffic signal lamp simulation environment is established using simulation software (such as SUMO or VISSIM); actual traffic conditions are simulated in the environment, and the changes in indexes such as traffic flow, delay time and parking times are observed and recorded. The simulation results are analyzed to compare the traffic performance indexes before and after optimization and verify the effectiveness of the optimization; if the effect is not obvious or does not reach the expected target, parameters such as the weight coefficients of the fitness function can be adjusted, or the number of iterations increased, for further optimization.
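The search described above can be sketched as follows. The functional form of `fitness` is a reconstruction from the index descriptions (normalized delay, stop, congestion and green-efficiency terms); the inertia and acceleration coefficients (0.7, 1.5, 1.5) are conventional PSO defaults, not values taken from the patent; and `evaluate` stands in for a full traffic simulation run:

```python
import random

def fitness(delay, delay_max, stops, stops_max, t_cong, t_sim, g_eff, g_total,
            w=(0.4, 0.3, 0.2, 0.1)):
    """Fitness of a timing plan: higher is better. The combination of
    terms is a reconstruction consistent with the description above."""
    w1, w2, w3, w4 = w
    return (w1 * (1 - delay / delay_max)
            + w2 * (1 - stops / stops_max)
            + w3 * (1 - t_cong / t_sim)
            + w4 * (g_eff / g_total))

def pso(evaluate, dim, n_particles=20, iters=50, bounds=(10.0, 90.0)):
    """Minimal particle swarm: each particle is a vector of green times
    (seconds), clamped to the given bounds."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_f = [evaluate(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = evaluate(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

In practice `evaluate` would run the candidate timing scheme through the simulation and feed the measured delay, stop, congestion and green-time statistics into `fitness`.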
In another preferred embodiment of the present invention, the step 12 may include:
Step 121, configuring a server or middleware to establish a real-time data communication connection with the traffic light system, ensuring that the communication protocol is compatible with the traffic light system, for example using TCP/IP, MQTT or similar protocols for data transmission; negotiating with the traffic light system to determine the format and content of the working state data (such as JSON or XML) and the frequency of data transmission, e.g. once per second or once per minute; continuously monitoring and receiving working state data from the traffic light system through the established connection, and performing integrity verification and error detection on the received data to ensure its accuracy and reliability; and storing the received valid data in a local database or cloud storage, with a data backup strategy in place to prevent data loss or corruption. Through the above steps, step 12 can effectively receive and process the working state data from the traffic light system.
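The integrity check on a received packet can be sketched for a JSON payload; the field names in `REQUIRED_FIELDS` are hypothetical, since the actual format and content are negotiated with the traffic light system:

```python
import json

# Hypothetical field names; the real ones come from the negotiated protocol.
REQUIRED_FIELDS = {"intersection_id", "timestamp", "state_id"}

def parse_status_packet(raw):
    """Decode and integrity-check one working-state packet.

    Returns the parsed record on success, or None if the packet is
    malformed JSON or is missing a required field."""
    try:
        record = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(record, dict) or not REQUIRED_FIELDS.issubset(record):
        return None
    return record
```

Packets that fail this check would be dropped (or logged) rather than stored, which keeps the downstream state analysis working on valid data only.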
In a preferred embodiment of the present invention, the step 13 may include:
Step 131, sorting the working state data according to the time stamp t_i, traversing the sorted working state data, and calculating, for any two adjacent data points, the difference of their time stamps, wherein the calculation formula of the difference of the time stamps of two adjacent data points is as follows:
Δt_i = α·(t_i − t_{i−1}) + (β/n)·Σ_{j=i−n}^{i−1} (t_j − t_{j−1}) + γ + (λ/m)·Σ_{k=i−m}^{i−1} c_k
wherein Δt_i represents the difference of the time stamps of the two adjacent data points; α represents the global weighting factor; β represents the local weighting factor; γ represents the time correction factor; n represents the average window size for computing the smoothing term of the time stamp difference; λ represents the weighting factor of the additional correction term; m represents the average window size used to calculate the additional correction term; c_k represents the additional correction term at time point k; i denotes the current data point; and j and k are indexes for traversing past data points in the smoothing term and the additional correction term, respectively;
Step 132, if the difference of the time stamps of two adjacent data points is smaller than a preset threshold, the two data points are considered too close, and either one of them is removed to obtain the processed working state data;
Step 133, analyzing the processed working state data according to the data protocol of the traffic signal lamp system, and extracting key information related to the running state of the signal lamp;
step 134, locating an identifier representing the current state of the signal lamp from the key information;
Step 135, mapping the identifier to a specific signal light status to obtain the current signal light running status.
In the embodiment of the invention, through the step 131 and the step 132, the received working state data can be effectively cleaned, and redundant data points with too close time stamps are removed, so that the accuracy and the processing efficiency of the data are improved. And a plurality of weighting factors and correction terms are set, so that the algorithm can dynamically adjust the calculation mode of the time stamp difference value according to different conditions, and the algorithm is better suitable for different traffic conditions and characteristics of a signal lamp system. Through steps 133 to 135, the operating state of the signal lamp can be extracted from the complex operating state data quickly and accurately.
In another preferred embodiment of the present invention, the step 132 may include: setting a threshold for the time stamp difference, determined by the actual application scenario and the required data update frequency; for example, if the signal lamp data should be updated once per second, the threshold may be set to 1 second. For the sorted working state data, adjacent data points are traversed and the difference of their time stamps is calculated; if the difference of the time stamps of two adjacent data points is smaller than the preset threshold, the two data points are too close in time, possibly because of repeated transmission or a systematic error, and in this case either data point (usually the later one) can be removed to reduce data redundancy.
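A simplified version of the cleaning in steps 131 and 132 might look like the following; for clarity it keeps only the global term and the smoothing term of the weighted difference (with illustrative weights α = 0.7, β = 0.3) and omits the additional correction terms:

```python
def timestamp_diff(ts, i, alpha=0.7, beta=0.3, n=3):
    """Weighted time stamp difference at index i: the raw gap to the
    previous point blended with the mean of the last n gaps (the
    smoothing term of step 131)."""
    raw = ts[i] - ts[i - 1]
    gaps = [ts[j] - ts[j - 1] for j in range(max(1, i - n), i)]
    smooth = sum(gaps) / len(gaps) if gaps else raw
    return alpha * raw + beta * smooth

def dedupe(records, threshold=1.0):
    """Drop any record whose weighted time stamp difference to the last
    kept record falls below the threshold (step 132)."""
    records = sorted(records, key=lambda r: r["t"])
    kept = [records[0]]
    for r in records[1:]:
        ts = [k["t"] for k in kept] + [r["t"]]
        if timestamp_diff(ts, len(ts) - 1) >= threshold:
            kept.append(r)
    return kept
```

With a 1-second threshold, a record arriving 0.2 s after its predecessor (e.g. a duplicate transmission) is discarded, while normally spaced records pass through unchanged.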
In another preferred embodiment of the present invention, the step 133 may include: a data protocol of the traffic light system is acquired, and the protocol defines the format of the data packet, the meaning of each field and the coding mode of the data. According to the data protocol, the processed working state data is parsed, which involves converting the data from binary or other encoding formats to readable, structured data. And extracting key information related to the running state of the signal lamp from the analyzed data, wherein the key information comprises the current color, countdown time, control mode and the like of the signal lamp. The step 134 may include: in the extracted key information, an identifier representing the current state of the signal is located, which is typically a specific field or code defined in the data protocol. Confirm that the found identifier is valid and does represent the status of the signal light. The step 135 may include: a mapping is established from the status identifiers to specific traffic light status according to the specifications or data protocols of the traffic light system, e.g., one identifier may represent a "red light" and another identifier may represent a "green light". The mapping table is used to map the status identifier located in step 134 to a specific signal status, so that the actual running status (e.g., red, green, yellow, etc.) of the current signal can be obtained. And finally, outputting the mapped signal lamp state as a result. Through the steps, useful information can be extracted from the original working state data, and the current running state of the traffic signal lamp can be known.
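The identifier-to-state mapping of steps 134 and 135 reduces to a lookup table; the numeric codes below are hypothetical, since the real codes are defined by the signal lamp system's data protocol:

```python
# Hypothetical identifier codes; the real mapping comes from the protocol.
STATE_MAP = {0x01: "red", 0x02: "green", 0x03: "yellow"}

def map_state(identifier):
    """Map a protocol identifier to a concrete signal lamp state
    (step 135); unknown identifiers are flagged rather than guessed."""
    return STATE_MAP.get(identifier, "unknown")
```

Returning an explicit "unknown" value for unrecognized identifiers makes protocol mismatches visible instead of silently mis-reporting a lamp state.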
In a preferred embodiment of the present invention, the step 14 may include:
step 141, acquiring image data of a vehicle through a data capturing point in a simulation environment, wherein the image data of the vehicle comprises a current time stamp, the position of the vehicle in the simulation environment and the speed of the vehicle;
step 142, constructing a background model through the previous N frames of images, calculating the average value of the position pixel values in the previous N frames of images for each pixel position, and taking the average value of the position pixel values as the value of the corresponding pixel in the background model;
Step 143, for each newly captured frame, traversing each pixel in the image, calculating the difference between each pixel in the current frame and the corresponding pixel in the background model, and if the difference is greater than a preset threshold, marking the pixel as a foreground pixel;
step 144, performing expansion processing on the foreground pixels by using a 3×3 rectangle until reaching a preset iteration number, so as to obtain an expanded foreground region;
Step 145, analyzing each independent connected region in the expanded foreground area to determine the centroid of each connected region, and taking the centroid as the position of the vehicle in the current frame.
In the embodiment of the invention, the foreground pixels, namely the dynamically changing vehicle parts, can be accurately identified by constructing the background model and comparing the difference between the newly captured image and the background model, thereby being beneficial to accurately extracting the vehicle position and reducing the situations of false detection and omission. By performing expansion processing on the foreground pixels, the method can better adapt to the change of the shape of the vehicle and the influence of illumination conditions, improve the robustness of vehicle detection, and fill the holes in the vehicle, which are possibly caused by factors such as shadow, reflection and the like, so that the detected vehicle area is more complete. The vehicle detection can be completed in a shorter time through pixel-by-pixel comparison and simple morphological processing, and the method is suitable for a real-time monitoring system, and has higher real-time performance because complex calculation or a large number of parameter adjustment are not needed. By determining the centroid of each connected region as the position of the vehicle in the current frame, the centroid serves as a compact and effective feature point, which is beneficial to reducing the complexity of data processing and improving the tracking accuracy.
In another preferred embodiment of the present invention, the step 141 may include: data capture points are arranged at key positions of the simulation environment, such as intersections, traffic lanes or specific observation areas; these positions are usually areas with frequent vehicle flow or complex traffic conditions, which makes it convenient to collect rich vehicle driving data. The necessary parameters are configured for each capture point, including the capture range, trigger condition and data record format, where the capture range defines the spatial region that can trigger a data record, and the trigger condition may be a vehicle entering the capture range, reaching a specific speed, or meeting another custom condition. The content and format of the data to be recorded at each trigger are determined; the data to be recorded include the following:
Timestamp: the exact time at which the vehicle triggered the capture point is recorded, typically as a UNIX timestamp (i.e., the number of seconds elapsed since 00:00:00 UTC on 1 January 1970) or in a specific date-and-time format.
Vehicle position: the exact location in the simulation environment is typically represented in the form of two-dimensional or three-dimensional coordinates. For example, on a two-dimensional plane, the position of the vehicle may be represented using (x, y) coordinates; in three dimensions, then (x, y, z) coordinates are required.
Vehicle speed: the instantaneous speed of the vehicle at the moment it triggers the capture point is recorded; the unit may be kilometers per hour, meters per second, etc. The speed may be recorded as a scalar value representing magnitude only, or as a vector if direction information is also required.
And activating the capturing point to start monitoring and recording the passing vehicle data, and storing related information according to a preset data recording format once the vehicle meets the triggering condition. The data recorded by the capture points can be transmitted to a central server or a local storage system for storage in real time.
For example, assuming a data capture point is set at an intersection, as a vehicle passes through the intersection, the capture point records the following data: timestamp: 2023-04-01 12:35:40 (hypothetical time format); vehicle position: (100, 200) (assumed two-dimensional coordinates representing a specific position of the vehicle at the intersection); vehicle speed: 60 km/h (assumed speed value, representing the speed of the vehicle as it passes through the intersection).
In another preferred embodiment of the present invention, the step 142 may include: from the beginning of a video stream or sequence of images, successive N-frame images are selected, which will be used to construct a background model. A blank image or matrix is created that is the same size as the selected N frames of images for storing the background model. For each pixel position (x, y) in the image, a traversal is performed, where x represents the abscissa (column) and y represents the ordinate (row). For each pixel position (x, y), an accumulation variable sum and a counter count are initialized. The first N frames of images are traversed, the pixel values are taken at the same pixel position (x, y) of each frame and accumulated into sum, while count is incremented by 1. After traversing the N frames, the sum is divided by the count to obtain an average value for the pixel location, which represents the color or luminance value that the pixel location should have in the absence of dynamic object interference. The calculated average is assigned to the corresponding pixel position (x, y) in the background model. The above steps are repeated until all pixel positions in the background model are assigned.
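The per-pixel averaging described above can be sketched with NumPy (grayscale frames are assumed for simplicity; the explicit accumulation loop mirrors the sum/count description in the text):

```python
import numpy as np

# Sketch of step 142: the background model is the per-pixel mean of the
# first N frames. Each frame is an H x W array; real input would be
# camera images of the simulation environment.

def build_background(frames):
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for f in frames:          # accumulate the value at each pixel position
        acc += f
    return acc / len(frames)  # per-pixel average = background model
```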
In another preferred embodiment of the present invention, the step 143 may include: a new image is captured or acquired, which will be the image that is currently being processed, ensuring that there is already a background model built according to step 142. A blank image or matrix of the same size as the current image is created as a foreground mask. Each pixel position (x, y) of the current image is traversed. For each pixel position (x, y) in the current image, its pixel value is fetched and the pixel value of the corresponding position is found in the background model. Calculating an absolute difference value of the two; a threshold is preset and if the absolute difference is greater than the threshold, the pixel is considered to be a foreground pixel. In the foreground mask, the pixel position (x, y) determined to be foreground is marked with a specific value (for example, set to 255 represents white). The above steps are repeated until all pixels in the current image are processed. The resulting foreground mask displays the dynamic portion or foreground object in the image.
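A sketch of the thresholded background subtraction just described, assuming grayscale arrays; foreground pixels are marked 255 (white) as in the text:

```python
import numpy as np

# Sketch of step 143: a pixel whose absolute difference from the
# background model exceeds `threshold` is marked 255 in the mask.

def foreground_mask(frame, background, threshold=30):
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return np.where(diff > threshold, 255, 0).astype(np.uint8)
```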
In another preferred embodiment of the present invention, the step 144 may include: the expansion process generally uses a structuring element or expansion kernel; here a 3×3 rectangle is used as the expansion kernel. This kernel slides over the image and is applied to the foreground pixels, specifically as follows: a preset number of iterations is set, which determines the intensity of the expansion processing; for each foreground pixel in the foreground mask (the pixels marked in step 143), the 3×3 rectangular kernel is applied, and if there is at least one foreground pixel in the area covered by the kernel, the central position of the kernel (i.e., the pixel position currently being processed) is set to a foreground pixel. In this way the foreground region expands outward, filling small holes and connecting adjacent foreground pixels. The expansion processing is repeated until the preset number of iterations is reached; after each iteration the foreground area expands slightly, which helps fill breaks caused by shadow, reflection or motion blur, and after a sufficient number of expansion passes, the expanded foreground region is obtained.
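A minimal NumPy sketch of the iterated 3×3 dilation (binary 0/1 masks assumed; production code would typically use OpenCV's `dilate` instead):

```python
import numpy as np

# Sketch of step 144: binary dilation with a 3x3 rectangular kernel,
# repeated for a fixed number of iterations. A pixel becomes foreground
# if any pixel under the kernel centred on it is foreground.

def dilate(mask, iterations=1):
    out = mask.astype(bool)
    for _ in range(iterations):
        padded = np.pad(out, 1, mode="constant")   # zero border
        nxt = np.zeros_like(out)
        for dy in (0, 1, 2):                       # OR over the 3x3 window
            for dx in (0, 1, 2):
                nxt |= padded[dy:dy + out.shape[0], dx:dx + out.shape[1]]
        out = nxt
    return out.astype(np.uint8)
```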
In another preferred embodiment of the present invention, the step 145 may include: performing connected region analysis on the expanded foreground region to find all independent connected regions, where a connected region is a group of interconnected pixels that share the same pixel value (here, the foreground pixel value) and are adjacent to each other; for each connected region, its centroid, which is the average of all pixel coordinates, is calculated. Let the connected region contain $N$ pixels, the coordinates of each pixel being $(x_i, y_i)$; the coordinates of the centroid $C$ are then:

$$C = \left(\frac{1}{N}\sum_{i=1}^{N} x_i,\; \frac{1}{N}\sum_{i=1}^{N} y_i\right)$$

i.e., the x-coordinate of the centroid is the average of the x-coordinates of all pixels in the region and the y-coordinate is the average of their y-coordinates, where $i$ is the pixel index and $x_i$ and $y_i$ are the $i$-th x- and y-coordinates. In the context of traffic monitoring, the centroid of each connected region may be taken as the position of the vehicle in the current frame; by tracking the movement of these centroids over time, the motion trajectories and behaviors of the vehicles can be analyzed. The above process is repeated until the centroids of all connected regions have been calculated. Through the above steps, accurate detection and tracking of vehicles at the traffic intersection is achieved.
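The connected-region analysis and centroid computation can be sketched as follows, using a simple 4-connected flood fill (a stand-in for library routines such as OpenCV's `connectedComponents`):

```python
from collections import deque

# Sketch of step 145: 4-connected component labelling of a binary
# foreground mask (list of lists of 0/1), returning the centroid
# (x_mean, y_mean) of each region.

def region_centroids(mask):
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # flood-fill one connected region
                q, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                # centroid = mean of pixel coordinates, as (x, y)
                x_mean = sum(p[1] for p in pixels) / len(pixels)
                y_mean = sum(p[0] for p in pixels) / len(pixels)
                centroids.append((x_mean, y_mean))
    return centroids
```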
In a preferred embodiment of the present invention, the step 15 may include:
Step 151, inputting a vehicle target image into the trained convolutional neural network model;
step 152, obtaining a feature map of a convolutional neural network model through forward propagation calculation;
Step 153, flattening the feature map to convert the multi-dimensional feature map into a one-dimensional feature vector.
In the embodiment of the invention, the convolutional neural network can automatically learn the hierarchical feature expression in the image, and the features which are useful for the subsequent classification or recognition task can be effectively extracted from the original image through the operations of a convolutional layer, a pooling layer and the like. This avoids the cumbersome process of manually designing and selecting features in conventional approaches. The flattening of the feature map into a one-dimensional feature vector is actually a way of data reduction and compression. The original image may contain a large amount of redundant information, and the feature map obtained after processing by the convolutional neural network is more compact and effective, so that the subsequent classifier or regressor is convenient to process. Because the convolutional neural network learns various transformations and local features of the image in the training process, the convolutional neural network has a certain invariance to transformations such as translation, rotation, scaling and the like of the image, and the robustness of the model to the changes of input data is enhanced. The whole process from the input of the original image to the output of the one-dimensional feature vector realizes the end-to-end automatic processing without excessive manual intervention, and improves the processing efficiency and accuracy.
In another preferred embodiment of the present invention, when applied specifically, the step 15 may further include:
Reading the original image file using an image processing library (e.g., OpenCV); determining the target size (e.g., 224×224 pixels) to which the image needs to be scaled according to the input requirements of the CNN model; initializing a blank canvas of the target size, traversing each pixel of the target image and, for each target pixel, calculating the corresponding position in the original image; using bilinear interpolation, computing the value of the target pixel from the pixel values of the four pixels surrounding the corresponding position in the original image, and filling the computed value into the corresponding position of the target image. Determining the size of the image area to be cropped according to the input requirements of the CNN model, and randomly selecting a starting point coordinate in the original image such that the cropping area starting from this point does not exceed the image boundary; then cropping an image area of fixed size from the original image according to the determined cropping size and the randomly selected starting point. Reading the image whose color space needs to be converted, selecting an appropriate color space conversion function (for example, RGB to HSV or Lab) as required, and passing the image data to the conversion function to obtain the converted image data. Reading the image to be normalized and determining the range of its pixel values (e.g., for an 8-bit image, the pixel value range is typically 0-255); each pixel value is divided by the maximum of this range (e.g., 255), thereby normalizing the pixel values to [0, 1].
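The bilinear scaling and normalization steps can be sketched as follows (grayscale input assumed for brevity; a real pipeline would typically call OpenCV's `resize` rather than looping in Python):

```python
import numpy as np

# Sketch of the bilinear interpolation and [0, 1] normalization steps.
# `img` is an H x W grayscale array with 8-bit pixel values.

def bilinear_resize_normalize(img, out_h, out_w):
    in_h, in_w = img.shape
    out = np.zeros((out_h, out_w))
    for ty in range(out_h):
        for tx in range(out_w):
            # corresponding (fractional) position in the source image
            sy = ty * (in_h - 1) / max(out_h - 1, 1)
            sx = tx * (in_w - 1) / max(out_w - 1, 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
            fy, fx = sy - y0, sx - x0
            # weighted sum of the four neighbouring source pixels
            out[ty, tx] = (img[y0, x0] * (1 - fy) * (1 - fx) +
                           img[y0, x1] * (1 - fy) * fx +
                           img[y1, x0] * fy * (1 - fx) +
                           img[y1, x1] * fy * fx)
    return out / 255.0   # normalize 8-bit values to [0, 1]
```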
Reading an image to be subjected to data enhancement, randomly selecting a rotation angle according to preset probability distribution, and rotating the image; selecting horizontal overturning or vertical overturning according to requirements, and overturning the image; and randomly determining the translation direction and distance, and carrying out translation operation on the image.
Reading an image file by using an OpenCV image processing library, and loading the image file into a memory to serve as a multidimensional array; according to the input requirement of the CNN model, adjusting the dimension sequence of the array; converting the multidimensional array into a Tensor object by using a method provided by a deep learning framework; the data type of Tensor is converted to float32 and a batch dimension is added in the dimension to process multiple images.
A data loader provided by the deep learning framework is used to efficiently load data from disk and supply small batches of data to the model. The structure of the CNN model is defined, including convolutional layers, activation functions (e.g., ReLU), pooling layers and so on, and the weight and bias parameters of the model are initialized. The preprocessed mini-batch Tensor data are input into the CNN model; features are extracted by the convolutional layers, downsampling is performed by the pooling layers to reduce the spatial size of the data, and the multidimensional feature map is flattened into a one-dimensional feature vector. The one-dimensional feature vector is input into the fully connected layer, the prediction probability of each category is obtained through an activation function (such as softmax), and the difference between the model prediction and the true label is calculated using the cross-entropy loss function. From the error given by the loss function, gradients are computed by the backpropagation algorithm and propagated back through the whole network, and the weight and bias parameters of each layer are updated according to the computed gradients using the stochastic gradient descent (SGD) algorithm. The model is trained over multiple epochs; in each epoch the entire training set is traversed using mini-batches. After each epoch, the performance of the model is evaluated on the validation set, and the hyperparameters (such as learning rate and batch size) are adjusted according to the validation result to optimize the training process; after training is complete, the final performance of the model is evaluated on an independent test set.
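The softmax activation and cross-entropy loss mentioned above can be sketched in NumPy (a stand-in for the framework-provided versions; shapes and names are illustrative):

```python
import numpy as np

# Sketch of the classification head: logits -> softmax probabilities
# -> mean cross-entropy against integer class labels.

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    """`logits`: (batch, classes) array; `labels`: (batch,) int array."""
    probs = softmax(logits)
    n = logits.shape[0]
    return -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
```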
In the embodiment of the invention, the image data can be loaded from the disk efficiently and converted into the Tensor format suitable for deep learning model processing by using the image processing library such as OpenCV and the data loader provided by the deep learning framework, and the processing mode not only improves the data loading speed, but also ensures that the preprocessing and enhancement of the data are more convenient and flexible. The invention provides a variety of image preprocessing operations including scaling, cropping, color space conversion, normalization, and data enhancement. The operations can effectively improve the generalization capability of the model, so that the model has better recognition performance on images with different sizes, colors, illumination and angles. By defining a CNN model structure that contains a convolution layer, an activation function, and a pooling layer, the invention is able to extract the valid features in the image and map these features to the final classification result through the full connection layer. The difference between the model prediction result and the real label is measured by using the cross entropy loss function, so that the performance of the model can be reflected more accurately. Meanwhile, the parameters of the model are updated through a back propagation algorithm and an SGD optimization algorithm, so that the model can be continuously learned and optimized in the training process, and the classification accuracy of the model is improved.
In another preferred embodiment of the present invention, the step 16 may include:
In step 161, for each vehicle target, feature extractors (such as SIFT algorithm) are used to extract feature vectors from the image, which capture key information such as appearance, shape, texture, etc. of the vehicle.
Step 162, for the vehicle targets detected at adjacent time points (e.g., $t$ and $t+1$), calculating the similarity between their feature vectors, which can be calculated as:

$$R = \frac{\sum_{i=1}^{n} V1_i \cdot V2_i}{\sqrt{\sum_{i=1}^{n} V1_i^2}\,\sqrt{\sum_{i=1}^{n} V2_i^2}}$$

wherein $V1_i$ and $V2_i$ are the $i$-th components of vectors $V1$ and $V2$, respectively, $n$ is the dimension of the vectors and $R$ is the similarity.
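Assuming the similarity in step 162 is the cosine similarity, a common choice for comparing feature vectors, it can be computed as:

```python
import math

# Cosine similarity of two feature vectors: dot product divided by the
# product of the vector norms. Returns a value in [-1, 1].

def cosine_similarity(v1, v2):
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return dot / (n1 * n2)
```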
Step 163, performing association matching on the vehicle targets in the adjacent time points based on the calculated similarity, specifically including:
The sets of vehicle targets detected at two adjacent time points are denoted SetA and SetB, respectively, together with the feature vector of each vehicle target and the corresponding similarity matrix S, where S[i][j] represents the similarity of the i-th vehicle target in SetA and the j-th vehicle target in SetB. An n×n similarity matrix S is created, where n is the larger of the numbers of vehicle targets in SetA and SetB (if the numbers are unequal, the extra rows or columns of the matrix may be filled with a very small similarity value, indicating that matches there are not possible). The similarity matrix S is then converted to a cost matrix C by negating the similarities, i.e., C[i][j] = -S[i][j].
At initialization, all rows and columns are marked as uncovered. An unmarked row i is found in the cost matrix C, then an unmarked zero element C[i][j] is found in that row, and the row and column are marked; this step is repeated until all rows are marked or no more zero elements are found. If all rows have been marked and all zero elements have been found, a line-covering step is performed, in which all zero elements are covered with the minimum number of lines. If the number of lines required equals the number of vehicle targets, an optimal solution has been found; otherwise, the minimum among the uncovered elements is found, this minimum is subtracted from all uncovered elements, and it is added at each intersection of a covered row and a covered column. In this way, while the optimal solution is kept unchanged, conditions are created for finding zero elements in the next round. These steps are repeated until an optimal allocation is found. Once it has been found, the vehicle targets in SetA can be associated with the vehicle targets in SetB according to the allocation result: if C[i][j] is a zero element and is selected as part of the optimal allocation, the i-th vehicle target in SetA is associated with the j-th vehicle target in SetB. The output is an association matching list in which each element is a tuple (vehicleA, vehicleB), indicating that vehicleA in SetA and vehicleB in SetB are associated.
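For small target sets, the optimal assignment sought by the procedure above can be sketched by brute force over permutations (the Hungarian algorithm computes the same result in polynomial time; library implementations such as SciPy's `linear_sum_assignment` would be used in practice):

```python
from itertools import permutations

# Brute-force optimal one-to-one assignment maximizing total similarity.
# S[i][j] = similarity of target i in SetA and target j in SetB
# (square matrix). Only practical for small n, but the result matches
# the Hungarian algorithm's optimum.

def best_assignment(S):
    n = len(S)
    best, best_pairs = float("-inf"), []
    for perm in permutations(range(n)):
        total = sum(S[i][perm[i]] for i in range(n))
        if total > best:
            best, best_pairs = total, [(i, perm[i]) for i in range(n)]
    return best_pairs
```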
A similarity threshold is set; only when the similarity of two targets exceeds the threshold are they considered continuous observations of the same vehicle. Through the association matching process, the targets of the same vehicle at consecutive time points can be connected to form the motion track of the vehicle; these data constitute the vehicle tracking data, including the position, speed, acceleration and other information of each vehicle.
In the embodiment of the invention, the feature extractor (such as SIFT algorithm) is used for extracting the feature vector from the image, and the method can capture the key information such as the appearance, the shape, the texture and the like of the vehicle, so that the vehicle can be more accurately identified and tracked, the situation of error tracking and target loss can be reduced, and the tracking accuracy can be improved. By calculating the similarity between the feature vectors and performing association matching based on the similarity, the method can efficiently process a large amount of vehicle target data, and in addition, the optimal distribution is performed by using the Hungary algorithm, so that the most efficient matching among a plurality of targets can be ensured, and the efficiency and accuracy of data processing are improved. The method has stronger robustness to the conditions of shielding, deformation, illumination change and the like of the vehicle target. Because the feature extractor can extract stable features, the similarity calculation and correlation matching process can effectively process the complex situations, and the tracking stability and reliability are ensured.
In another preferred embodiment of the present invention, the step 17 may include:
Step 171, extracting the number of vehicles passing through each intersection per unit time (e.g., 5 or 10 minutes) from the vehicle tracking data, and recording the running state of the current signal lamp, including the durations of the red, green and yellow lights. The number of vehicles passing through the intersection during the green phase is counted; based on this count, the average number of vehicles passing per unit time is calculated, and the traffic flow in each period is computed by combining the signal cycle with the green ratio (the ratio of green time to signal cycle).
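The per-period flow estimate can be sketched as follows (function and parameter names are illustrative assumptions):

```python
# Sketch of step 171's flow estimate: vehicles are counted during the
# green phase; the green ratio and cycle length scale the count to an
# hourly flow for the approach.

def flow_per_hour(vehicles_in_green, green_s, cycle_s):
    """Returns (estimated vehicles/hour, green ratio) given the count of
    vehicles passing during one green phase of `green_s` seconds within
    a signal cycle of `cycle_s` seconds."""
    green_ratio = green_s / cycle_s          # share of the cycle that is green
    cycles_per_hour = 3600 / cycle_s
    return vehicles_in_green * cycles_per_hour, green_ratio
```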
Step 172, calculating the average speed of the vehicle passing through the intersection by using the vehicle tracking data, analyzing the speed distribution, identifying the common driving speed range, and calculating the average acceleration and deceleration of the vehicle by analyzing the vehicle speed change at the continuous time points, wherein the data can help to identify the area and the time period of the traffic jam.
Step 173, analyzing the vehicle tracking data, determining the main driving path and steering preference of the vehicle, and identifying the road section and intersection with larger traffic flow by combining the map data. During the red light of the signal lamp, the number of vehicles reaching the intersection and waiting for parking is recorded, and the total number of vehicles accumulated during the red light is estimated according to the arrival rate of the vehicles and the duration of the red light. Assuming that each vehicle occupies a certain road length (estimated according to the vehicle type and average vehicle length), the total number of vehicles accumulated during the red light is multiplied by the road length occupied by each vehicle to obtain the total queuing length. After the green light is turned on, the speed and number of vehicles passing through the intersection are observed and recorded. From these data, estimating the speed and time of queue dissipation during green light; and taking the fluctuation and randomness of the traffic flow into consideration, and dynamically adjusting according to the real-time data.
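The queue-length and dissipation estimates in step 173 can be sketched as follows; the arrival rate, discharge rate and per-vehicle road length are illustrative parameters, with the per-vehicle length standing in for the average vehicle length plus gap mentioned in the text:

```python
# Sketch of step 173's red-phase queue estimate and green-phase
# dissipation estimate.

def queue_length_m(arrival_rate, red_s, veh_len_m=6.0):
    """Metres of queue accumulated during a red phase: vehicles arriving
    at `arrival_rate` (veh/s) for `red_s` seconds, each occupying
    `veh_len_m` metres of road."""
    accumulated = arrival_rate * red_s
    return accumulated * veh_len_m

def dissipation_time_s(queued_vehicles, discharge_rate):
    """Seconds of green needed to clear the queue at `discharge_rate`
    vehicles per second."""
    return queued_vehicles / discharge_rate
```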
In the embodiment of the invention, the traffic flow in each period can be predicted more accurately by counting and analyzing the number of vehicles passing through the intersection and the running state of the signal lamp in unit time in detail, so that traffic management departments can know traffic conditions better and make reasonable scheduling and planning. Through accurate prediction of traffic flow, the timing scheme of the signal lamp can be optimized, and the traffic efficiency of the intersection is improved. For example, green light time is increased during peak traffic periods to reduce vehicle queuing and congestion. By analyzing the speed, acceleration and deceleration of the vehicle, the area and the time period of traffic jam can be identified, so that traffic management departments can find and solve traffic bottlenecks in time, and the road use efficiency is improved. By analyzing the vehicle driving path and steering preference, more reasonable driving advice can be provided for the driver, and unnecessary detours and congestion are reduced. The total number of vehicles accumulated in the red light period and the queuing dissipation speed and time in the green light period are monitored and calculated in real time, so that the queuing length of the vehicles can be accurately estimated, traffic management departments can know traffic conditions of intersections in time, and corresponding measures are taken for guiding.
In another preferred embodiment of the present invention, the step 18 may include:
Step 181, through the estimation in step 17, real-time data on the queuing distance of the vehicles have already been obtained; traffic flow parameters including traffic flow, speed and vehicle type are collected and analyzed, and these parameters may be obtained in real time by means of traffic monitoring systems, induction coils, cameras and the like. From the vehicle queuing distance and the real-time traffic flow, it is calculated whether the current green time can meet the traffic demand; if the queuing distance is too long or the traffic flow is large, the green time needs to be increased, based on the real-time change of the traffic flow. For example, if a 20% increase in traffic flow is detected, a corresponding 20% increase in green time may be considered. At the same time, the timing of signal lamps at adjacent intersections must also be considered, so that an over-long green time at one intersection does not cause congestion at other intersections.
In step 182, the yellow light time is used to warn the driver that the signal light is about to be changed, the time length should be set according to the actual situation, so as to ensure that the driver has enough time to react, and the red light time should be correspondingly adjusted according to the green light time and the traffic flow. For example, if the green time increases, the red time may need to be reduced accordingly to keep the signal period stable. A signal cycle refers to the time interval from the start of one green light to the start of the next green light. The period adjustment should be based on the overall traffic flow and queuing situation. If the traffic flow of a plurality of intersections is large, the signal lamp period can be shortened to improve the traffic efficiency. But this needs to be done with safety ensured. After the signal lamp working parameters are adjusted, the implementation effect needs to be evaluated. This can be achieved by comparing the traffic flow, queuing length, traffic speed, etc. before and after adjustment. If the evaluation result shows that the adjustment effect is poor, or a new problem (such as the aggravation of the congestion at a certain intersection) occurs, the working parameters of the signal lamp need to be further adjusted, or other traffic management measures are considered to be adopted. Through the steps, the working parameters of the traffic signal lamp can be dynamically adjusted according to the queuing distance of the vehicle and the traffic flow parameters, so that the traffic efficiency and the safety are improved.
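The proportional green-time adjustment with a fixed cycle, as described in steps 181 and 182, can be sketched as follows; the clamping bounds are illustrative safety assumptions, not values from the patent:

```python
# Sketch of steps 181-182: scale green time with the observed relative
# flow change, clamp it to safety bounds, and rebalance red so the
# signal cycle (green + red + yellow) stays stable.

def adjust_timing(green_s, red_s, yellow_s, flow_change,
                  min_green=10.0, max_green=90.0):
    """`flow_change` is the relative change in traffic flow, e.g. 0.2
    for a detected 20% increase."""
    new_green = min(max(green_s * (1 + flow_change), min_green), max_green)
    cycle = green_s + red_s + yellow_s      # keep the cycle length fixed
    new_red = max(cycle - new_green - yellow_s, 0.0)
    return new_green, new_red
```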
As shown in fig. 2, an embodiment of the present invention further provides a remote visual dynamic indication control device 20, including:
An acquisition module 21, configured to construct a traffic signal simulation environment; receiving working state data sent by a traffic signal lamp system; processing the working state data to obtain the running state of the current signal lamp; acquiring vehicle data and a vehicle target from a simulation environment; extracting feature vectors from the vehicle target using a feature extractor;
The processing module 22 is configured to calculate a similarity of the vehicle target in the adjacent time points in the simulation environment according to the feature vector, and perform association matching to obtain vehicle tracking data; estimating traffic flow, driving modes and vehicle queuing distances of all intersections according to the vehicle tracking data and the running state of the current signal lamp; and dynamically adjusting the working parameters of the traffic signal lamp according to the queuing distance of the vehicle and the traffic flow parameters.
Optionally, constructing a traffic signal simulation environment includes:
Determining the requirements and targets of a simulation environment;
obtaining a topological graph of the traffic network according to the requirements and the targets of the simulation environment;
Initializing a particle swarm, wherein each particle represents a signal lamp timing scheme;
According to traffic flow, delay time and parking times, determining and evaluating a fitness function of the signal lamp timing scheme, where the fitness function is calculated as:

$$F = w_1\frac{D}{D_{\max}} + w_2\frac{S}{S_{\max}} + w_3\frac{T_c}{T_{\mathrm{sim}}} + w_4\left(1 - \frac{G_{\mathrm{eff}}}{G_{\mathrm{total}}}\right)$$

wherein $F$ represents the value of the fitness function; $D$ represents the total delay time of all vehicles during the simulation, the delay time being the difference between the actual time a vehicle waits at the signal lamp and the time required for an ideal unobstructed crossing; $D_{\max}$ the maximum total delay time; $S$ the total number of stops of all vehicles during the simulation; $S_{\max}$ the maximum total number of stops; $T_c$ the total time vehicles are in a congested state in the traffic network, i.e., the sum of the times during which vehicles are waiting in line to pass a traffic light; $T_{\mathrm{sim}}$ the total simulation time; $G_{\mathrm{eff}}$ the sum of the effective green times of all signal lamps during the simulation, i.e., the time during which a signal is green and vehicles are passing; $G_{\mathrm{total}}$ the sum of the green times of all signal lamps during the simulation; and $w_1$, $w_2$, $w_3$ and $w_4$ the weight coefficients of the respective indexes, with $w_1+w_2+w_3+w_4=1$;
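Under one reading of the fitness function, four normalized penalty terms (lower is better) combined with weights summing to 1, a sketch is:

```python
# Sketch of the timing-scheme fitness evaluation: each term is a
# penalty in [0, 1], so a lower F means a better timing scheme.
# The exact functional form is one plausible reading of the patent's
# definitions, not a confirmed reproduction of its formula.

def fitness(delay, max_delay, stops, max_stops,
            congest_time, sim_time, eff_green, total_green,
            w=(0.25, 0.25, 0.25, 0.25)):
    w1, w2, w3, w4 = w
    assert abs(sum(w) - 1.0) < 1e-9          # weights must sum to 1
    return (w1 * delay / max_delay +
            w2 * stops / max_stops +
            w3 * congest_time / sim_time +
            w4 * (1 - eff_green / total_green))
```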
continuously iterating, updating the speed and the position of the particles to obtain a final signal lamp timing scheme, stopping iterating when a termination condition is reached, and outputting the final signal lamp timing scheme;
and establishing a traffic signal lamp simulation environment according to the topology diagram of the traffic network and the final signal lamp timing scheme.
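The particle-swarm steps above can be sketched as follows — a minimal Python illustration in which `simulate` is a stand-in for the traffic simulation that returns the raw indicators consumed by the fitness function; the inertia and acceleration coefficients (0.7, 1.5, 1.5), the equal weights, and the green-time bounds are illustrative assumptions, not values given in the patent:

```python
import random

def fitness(timing, simulate):
    # simulate(timing) -> (delay, stops, congested_time, effective_green,
    #                      max_delay, max_stops, total_time, total_green)
    d, s, tc, geff, dmax, smax, ttot, gtot = simulate(timing)
    w1 = w2 = w3 = w4 = 0.25  # weight coefficients, summing to 1
    return (w1 * d / dmax + w2 * s / smax
            + w3 * tc / ttot + w4 * (1.0 - geff / gtot))

def pso(n_particles, dim, simulate, iters=50, bounds=(10.0, 90.0)):
    """Each particle is a candidate timing scheme: `dim` green times (s)."""
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                      # personal bests
    pbest_f = [fitness(x, simulate) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]        # global best
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive + social terms
                vs[i][k] = (0.7 * vs[i][k]
                            + 1.5 * r1 * (pbest[i][k] - xs[i][k])
                            + 1.5 * r2 * (gbest[k] - xs[i][k]))
                xs[i][k] = min(hi, max(lo, xs[i][k] + vs[i][k]))
            f = fitness(xs[i], simulate)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = xs[i][:], f
    return gbest, gbest_f
```

Iterating updates each particle's velocity toward its personal best and the swarm's global best; the particle with the lowest fitness when the termination condition is reached yields the final timing scheme.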
Optionally, the topological graph of the traffic network comprises intersections, road sections and connection relations among the road sections; the signal lamp timing scheme comprises signal period and green light time parameters of each intersection.
Optionally, processing the working state data to obtain the running state of the current signal lamp includes:
Sorting the working state data by timestamp t_i, traversing the sorted working state data, and for any two adjacent data points calculating the difference of their timestamps, wherein the difference is calculated as:
Δt_i = α·(t_i − t_{i−1}) + β·γ·(1/n)·Σ_{j=i−n}^{i−1} (t_j − t_{j−1}) + δ·(1/m)·Σ_{k=i−m}^{i−1} C_k
wherein Δt_i represents the difference of the timestamps of two adjacent data points; α represents the global weighting factor; β represents the local weighting factor; γ represents the time correction factor; n represents the averaging window size used to compute the smoothing term of the timestamp difference; δ represents the weighting factor of the additional correction term; m represents the averaging window size used to compute the additional correction term; C_k represents the additional correction term at time point k; i denotes the current data point; and j and k are the indexes that traverse past data points for the smoothing term and for the additional correction term, respectively;
If the difference of the timestamps of two adjacent data points is smaller than a preset threshold, the two data points are considered too close together, and one of them is removed, to obtain the processed working state data;
Analyzing the processed working state data according to a data protocol of the traffic signal lamp system, and extracting key information related to the running state of the signal lamp;
locating an identifier representing the current state of the signal lamp from the key information;
The identifier is mapped to a specific signal lamp state to obtain the current signal lamp operating state.
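The timestamp-filtering and identifier-mapping steps can be sketched as follows — a minimal Python illustration in which the state identifiers, their mapping to lamp states, and the threshold are hypothetical, since the actual data protocol of the signal lamp system is not specified here; for brevity the sketch compares only raw adjacent timestamp differences rather than the weighted, smoothed difference defined above:

```python
def clean_and_decode(records, threshold=0.05, state_map=None):
    """records: list of (timestamp, identifier) pairs from the signal
    lamp system; the identifier values and their mapping to lamp states
    are assumptions, as the real data protocol is not specified."""
    if state_map is None:
        state_map = {0: "red", 1: "green", 2: "yellow"}  # assumed mapping
    records = sorted(records, key=lambda r: r[0])        # sort by timestamp
    cleaned = [records[0]]
    for ts, ident in records[1:]:
        # keep a point only if it is not too close to the last kept one;
        # the patent's full formula adds weighted smoothing and
        # correction terms on top of this raw adjacent difference
        if ts - cleaned[-1][0] >= threshold:
            cleaned.append((ts, ident))
    # map each surviving identifier to a specific signal lamp state
    return [(ts, state_map[ident]) for ts, ident in cleaned]
```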
Optionally, acquiring the vehicle data and the vehicle target from the simulation environment includes:
acquiring image data of a vehicle through a data capturing point in a simulation environment;
Constructing a background model from the first N frames of images: for each pixel position, calculating the mean of the pixel values at that position over the first N frames, and taking that mean as the value of the corresponding pixel in the background model;
for each newly captured image of each frame, traversing each pixel in the image, calculating the difference value between each pixel in the current frame and the corresponding pixel in the background model, and marking the pixel as a foreground pixel if the difference value is larger than a preset threshold value;
performing dilation on the foreground pixels with a 3×3 rectangular structuring element for a preset number of iterations to obtain a dilated foreground region;
and analyzing each independent connected region in the dilated foreground region to determine its centroid, and taking the centroid as the position of the vehicle in the current frame.
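The background-subtraction pipeline above can be sketched as follows — a minimal NumPy illustration for grayscale frames; the difference threshold and iteration count are illustrative, and for brevity the centroid step treats the whole foreground as a single region instead of labelling each connected region separately:

```python
import numpy as np

def background_model(frames):
    """Per-pixel mean over the first N grayscale frames."""
    return np.mean(np.stack(frames), axis=0)

def detect_vehicles(frame, background, diff_thresh=30, dilate_iters=2):
    # mark pixels whose difference from the background exceeds the threshold
    fg = np.abs(frame.astype(float) - background) > diff_thresh
    # dilation with a 3x3 rectangular structuring element, repeated
    # for a preset number of iterations
    for _ in range(dilate_iters):
        padded = np.pad(fg, 1)
        out = np.zeros_like(fg)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out |= padded[1 + dy : 1 + dy + fg.shape[0],
                              1 + dx : 1 + dx + fg.shape[1]]
        fg = out
    # centroid of the foreground (single-region case for brevity; a full
    # implementation would label each connected region and take one
    # centroid per region)
    ys, xs = np.nonzero(fg)
    if len(ys) == 0:
        return None
    return (float(ys.mean()), float(xs.mean()))
```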
Optionally, extracting the feature vector from the vehicle target using the feature extractor includes:
inputting a vehicle target image into the trained convolutional neural network model;
obtaining a characteristic diagram of a convolutional neural network model through forward propagation calculation;
The feature map is flattened to convert the multi-dimensional feature map into one-dimensional feature vectors.
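The feature-extraction steps can be sketched as follows — a toy NumPy stand-in for the trained convolutional neural network, using hand-written kernels and ReLU in place of learned layers, to show the forward pass and the flattening of the multi-dimensional feature map into a one-dimensional vector:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (single channel) — a minimal stand-in
    for one convolutional layer of the trained model."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def extract_feature_vector(image, kernels):
    """Forward pass (convolution + ReLU per kernel), then flatten all
    feature maps into a single one-dimensional feature vector."""
    feature_maps = [np.maximum(conv2d(image, k), 0.0) for k in kernels]
    return np.concatenate([fm.ravel() for fm in feature_maps])
```

In practice the feature map would come from the forward pass of the trained convolutional neural network; only the flattening step is identical.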
Optionally, the image data of the vehicle includes a current time stamp, a position of the vehicle in the simulation environment, and a speed of the vehicle.
It should be noted that the apparatus is an apparatus corresponding to the above method, and all implementation manners in the above method embodiment are applicable to this embodiment, so that the same technical effects can be achieved.
Embodiments of the present invention also provide a computing device comprising: a processor, a memory storing a computer program which, when executed by the processor, performs the method as described above. All the implementation manners in the method embodiment are applicable to the embodiment, and the same technical effect can be achieved.
Embodiments of the present invention also provide a computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform a method as described above. All the implementation manners in the method embodiment are applicable to the embodiment, and the same technical effect can be achieved.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
Furthermore, it should be noted that in the apparatus and method of the present invention, the components or steps may evidently be decomposed and/or recombined, and such decomposition and/or recombination should be considered equivalent aspects of the present invention. The steps of the series of processes described above may naturally be performed in chronological order in the order of description, but need not be performed in that order; some steps may be performed in parallel or independently of each other. Those of ordinary skill in the art will appreciate, upon reading the present specification, that all or any of the steps or components of the methods and apparatus of the present invention may be implemented in hardware, firmware, software, or any combination thereof, in any computing device (including processors, storage media, etc.) or network of computing devices.
The object of the invention can thus also be achieved by running a program or a set of programs on any computing device, which may be a well-known general-purpose device, or merely by providing a program product containing program code for implementing the method or apparatus. That is, such a program product also constitutes the present invention, and a storage medium storing such a program product also constitutes the present invention; the storage medium may be any known storage medium or any storage medium developed in the future.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the present invention.

Claims (5)

1. A remote visual dynamic indication control method, characterized in that the method comprises:
constructing a traffic signal lamp simulation environment;
Receiving working state data sent by a traffic signal lamp system;
processing the working state data to obtain the running state of the current signal lamp;
Acquiring vehicle data and a vehicle target from a simulation environment;
Extracting feature vectors from the vehicle target using a feature extractor;
According to the feature vector, calculating the similarity of the vehicle targets in adjacent time points in the simulation environment, and performing association matching to obtain vehicle tracking data;
Estimating traffic flow, driving modes and vehicle queuing distances of all intersections according to the vehicle tracking data and the running state of the current signal lamp;
Dynamically adjusting working parameters of traffic signal lamps according to the queuing distance of vehicles and traffic flow parameters;
Constructing a traffic signal lamp simulation environment, comprising:
Determining the requirements and targets of a simulation environment;
obtaining a topological graph of the traffic network according to the requirements and the targets of the simulation environment;
Initializing a particle swarm, wherein each particle represents a signal lamp timing scheme;
According to traffic flow, delay time and number of stops, determining and evaluating a fitness function of the signal lamp timing scheme, wherein the fitness function is calculated as:
F = w1·(D/Dmax) + w2·(S/Smax) + w3·(Tc/Tsim) + w4·(1 − Geff/Ggreen)
wherein F represents the value of the fitness function; D represents the total delay time of all vehicles during the simulation, the delay time being the difference between the time a vehicle actually waits at a signal lamp and the time an ideal unobstructed crossing would require; Dmax represents the maximum total delay time; S represents the total number of stops of all vehicles during the simulation; Smax represents the maximum total number of stops; Tc represents the total time vehicles are in a congested state in the traffic network, i.e. the sum of the times during which vehicles are queued waiting to pass a signal lamp; Tsim represents the total simulation time; Geff represents the sum of the effective green time of all signal lamps during the simulation, i.e. the time during which a signal lamp is green and vehicles are passing; Ggreen represents the sum of the green time of all signal lamps during the simulation; and w1, w2, w3 and w4 are the weight coefficients of the respective indexes, with w1 + w2 + w3 + w4 = 1;
continuously iterating, updating the speed and the position of the particles to obtain a final signal lamp timing scheme, stopping iterating when a termination condition is reached, and outputting the final signal lamp timing scheme;
Establishing a traffic signal lamp simulation environment according to a topological graph of a traffic network and a final signal lamp timing scheme;
The topological graph of the traffic network comprises intersections, road sections and connection relations among the road sections; the signal lamp timing scheme comprises signal periods and green light time parameters of all intersections;
Processing the working state data to obtain the running state of the current signal lamp, including:
Sorting the working state data by timestamp t_i, traversing the sorted working state data, and for any two adjacent data points calculating the difference of their timestamps, wherein the difference is calculated as:
Δt_i = α·(t_i − t_{i−1}) + β·γ·(1/n)·Σ_{j=i−n}^{i−1} (t_j − t_{j−1}) + δ·(1/m)·Σ_{k=i−m}^{i−1} C_k
wherein Δt_i represents the difference of the timestamps of two adjacent data points; α represents the global weighting factor; β represents the local weighting factor; γ represents the time correction factor; n represents the averaging window size used to compute the smoothing term of the timestamp difference; δ represents the weighting factor of the additional correction term; m represents the averaging window size used to compute the additional correction term; C_k represents the additional correction term at time point k; i denotes the current data point; and j and k are the indexes that traverse past data points for the smoothing term and for the additional correction term, respectively;
If the difference of the timestamps of two adjacent data points is smaller than a preset threshold, the two data points are considered too close together, and one of them is removed, to obtain the processed working state data;
Analyzing the processed working state data according to a data protocol of the traffic signal lamp system, and extracting key information related to the running state of the signal lamp;
locating an identifier representing the current state of the signal lamp from the key information;
Mapping the identifier to a specific signal lamp state to obtain the running state of the current signal lamp;
Acquiring vehicle data and vehicle targets from a simulation environment, comprising:
acquiring image data of a vehicle through a data capturing point in a simulation environment;
Constructing a background model from the first N frames of images: for each pixel position, calculating the mean of the pixel values at that position over the first N frames, and taking that mean as the value of the corresponding pixel in the background model;
for each newly captured image of each frame, traversing each pixel in the image, calculating the difference value between each pixel in the current frame and the corresponding pixel in the background model, and marking the pixel as a foreground pixel if the difference value is larger than a preset threshold value;
performing dilation on the foreground pixels with a 3×3 rectangular structuring element for a preset number of iterations to obtain a dilated foreground region;
Analyzing each independent connected region in the dilated foreground region to determine its centroid, and taking the centroid as the position of the vehicle in the current frame;
extracting feature vectors from a vehicle target using a feature extractor, comprising:
inputting a vehicle target image into the trained convolutional neural network model;
obtaining a characteristic diagram of a convolutional neural network model through forward propagation calculation;
flattening the feature map to convert the multi-dimensional feature map into one-dimensional feature vectors;
According to the feature vector, calculating the similarity of the vehicle targets in adjacent time points in the simulation environment, and performing association matching to obtain vehicle tracking data, wherein the method comprises the following steps:
Extracting feature vectors from the images using a feature extractor for each vehicle target;
For the vehicle targets detected at adjacent time points, the similarity between their feature vectors is calculated; for two feature vectors V1 and V2, the similarity is calculated as:
R = Σ_{i=1}^{n} v1_i·v2_i / ( √(Σ_{i=1}^{n} v1_i²) · √(Σ_{i=1}^{n} v2_i²) )
wherein v1_i and v2_i are the i-th components of vectors V1 and V2 respectively, n is the dimension of the vectors, and R is the similarity;
and carrying out association matching on the vehicle targets in the adjacent time points based on the calculated similarity.
2. The method of claim 1, wherein the image data of the vehicle includes a current time stamp, a location of the vehicle in the simulated environment, and a speed of the vehicle.
3. A remote visual dynamic indication control device, characterized by being applied in the method according to any one of claims 1 to 2, comprising:
The acquisition module is used for constructing a traffic signal lamp simulation environment; receiving working state data sent by a traffic signal lamp system; processing the working state data to obtain the running state of the current signal lamp; acquiring vehicle data and a vehicle target from a simulation environment; extracting feature vectors from the vehicle target using a feature extractor;
The processing module is configured to calculate the similarity of vehicle targets at adjacent time points in the simulation environment according to the feature vectors and perform association matching to obtain vehicle tracking data; to estimate the traffic flow, driving modes and vehicle queuing distances of all intersections according to the vehicle tracking data and the running state of the current signal lamp; and to dynamically adjust the working parameters of the traffic signal lamps according to the vehicle queuing distances and the traffic flow parameters.
4. A computing device, comprising:
One or more processors;
Storage means for storing one or more programs which when executed by the one or more processors cause the one or more processors to implement the method of any of claims 1-2.
5. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a program which, when executed by a processor, implements the method according to any of claims 1 to 2.
CN202410558345.2A 2024-05-08 2024-05-08 Remote visual dynamic indication control method and device Active CN118155434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410558345.2A CN118155434B (en) 2024-05-08 2024-05-08 Remote visual dynamic indication control method and device


Publications (2)

Publication Number Publication Date
CN118155434A CN118155434A (en) 2024-06-07
CN118155434B true CN118155434B (en) 2024-07-23

Family

ID=91290381

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410558345.2A Active CN118155434B (en) 2024-05-08 2024-05-08 Remote visual dynamic indication control method and device

Country Status (1)

Country Link
CN (1) CN118155434B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115171408A (en) * 2022-07-08 2022-10-11 华侨大学 Traffic signal optimization control method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2916190B1 (en) * 2014-03-04 2019-05-08 Volvo Car Corporation Apparatus and method for prediction of time available for autonomous driving, in a vehicle having autonomous driving cap
CN108665714A (en) * 2017-09-28 2018-10-16 孟卫平 The general string control method of traffic signals and its system
CN117935562B (en) * 2024-03-22 2024-06-07 山东双百电子有限公司 Traffic light control method and system based on deep learning


Also Published As

Publication number Publication date
CN118155434A (en) 2024-06-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant