CN117173896B - Visual traffic control method based on ARM technology system - Google Patents

Visual traffic control method based on ARM technology system Download PDF

Info

Publication number
CN117173896B
CN117173896B CN202311448866.4A CN202311448866A CN117173896B CN 117173896 B CN117173896 B CN 117173896B CN 202311448866 A CN202311448866 A CN 202311448866A CN 117173896 B CN117173896 B CN 117173896B
Authority
CN
China
Prior art keywords
image data
road
road image
arm
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311448866.4A
Other languages
Chinese (zh)
Other versions
CN117173896A (en
Inventor
马怀清
孙秀丽
杜潜
卢凌
王泽方
邓广军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shucheng Technology Co ltd
Shenzhen Metro Group Co ltd
Original Assignee
Shucheng Technology Co ltd
Shenzhen Metro Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shucheng Technology Co ltd, Shenzhen Metro Group Co ltd filed Critical Shucheng Technology Co ltd
Priority to CN202311448866.4A priority Critical patent/CN117173896B/en
Publication of CN117173896A publication Critical patent/CN117173896A/en
Application granted granted Critical
Publication of CN117173896B publication Critical patent/CN117173896B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention discloses a vision traffic control method based on an ARM technology system, which relates to the technical field of ARM, and comprises the following steps: step 1: constructing an ARM image sensor network; step 2: the cloud verification system groups the road image data after receiving the road image data; step 3: when the cloud verification system judges that the traffic condition is an accident, the traffic signal lamp is controlled so as to prevent other vehicles from entering the road; when the cloud verification system judges that the traffic condition is bad weather, sending a broadcast signal to other vehicles which are covered by a circumference range with a set value within a radius by taking a road as a center, and reminding the other vehicles that the road is bad weather; when the cloud verification system judges that the traffic condition is congestion, the traffic signal lamp is controlled to reduce the traffic flow entering the road. The invention realizes the automation and the intellectualization of road traffic, and has the advantages of high efficiency and high accuracy.

Description

Visual traffic control method based on ARM technology system
Technical Field
The invention relates to the technical field of ARM, in particular to a vision passing control method based on an ARM technical system.
Background
With the rapid development of Intelligent Transportation Systems (ITS) and the acceleration of the urban process, it has become urgent to improve road traffic management efficiency and ensure pedestrian and vehicle safety. As an important branch of intelligent traffic, a vision traffic control system has been widely studied and applied to manage traffic flow using video image processing and analysis techniques. However, there are a number of problems and challenges with existing vision traffic control techniques, and there is an urgent need for more efficient, intelligent solutions.
In the prior art, aiming at road traffic control, manual work is often relied on. This is because how to judge the road condition is a complicated problem, and travel control is performed for different road conditions, which requires a lot of work. However, for road traffic control, the control can be performed by controlling traffic lights, and for judging the condition of the road, the control can be summarized or abstracted to a certain extent, the travel control of the road traffic can be simplified, and the intellectualization and automation of the road traffic appearance control can be realized to a certain extent. During the traveling of the vehicle, the environmental images on both sides of the vehicle can often reflect whether the vehicle is traveling, that is, the state of the vehicle can be judged, and the weather condition can be judged by the brightness and the ambiguity of the environmental images on both sides of the vehicle. Based on the two principles, road traffic control can be realized to a certain extent.
Disclosure of Invention
The invention aims to provide a visual traffic control method based on an ARM technology system, which realizes the automation and the intellectualization of road traffic and has the advantages of high efficiency and high accuracy.
In order to solve the technical problems, the invention provides a vision passing control method based on an ARM technical system, which comprises the following steps:
step 1: constructing an ARM image sensor network; the ARM image sensor network comprises a plurality of ARM image sensors which work independently, and each ARM image sensor is arranged on each vehicle running in the target area; each ARM image sensor is arranged at the top of the vehicle, acquires road image data of target areas positioned at two sides of the top of the vehicle in the running process of the vehicle in real time, performs preliminary analysis and identification on the acquired road image data to judge whether the road is abnormal or not, and obtains a preliminary judgment result, when the preliminary judgment result is abnormal, each ARM image sensor sends abnormal signals to other ARM image sensors in a set radius, and if the ARM image sensor receives at least 2 abnormal signals, the ARM image sensor sends the road image data acquired by the ARM image sensor to a cloud verification system;
Step 2: after receiving the road image data, the cloud verification system groups the road image data, and specifically comprises the following steps: classifying road image data acquired by ARM image sensors at positions where the ARM image sensors are located and the Euclidean distance is smaller than a set value into a group, wherein the same group of road image data corresponds to one road; image recognition judgment is carried out on the road image data of the same group, and the traffic condition of the road is judged; the traffic conditions include: accidents, congestion and bad weather;
step 3: when the cloud verification system judges that the traffic condition is an accident, the traffic signal lamp is controlled so as to prevent other vehicles from entering the road; when the cloud verification system judges that the traffic condition is bad weather, sending a broadcast signal to other vehicles which are covered by a circumference range with a set value within a radius by taking a road as a center, and reminding the other vehicles that the road is bad weather; when the cloud verification system judges that the traffic condition is congestion, the traffic signal lamp is controlled to reduce the traffic flow entering the road.
Further, in the step 1, each ARM image sensor acquires road image data of a target area of a vehicle in which the ARM image sensor is located in real time during a traveling process, and performs preliminary analysis and recognition on the acquired road image data to determine whether an abnormality occurs in a road, and the method for obtaining a preliminary determination result includes: each ARM image sensor compares the road image data of the target area of the vehicle in the running process, which is acquired in real time, with the road image data of the target area of the vehicle in the running process, which is acquired at the previous moment, and calculates the difference percentage; if in continuous At each moment, if the calculated difference percentage is within the set judging range, judging that the road is abnormal, and taking the abnormal road as a preliminary judging result; wherein (1)>The set value is a positive integer, and the range of the set value is +.>
Further, the method for calculating the difference percentage comprises the following steps: carrying out two-dimensional Fourier transform on the road image data at the current moment and the road image data at the last moment to obtain respective corresponding frequency domain images; in the frequency domain, calculating the energy distribution of each frequency domain image by calculating the autocorrelation function of the frequency domain image; after Euclidean distance of energy distribution of the two frequency domain images is calculated, performing two-dimensional Fourier inverse transformation on the calculation result to obtain an inverse transformation result; performing surface integration on the inverse transformation result on an infinite two-dimensional plane to obtain a total difference; dividing the total difference by the sum of the energy of the two frequency domain images to obtain a difference percentage; the sum of the energies of the two frequency domain images is equal to the sum of the results of the surface integration of the autocorrelation function of the respective frequency domain on an infinite two-dimensional plane.
Further, in step 2, image recognition and judgment are performed on the road image data of the same group, and the method for judging the traffic condition of the road comprises the following steps:
Step 2.1: calculating the average time of the road image data of the same group; calculating the average brightness of the road image data of the same group; calculating the difference percentage of a preset brightness standard value corresponding to the average brightness and the average time to obtain a brightness difference percentage result;
step 2.2: calculating the ambiguity of the road image data of the same group to obtain an ambiguity result;
step 2.3: acquiring the difference percentages of vehicles corresponding to the road image data of the same group in a set time period, and calculating the standard deviation of the difference percentages; if the standard deviation of the difference percentages is in the set threshold range, and the standard deviation of the difference percentages is closer to the lower limit of the threshold range, the probability of displacement of the vehicle in the set time period is judged to be smaller, and the traffic condition of the road is judged to be an accident; if the standard deviation of the difference percentages is in the set threshold range, and the standard deviation of the difference percentages is closer to the upper limit of the threshold range, the greater the possibility that the vehicle is displaced in the set time period is judged, but the displacement distance in the set time period is smaller than the set value, and the traffic condition of the road is judged to be congestion; if the standard deviation of the difference percentage is out of the set threshold range and exceeds the upper limit of the set threshold range, judging that the vehicle is displaced within the set time period, and judging that the traffic condition of the road is not jammed or accident if the displacement distance is larger than the set value within the set time period;
Step 2.4: and judging whether severe weather occurs according to the brightness difference percentage result and the ambiguity result.
Further, the method for calculating the average brightness of the road image data of the same group includes: calculating the brightness of each road image data in the same group of road image data, and thenAveraging the brightness of all the road image data in the same group of road image data to obtain the average brightness of the road image data in the same group; the method for calculating the brightness of each road image data includes: decomposing the road image data by using a multi-scale method, and respectively estimating illumination components of the image on different scales; for each scale of illumination componentCalculate a weight->The method comprises the steps of carrying out a first treatment on the surface of the Combining all the dimensionally weighted illumination components to form a comprehensive illumination estimate; calculating local brightness based on the comprehensive illumination estimation; based on the local luminance, a global luminance is calculated, and the global luminance is used as the luminance of the road image data.
Further, the illumination component of the road image data at each scale is calculated using the following formula:
wherein,is in the scale->An illumination component at; />Is in the scale->A gaussian filter at;is road image data; / >Is the relevant scale parameter, is the standard deviation of the Gaussian filter; />Representing a convolution operation;position coordinates of each pixel in the road image data;
weighting ofCalculated using the following formula:
wherein,is the scale->Is>Weights of (2); />Is a normalization factor; />A frequency domain transform representing an image; />Is a gaussian weight parameter of the frequency domain difference.
Further, the integrated illumination estimate is calculated using the following formula:
wherein,estimating for the comprehensive illumination; the local brightness is calculated using the following formula:
wherein,is a global brightness constant, and is a set value; />Is a constant for enhancing contrast and is a set value; />Is a hyperbolic tangent function; />Is an average value of pixels of the road image data; />Is a parameter for controlling the brightness adjustment sensitivity; and calculating global brightness as the brightness of the road image data by using the following formula:
wherein,is global brightness; />The width of the road image data; />The height of the road image data;is a parameter for determining local and global brightness weights and is a set value; />Is a power exponent parameter used to control the brightness sensitivity.
Further, the method for calculating the ambiguity of the road image data of the same group to obtain the ambiguity result comprises the following steps: road image data to be input Road image data divided into small blocks, expressed as +.>Wherein->Is an index of the block; for each small block of road image data +.>Estimating its point spread function +.>The method comprises the steps of carrying out a first treatment on the surface of the For each small block of road image data +.>Inverse filtering is performed to estimate the blurred road image data +.>The method comprises the steps of carrying out a first treatment on the surface of the All estimated blurred road image data +.>Combining into an integral restored road image data +.>The method comprises the steps of carrying out a first treatment on the surface of the Calculating blurred road image data +.>And restore road image data->Differential road image data between->The method comprises the steps of carrying out a first treatment on the surface of the For each differential road image data +.>Calculating entropy->The method comprises the steps of carrying out a first treatment on the surface of the Difference of entropy->As a degree of blur of road image data; and calculating the average value of the fuzziness of all the road image data in the same group of road image data, and taking the average value as a fuzziness result.
Further, the following formula is used for each small block of road image dataEstimating its point spread function +.>
In coordinates for the point spread function>Numerical values at; />Or->Standard deviation of the point spread functions; />Determining the spatial distribution range of the point spread function in the horizontal direction; />Determining the spatial distribution range of the point spread function in the vertical direction; />And->Coordinates of the point spread function respectively, representing local spatial positions of the point spread function; / >And->Frequency parameters respectively being cosine functions; />Determining the periodicity of cosine wave; />Determining the direction of cosine wave; the following formula is used for each small block of road image data +.>Inverse filtering is performed to estimate the blurred road image data +.>
And->The coordinates of the point spread functions, respectively.
Further, the blurred road image data is calculated using the following formulaAnd restore road image data->Differential road image data between->
The method comprises the steps of carrying out a first treatment on the surface of the For each differential road image data +.>Calculating entropy->
Wherein,and->Difference image +.>Is the width and height of (2); />Is a pixel +.>The difference value of (2) is in the difference image->Normalized probability of>Is a positive number for avoiding the situation where the probability is zero.
The visual traffic control method based on the ARM technology system has the following beneficial effects: the invention analyzes and evaluates the definition of the image by adopting the advanced point spread function, and can effectively evaluate the blurring degree of each small block of road image data so as to carry out inverse filtering treatment on the blurring image. In addition, the method also effectively quantifies the definition and the ambiguity of the image by calculating the difference between the image data. The method not only improves the accuracy of image processing, but also ensures the high efficiency of the processing process because of using an ARM-based system, especially in intelligent traffic scenes which need to rapidly process a large amount of real-time image data. Through image processing and analysis technology, the invention not only can detect and classify objects on roads, but also can analyze traffic flow, detect abnormal behaviors or accidents and provide the information to a traffic management center in real time. The real-time intelligent analysis capability greatly improves the response speed and precautions of the traffic management system, thereby reducing accidents and improving the road use efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a visual traffic control method based on an ARM technology system according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1: referring to fig. 1, the visual traffic control method based on the ARM technology system includes:
Step 1: constructing an ARM image sensor network; the ARM image sensor network comprises a plurality of ARM image sensors which work independently, and each ARM image sensor is arranged on each vehicle running in the target area; each ARM image sensor is arranged at the top of the vehicle, acquires road image data of target areas positioned at two sides of the top of the vehicle in the running process of the vehicle in real time, performs preliminary analysis and identification on the acquired road image data to judge whether the road is abnormal or not, and obtains a preliminary judgment result, when the preliminary judgment result is abnormal, each ARM image sensor sends abnormal signals to other ARM image sensors in a set radius, and if the ARM image sensor receives at least 2 abnormal signals, the ARM image sensor sends the road image data acquired by the ARM image sensor to a cloud verification system; ARM (AdvancedRISCMachines) technology is employed herein and generally refers to a processor based on the RISC (ReducedInstructionSetComputing) architecture. Due to its low power consumption, high performance, it is very suitable for use on mobile or embedded devices, which is particularly critical in vehicle-mounted systems, because it can process large amounts of image data in real time without burdening other electronic systems of the vehicle.
The ARM image sensor arranged on the top of each vehicle can capture road conditions in real time, and the layout ensures wide visual field, avoids shielding, and improves the accuracy of data and the reliability of a system.
When a sensor detects an anomaly (e.g., a road obstruction, sudden traffic stagnation, etc.), it communicates with nearby sensors. This "car-to-car" (V2V) communication increases the speed and accuracy of the system because it does not rely entirely on the processing of the central server.
Step 2: after receiving the road image data, the cloud verification system groups the road image data, and specifically comprises the following steps: classifying road image data acquired by ARM image sensors at positions where the ARM image sensors are located and the Euclidean distance is smaller than a set value into a group, wherein the same group of road image data corresponds to one road; image recognition judgment is carried out on the road image data of the same group, and the traffic condition of the road is judged; the traffic conditions include: accidents, congestion and bad weather; when the local sensor network detects a potential problem, the data is sent to a cloud verification system. The design utilizes the powerful processing capacity of cloud computing, and can rapidly analyze and verify a large amount of data. The cloud system groups data using Euclidean distance, which is a mathematical method used to quantify the straight line distance between two points. Here it is used to determine which data are relevant, i.e. which vehicles may face the same road conditions.
Step 3: when the cloud verification system judges that the traffic condition is an accident, the traffic signal lamp is controlled so as to prevent other vehicles from entering the road; when the cloud verification system judges that the traffic condition is bad weather, sending a broadcast signal to other vehicles which are covered by a circumference range with a set value within a radius by taking a road as a center, and reminding the other vehicles that the road is bad weather; when the cloud verification system judges that the traffic condition is congestion, the traffic signal lamp is controlled to reduce the traffic flow entering the road. According to analysis of the cloud system, different measures can be taken to respond to different traffic conditions. This flexibility is a great advantage of the system, making it more effective in alleviating various traffic problems.
For example, by controlling traffic lights to respond to an accident or congestion, this may reduce the flow of traffic into the problem area, or by broadcasting a system to alert the driver of bad weather, which helps prevent further accidents.
In practice, when a road is congested or an accident occurs, the speed of a vehicle traveling on the road is greatly reduced, so that the road image data captured by the automobile is less changed in a short time. When severe weather occurs, the brightness of the road image data is reduced, and the ambiguity is increased, so that whether severe weather occurs can be judged. Each ARM image sensor is arranged on the top of the vehicle, and can be used for discharging images on two sides of the vehicle without being blocked by other adjacent vehicles.
The ARM architecture is known for its design efficiency and is capable of operating at low voltages and low frequencies, which is particularly critical for battery powered on-board systems. Low power consumption means that the system can be operated for a long time without placing a significant burden on the battery of the vehicle, which is critical in long distance traveling. ARM processors provide sufficient computing power to handle complex image recognition and analysis tasks in real time. This is critical for acquiring road image data in real time and performing preliminary analysis and recognition to determine road conditions, as these tasks need to be completed in milliseconds to ensure driving safety.
Example 2: based on the above embodiment, in step 1, each ARM image sensor acquires road image data of a target area of a vehicle in which the ARM image sensor is located in real time during traveling, and performs preliminary analysis and recognition on the acquired road image data to determine whether an abnormality occurs in a road, and the method for obtaining a preliminary determination result includes: each ARM image sensor compares the road image data of the target area of the vehicle in the running process, which is acquired in real time, with the road image data of the target area of the vehicle in the running process, which is acquired at the previous moment, and calculates the difference percentage; if in continuous At each moment, if the calculated difference percentage is within the set judging range, judging that the road is abnormal, and taking the abnormal road as a preliminary judging result; wherein (1)>Is a set value, is a positive integer, and has a value range of +.>
The calculation of the percentage difference over M consecutive moments is based on a comparison of consecutive image frames. The basic assumption of this approach is that under normal road conditions, the image captured by the ARM image sensor on top of the vehicle will continuously change as the vehicle moves. These changes may be due to changes in the position of the vehicle, changes in surrounding vehicles, changes in road signs or road conditions, and the like. However, when some abnormal situation (such as traffic accident, road congestion or extreme weather) occurs, the change of these images may decrease or exhibit a specific pattern because the movement of the vehicle is slowed down or stopped, or because of visual difference due to weather effect.
Short-term image changes may be normal or occasional, such as rapid changes from shadows to sunlight, or short movements of the vehicle in front. However, if these changes remain consistent over consecutive M moments (e.g., M.gtoreq.3), then it is more likely that an actual roadway anomaly is indicated. By waiting for multiple moments, the system can distinguish between short sporadic events and persistent abnormal conditions, thereby reducing false alarms.
The continuous check of multiple moments provides additional data points that make the system more confident that the detected changes are not random or false positive. Multiple data points provide more context that helps verify the accuracy of the preliminary test results. Road conditions may change rapidly. Continuous detection can help the system adapt to these dynamic changes because it is not based on a single instantaneous snapshot, but rather on data trends over a period of time. For example, there may be only a small change in the relative position of the vehicle at the beginning of congestion formation. If only one moment is analyzed, such a change may not be detected. However, if multiple successive moments are analyzed and it is noted that such small changes persist, the system may identify the onset of congestion.
The setting of the "M" value provides flexibility allowing the system to be adjusted according to the particular application scenario or desired reaction rate. Higher values of M may be suitable for situations where a higher degree of certainty is required, while lower values of M may be suitable for situations where a faster response is required.
Similarly, the "discrimination range" can also be adjusted according to actual needs to adapt to different environmental conditions and anomaly types.
Example 3: on the basis of the above embodiment, the method for calculating the difference percentage includes: carrying out two-dimensional Fourier transform on the road image data at the current moment and the road image data at the last moment to obtain respective corresponding frequency domain images; in the frequency domain, calculating the energy distribution of each frequency domain image by calculating the autocorrelation function of the frequency domain image; after Euclidean distance of energy distribution of the two frequency domain images is calculated, performing two-dimensional Fourier inverse transformation on the calculation result to obtain an inverse transformation result; performing surface integration on the inverse transformation result on an infinite two-dimensional plane to obtain a total difference; dividing the total difference by the sum of the energy of the two frequency domain images to obtain a difference percentage; the sum of the energies of the two frequency domain images is equal to the sum of the results of the surface integration of the autocorrelation function of the respective frequency domain on an infinite two-dimensional plane.
In particular, continuously captured road image data should exhibit a degree of variation under normal driving conditions due to movement of the vehicle, relative position changes of other vehicles or pedestrians, road markings or environment, and the like. However, when a traffic accident or severe congestion occurs, the speed of the vehicle may be greatly slowed or stopped. In these cases, the continuous image data may show very little variation because the surrounding scene has little variation.
As previously described, the system calculates the percent difference by comparing successive road image data. In particular, it compares the energy distribution of the images, which is a measure of the frequency information obtained by fourier transformation. If the difference between consecutive images is small (i.e. the percentage difference is low), this may mean that the scene has little change, which may be due to the traffic being almost stopped. The discrimination range is a predetermined threshold for determining whether the difference percentage is small enough to indicate a possible abnormality. This threshold is determined based on experimental or historical data to reflect the degree of image change expected in normal traffic flow. If at consecutive M times (M is a predetermined positive integer, such as 3), the percentage difference continues to fall below this discrimination range, the system may consider this to be abnormal. This persistence is important because it helps to exclude occasional or brief scene changes.
This approach is effective because it takes advantage of one of the fundamental features of traffic anomalies (such as congestion and accidents): in these cases, the mobility of the vehicle is severely limited, resulting in a significant reduction in the change of road scene. By quantifying these changes and comparing them to preset thresholds, the system can automatically detect possible anomalies, triggering further responsive measures such as traffic signal control or alerting the driver.
In the prior art, speed sensors only provide information about the speed of the vehicle. If the speed sensor fails, or if the reading is inaccurate for some reason (such as bad weather conditions), the system may fail. In contrast, image data analysis does not rely on a single data point. Even if some of the image data is lost or unclear, the system can still understand the overall scene by analyzing the remaining image data.
Example 4: on the basis of the above embodiment, the method for performing image recognition and judgment for the same group of road image data in step 2, and judging the traffic condition of the road includes:
step 2.1: calculating the average time of the road image data of the same group; calculating the average brightness of the road image data of the same group; calculating the difference percentage of a preset brightness standard value corresponding to the average brightness and the average time to obtain a brightness difference percentage result;
step 2.2: calculating the ambiguity of the road image data of the same group to obtain an ambiguity result;
step 2.3: acquiring the difference percentages of vehicles corresponding to the road image data of the same group in a set time period, and calculating the standard deviation of the difference percentages; if the standard deviation of the difference percentages is in the set threshold range, and the standard deviation of the difference percentages is closer to the lower limit of the threshold range, the probability of displacement of the vehicle in the set time period is judged to be smaller, and the traffic condition of the road is judged to be an accident; if the standard deviation of the difference percentages is in the set threshold range, and the standard deviation of the difference percentages is closer to the upper limit of the threshold range, the greater the possibility that the vehicle is displaced in the set time period is judged, but the displacement distance in the set time period is smaller than the set value, and the traffic condition of the road is judged to be congestion; if the standard deviation of the difference percentage is out of the set threshold range and exceeds the upper limit of the set threshold range, judging that the vehicle is displaced within the set time period, and judging that the traffic condition of the road is not jammed or accident if the displacement distance is larger than the set value within the set time period;
Step 2.4: and judging whether severe weather occurs according to the brightness difference percentage result and the ambiguity result.
In particular, in severe weather conditions, such as heavy fog, heavy rain, snow, etc., the line of sight of the camera may be severely disturbed, resulting in blurred images. This blurring is not caused by the focal length problem of the camera, but is a physical vision obstruction. Raindrops or snowflakes can also cause blurring of the image, which is a direct effect of another weather condition. Weather conditions severely affect lighting conditions. For example, in overcast days or storm weather, the brightness of the sky may be greatly reduced. In contrast, sunlight is intense and may cause overexposure of the image. Certain weather phenomena, such as thunderstorms, may cause rapid changes in light conditions, which are also reflected in the brightness of the image. By analyzing these image characteristics, the system can infer whether the current weather conditions negatively affect the visual conditions. For example, if the image suddenly becomes very blurred and the brightness drops, the system may determine that there is currently a heavy fog or heavy rain.
Example 5: on the basis of the above embodiment, the method of calculating the average luminance of the road image data of the same group includes: calculating the brightness of each road image data in the same group of road image data, and averaging the brightness of all road image data in the same group of road image data to obtain the average brightness of the same group of road image data; the method for calculating the brightness of each road image data includes: decomposing the road image data by using a multi-scale method, and respectively estimating illumination components of the image on different scales; for each scale of illumination component Calculate a weight->The method comprises the steps of carrying out a first treatment on the surface of the Combining all the dimensionally weighted illumination components to form a comprehensive illumination estimate; calculating local brightness based on the comprehensive illumination estimation; based on the local luminance, a global luminance is calculated, and the global luminance is used as the luminance of the road image data.
In particular, multi-scale decomposition is an image processing technique that can decompose an image into features captured at different scales or resolutions. For luminance estimation, this means that different parts of the image (e.g. shadows, highlights, etc.) can be viewed independently and a more accurate luminance calculation is made. The multi-scale approach helps capture details of the image, which may be ignored at a single scale. This is important for understanding complex lighting conditions such as intense shadows and bright spots due to headlights, street lamps or natural light of the vehicle.
Not all scales contribute in the same way to the final luminance estimate. Some dimensions may be more focused on details, while other dimensions may be more focused on overall brightness. By calculating weights for the illumination components of each scale, the system can more finely adjust which portions contribute more to the overall brightness. The weights may be based on various factors such as the degree of variation of the illumination component on a particular scale, contrast, or other image characteristics. By combining all of the dimensionally weighted illumination components, the system is able to form a more comprehensive illumination estimate. This means that it does not only take into account an aspect of the image (e.g. bright areas or dark areas), but provides an overall image brightness profile. Local luminance may refer to the luminance of a particular region in an image, while global luminance is the average luminance of the entire image. By evaluating the local luminance first and then calculating the global luminance based on this information, the system can more accurately understand the luminance of the image, especially in the case of uneven illumination.
Example 6: on the basis of the above embodiment, the illumination component of the road image data at each scale is calculated using the following formula:
wherein,is in the scale->An illumination component at; />Is in the scale->A gaussian filter at;is road image data; />Is the relevant scale parameter, is the standard deviation of the Gaussian filter; />Representing a convolution operation;position coordinates of each pixel in the road image data;
weighting ofCalculated using the following formula:
wherein,is the scale->Is>Weights of (2); />Is a normalization factor; />A frequency domain transform representing an image; />Is a gaussian weight parameter of the frequency domain difference.
In particular, the method comprises the steps of,representing the fourier transform, a technique for analyzing signals in the frequency domain. The image is transformed from a spatial domain representation to a frequency domain representation by fourier transformation, in which each point represents the amplitude and phase of a different frequency component in the image. This is very helpful for analyzing structural information (e.g., edges, textures, etc.) of the image, as such information tends to be related to components of a particular frequency.
Representing the energy (i.e., the square of the norm) of the frequency domain image. In signal processing, the norm of a signal is typically related to the "energy" of the signal. Here it is used to measure the energy of the frequency domain representation of the illumination component at a particular scale. Is a Gaussian function in which +.>Is a parameter controlling the width of the gaussian. The function value decreases rapidly with increasing parameters (here the energy of the frequency domain image), which means that the corresponding weights +.>The energy of the image will be smaller in the frequency domain (i.e. the image is hereSmoother in scale). />Is a normalization factor that ensures that the sum of the weights on all scales is 1. This is a conventional probability normalization step that ensures that the weights form a probability distribution.
In summary, the weight calculation formula aims to assign a weight to each illumination component of a scale, which weight is based on the complexity (represented by the frequency domain energy) of the image structure information at that scale. If a frequency domain image of a scale has higher energy (meaning that the scale has more structural information, such as edges or textures), it will be given lower weight; conversely, if the energy of the frequency domain image is lower (meaning that the image is smoother at that scale), then higher weights will be given. The weight distribution mode can balance the contribution of the illumination component on each scale, so that more accurate illumination estimation is obtained.
A gaussian filter is a filter commonly used for image blurring and noise reduction, which is implemented by convolution operation with an image. In this scenario, gaussian filtering helps to simulate human eyes' perception of illumination at different scales, and different levels of luminance information can be extracted from the image. In image processing, convolution is typically used to apply a filter. In this case it is used to apply a gaussian filter to the input image, resulting in illumination components at each scale.
Example 7: on the basis of the above embodiment, the integrated illumination estimate is calculated using the following formula:
wherein,estimating for the comprehensive illumination; the local brightness is calculated using the following formula:
wherein,is a global brightness constant, and is a set value; />Is a constant for enhancing contrast and is a set value; />Is a hyperbolic tangent function; />Is an average value of pixels of the road image data; />Is a parameter for controlling the brightness adjustment sensitivity; and calculating global brightness as the brightness of the road image data by using the following formula:
wherein,is global brightness; />The width of the road image data; />The height of the road image data;is a parameter for determining local and global brightness weights and is a set value; />Is a power exponent parameter used to control the brightness sensitivity.
In particular, the method comprises the steps of,the difference between the luminance of the current pixel point and the average value of the luminance of the entire image is calculated. This difference is the basis for evaluating the local brightness, as it indicates whether a particular region is bright or dark relative to the brightness of the overall image. />The function is a nonlinear function which maps the brightness difference to a limited range +.>. This mapping process effectively compresses the range of luminance differences, ensuring that extreme luminance values do not have an excessive impact on the final local luminance assessment. This is particularly important to prevent outliers from affecting the overall evaluation. / >Here, a "threshold" is used, which determines how much brightness difference is required to be considered significant. In this way, it adjusts the sensitivity of the local luminance calculation to luminance variations. Multiplier->The direction of the luminance difference (i.e. brighter or darker) is not changed, but the magnitude of the luminance difference, i.e. the contribution to the local luminance, is adjusted. This reflects the effect of the difference in luminance of the different regions on the overall perceived luminance. />Representing a "baseline" luminance, i.e. a luminance level without any luminance differences. All local luminance calculations are based on this baseline luminance. By means of the formula, the local brightness of each pixel point in the image can be estimated, the relative difference between each region and the brightness of the whole image is reflected, and the nonlinearity and the contrast sensitivity of the brightness difference are considered. The method is particularly suitable for complex illumination conditions, and needs to accurately know the brightness levels of different areas in the imageThe scene is used.
In the formulaIs a trade-off expression which incorporates local brightnessAnd a reference global brightness->. Here, a->Is a weight parameter that determines the degree of contribution of each of the local luminance and the global reference luminance when calculating the global luminance. By adjusting- >It may be more preferable to consider the overall brightness level of the image (lower +.>Value), or more of the local brightness variation in the image (higher +.>Values). Whole expressionWeighting is performed by a nonlinear function (exponential function). Here->Is an exponential parameter that adjusts the extent to which the luminance value contributes to the final result. When->When the brightness of the whole brightness is higher, the brighter area can have larger contribution to the whole brightness; when->Darker areas will contribute more to the global brightness when this is the case. This helps to be dependent on the particularThe sensitivity of the brightness is adjusted by the application requirement. />Is a normalization factor, and ensures that the calculation result of the global brightness falls within a consistent range regardless of the size of the image. This is particularly important for analyzing image data of different resolutions and sizes. />Representing a double summation that accumulates luminance information over all pixels of the image. This ensures that the global luminance is based on information of all areas in the image, providing an overall, comprehensive luminance assessment. In general, this global luminance calculation formula provides a comprehensive assessment of the overall luminance level of an image by integrating the local luminance information of the image and by nonlinear weighting and normalization. The method is particularly important for brightness analysis and image processing under different illumination conditions, and can be applied to various scenes, such as image enhancement, computer vision, automatic driving and the like.
Example 8: on the basis of the above embodiment, the method for calculating the ambiguity of the road image data of the same group to obtain the ambiguity result includes: road image data to be inputRoad image data divided into small blocks, expressed as +.>Wherein->Is an index of the block; for each small block of road image data +.>Estimating its point spread function +.>The method comprises the steps of carrying out a first treatment on the surface of the For each small block of road image data +.>Inverse filtering is performed to estimate the blurred road image data +.>The method comprises the steps of carrying out a first treatment on the surface of the All estimated blurred road image data +.>Combining into an integral restored road image data +.>The method comprises the steps of carrying out a first treatment on the surface of the Calculating blurred road image data +.>And restore road image data->Differential road image data between->The method comprises the steps of carrying out a first treatment on the surface of the For each differential road image data +.>Calculating entropy->The method comprises the steps of carrying out a first treatment on the surface of the Difference of entropy->As a degree of blur of road image data; and calculating the average value of the fuzziness of all the road image data in the same group of road image data, and taking the average value as a fuzziness result.
Specifically, first, the method starts with inputting road image dataDivided into a plurality of small blocks, each small block being denoted +.>Here +.>Is an index of the block. This segmentation strategy allows the system to analyze different areas of the image independently, which is necessary because different parts of the image may have different blur characteristics due to camera focus, moving objects or different weather conditions. Next, for each image block +. >The system will estimate a Point Spread Function (PSF), noted +.>. The PSF is a function that describes how the imaging system responds to a single point light source, and is determined by the optical properties of the system (e.g., lens, aperture, and sensor characteristics) and any additional blurring factors (e.g., motion blur or atmospheric disturbances). In this context, the PSF provides a mathematical model describing the blur introduced during imaging, which is the core of the next inverse filtering step. Inverse filtering is an image processing technique that attempts to reverse the blurring effect by using an estimated PSF. In particular, it relates to the use of +.>For each image block->Processing to produce an estimated unblurred image block->. While this step theoretically attempts to recover the original, unblurred scene, in practice perfect inverse filtering is often not possible, especially when the PSF is unknown or only partially estimated. Thus, this produces a "best guess" image, where some blurring may still exist. Subsequently, these inverse filtered image blocks +.>Is recombined to form a complete, attempted restoration image >. This step is necessary because it allows the system to evaluate the overall sharpness of the entire scene, not just based on separate small areas. In practice, it is often more important for a driver or an autopilot system to understand an entire scene than to view a single local area. In order to quantify the restoration effect and finally evaluate the image blur, the original blurred image block +.>And corresponding restored image block->Differences between them. Such a difference is defined as some form of distance or difference between two image blocks, and may typically be based on pixel intensity values. Then, this difference image data +.>Is used to calculate the so-called "entropy", denoted +.>. Entropy is an information-bearing indicator that measures uncertainty or complexity of random data. Here it is used as an intuitive measure representing the "uncertainty" or "distortion level" of the image restoration. High entropy values indicate a large uncertainty or complexity, which in this example corresponds to a high ambiguity. Thus, by comparing the entropy of the original image and the restored image, the system can quantify the degree of blurring. Finally, by calculating the entropy difference of all image blocks +. >And obtaining the average value of the total ambiguity index of the whole image set. This final blur score provides a useful, easily understood indicator to the application that indicates the overall sharpness or blur of the image.
Example 9: on the basis of the above embodiment, the following formula is used for each small block of road image dataEstimating its point spread function +.>
;/>
In coordinates for the point spread function>Numerical values at; />Or->Standard deviation of the point spread functions; />Determining the spatial distribution range of the point spread function in the horizontal direction; />Determining the spatial distribution range of the point spread function in the vertical direction; />And->Coordinates of the point spread function respectively, representing local spatial positions of the point spread function; />And->Frequency parameters of cosine functions respectively; />Determining the periodicity of cosine wave;determining the direction of cosine wave; the following formula is used for each small block of road image data +.>Inverse filtering is performed to estimate the blurred road image data +.>
And->The coordinates of the point spread functions, respectively.
Specifically, the mathematical form of the PSF used in this embodiment is the product of a two-dimensional Gaussian function and a two-dimensional cosine function. This option takes into account two main factors in the camera imaging process: firstly, the blur caused by the optical system of the camera, and secondly, the possible motion blur. The Gaussian portion captures the blur due to natural diffusion of light, the width of which is defined by And->These parameters determine the extent of diffusion of the point spread function in the horizontal and vertical directions, respectively. The cosine part may then represent a periodic component of the motion blur, which is commonOften due to movement of the camera or subject. Parameter->And->The frequency of the cosine wave is controlled, which affects the characteristics of the ambiguity, such as direction and speed.
The inverse filtering process is here embodied as a convolution operation which takes into account the effect of the PSF over the whole image block. Specifically, raw image dataIs PSF->Convolved, and then the process is normalized to ensure preservation of image brightness. This operation essentially attempts to "reverse" the effect of the PSF, i.e., to "deblur" the image by convolving with the estimated PSF.
However, it is worth noting that the inverse filtering is an idealized process, and in practice the original scene may not be fully restored. The inverse filtered image may still contain some distortion due to noise and other factors (e.g., imperfect estimation of the PSF). Nevertheless, it generally enables significant improvement of image quality and provides more reliable input for subsequent image analysis tasks (e.g., object detection or navigation).
The point spread function mentioned in example 9 is a special function, which considers two main elements: one is gaussian blur (blur due to optical imperfections) and the other is periodic blur due to motion (denoted cosine term). Both types of blur are very common in actual imaging situations, especially in scenes taken by moving vehicles or cameras.
Gaussian blur is a common type of blur, typically due to imperfections in the optical system (e.g. small particles in the lens, or irregular shapes of the lens, etc.). This blurring causes reasonThe desired point source appears as a diffuse point on the imaging plane, the shape of which follows a gaussian distribution. The width of this distribution is defined by the standard deviationAnd->These parameters reflect the extent of light diffusion across the sensor. Thus, a wider gaussian implies more severe blurring. In addition, the PSF in an embodiment also includes a cosine term that accounts for blurring due to relative motion (camera and/or objects in the scene). This blurring is not random but has a certain directionality and periodicity, which can be expressed mathematically by a cosine function.
Example 10: on the basis of the above embodiment, the blurred road image data is calculated using the following formulaAnd restore road image data->Differential road image data between->
The method comprises the steps of carrying out a first treatment on the surface of the For each differential road image data +.>Calculating entropy->
Wherein,and->Difference image +.>Is the width and height of (2); />Is a pixel +.>The difference value of (2) is in the difference image->Normalized probability of>Is a positive number for avoiding the situation where the probability is zero.
Specifically, first, this embodiment considers the original blurred road image dataAnd restored image data obtained by some algorithmic processing (e.g., inverse filtering using a point spread function)>. The difference between the two images is calculated by squaring the difference between the two. This is done to get a non-negative value, emphasizing the existence of any differences and giving more weight to larger differences. The result is difference image dataWhich means in the same position +.>Square of the difference between pixels at that location. Entropy is a concept in information theory that is used to quantify the uncertainty or randomness of a system.Here it is used to quantify the uncertainty or complexity in the difference image data. The entropy is calculated by first determining the probability of each pixel value occurring. This is by normalizing the difference image Is realized by the pixel values of +.>This means at the position +.>The probability of occurrence of pixel values of (c). Then, entropy is calculated using a standard formula of entropy, wherein +.>Is a small positive number for avoiding the problem of zero defined in logarithmic operations, which ensures the stability of the calculation and the continuity of the values. This entropy value->Representing the complexity or uncertainty of the difference image data. In general, higher entropy represents higher complexity, which in this example may be associated with higher blur of the image. In this way, the method not only provides a way to quantify image blur, but also allows the effectiveness of the deblurring algorithm to be assessed by analyzing the differences between the original image and the processed image. If a deblurring algorithm is very efficient, then the entropy of the difference image is expected to be low, because the restored image +.>Should be very close to the original blurred image. Conversely, if the algorithm is not effective, the entropy will be high, indicating that there is a large uncertainty or discrepancy. />
The present invention has been described in detail above. The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present invention and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention and these modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.

Claims (10)

1. The vision traffic control method based on the ARM technology system is characterized by comprising the following steps of:
step 1: constructing an ARM image sensor network; the ARM image sensor network comprises a plurality of ARM image sensors which work independently, and each ARM image sensor is arranged on each vehicle running in the target area; each ARM image sensor is arranged at the top of the vehicle, acquires road image data of target areas positioned at two sides of the top of the vehicle in the running process of the vehicle in real time, performs preliminary analysis and identification on the acquired road image data to judge whether the road is abnormal or not, and obtains a preliminary judgment result, when the preliminary judgment result is abnormal, each ARM image sensor sends abnormal signals to other ARM image sensors in a set radius, and if the ARM image sensor receives at least 2 abnormal signals, the ARM image sensor sends the road image data acquired by the ARM image sensor to a cloud verification system;
step 2: after receiving the road image data, the cloud verification system groups the road image data, and specifically comprises the following steps: classifying road image data acquired by ARM image sensors at positions where the ARM image sensors are located and the Euclidean distance is smaller than a set value into a group, wherein the same group of road image data corresponds to one road; image recognition judgment is carried out on the road image data of the same group, and the traffic condition of the road is judged; the traffic conditions include: accidents, congestion and bad weather;
Step 3: when the cloud verification system judges that the traffic condition is an accident, the traffic signal lamp is controlled so as to prevent other vehicles from entering the road; when the cloud verification system judges that the traffic condition is bad weather, sending a broadcast signal to other vehicles which are covered by a circumference range with a set value within a radius by taking a road as a center, and reminding the other vehicles that the road is bad weather; and when the cloud verification system judges that the traffic condition is congestion, controlling the traffic signal lamp so as to reduce the traffic flow entering the road.
2. The vision passing control method based on ARM technology system as set forth in claim 1, wherein in the step 1, each ARM image sensor acquires road image data of a target area of a vehicle in which the sensor is located in real time during traveling, and performs preliminary analysis and recognition on the acquired road image data to determine whether an abnormality occurs in a road, and the method for obtaining a preliminary determination result includes: each ARM image sensor compares the road image data of the target area of the vehicle in the running process, which is acquired in real time, with the road image data of the target area of the vehicle in the running process, which is acquired at the previous moment, and calculates the difference percentage; if in continuous At each moment, if the calculated difference percentage is within the set judging range, judging that the road is abnormal, and taking the abnormal road as a preliminary judging result; wherein (1)>Is a set value, is a positive integer, and has a value range of +.>
3. The vision traffic control method based on the ARM technology system as claimed in claim 2, wherein the method for calculating the difference percentage comprises: carrying out two-dimensional Fourier transform on the road image data at the current moment and the road image data at the last moment to obtain respective corresponding frequency domain images; in the frequency domain, calculating the energy distribution of each frequency domain image by calculating the autocorrelation function of the frequency domain image; after Euclidean distance of energy distribution of the two frequency domain images is calculated, performing two-dimensional Fourier inverse transformation on the calculation result to obtain an inverse transformation result; performing surface integration on the inverse transformation result on an infinite two-dimensional plane to obtain a total difference; dividing the total difference by the sum of the energy of the two frequency domain images to obtain a difference percentage; the sum of the energies of the two frequency domain images is equal to the sum of the results of the surface integration of the autocorrelation function of the respective frequency domain on an infinite two-dimensional plane.
4. The visual traffic control method based on ARM technology system as claimed in claim 3, wherein the method for judging the traffic condition of the road by performing image recognition judgment for the same group of road image data in the step 2 comprises the following steps:
Step 2.1: calculating the average time of the road image data of the same group; calculating the average brightness of the road image data of the same group; calculating the difference percentage of a preset brightness standard value corresponding to the average brightness and the average time to obtain a brightness difference percentage result;
step 2.2: calculating the ambiguity of the road image data of the same group to obtain an ambiguity result;
step 2.3: acquiring the difference percentages of vehicles corresponding to the road image data of the same group in a set time period, and calculating the standard deviation of the difference percentages; if the standard deviation of the difference percentages is in the set threshold range, and the standard deviation of the difference percentages is closer to the lower limit of the threshold range, the probability of displacement of the vehicle in the set time period is judged to be smaller, and the traffic condition of the road is judged to be an accident; if the standard deviation of the difference percentages is in the set threshold range, and the standard deviation of the difference percentages is closer to the upper limit of the threshold range, the greater the possibility that the vehicle is displaced in the set time period is judged, but the displacement distance in the set time period is smaller than the set value, and the traffic condition of the road is judged to be congestion; if the standard deviation of the difference percentage is out of the set threshold range and exceeds the upper limit of the set threshold range, judging that the vehicle is displaced within the set time period, and if the displacement distance is larger than the set value within the set time period, judging that the traffic condition of the road is not jammed or accident;
Step 2.4: and judging whether severe weather occurs according to the brightness difference percentage result and the ambiguity result.
5. The ARM technology system-based vision passing control method of claim 4, wherein the method of calculating the average brightness of the road image data of the same group comprises: calculating the brightness of each road image data in the same group of road image data, and averaging the brightness of all road image data in the same group of road image data to obtain the average brightness of the same group of road image data; the method for calculating the brightness of each road image data includes: decomposing the road image data by using a multi-scale method, and respectively estimating illumination components of the image on different scales; for each scale of illumination componentCalculate a weight->The method comprises the steps of carrying out a first treatment on the surface of the Combining all the dimensionally weighted illumination components to form a comprehensive illumination estimate; calculating local brightness based on the comprehensive illumination estimation; based on the local luminance, a global luminance is calculated, and the global luminance is used as the luminance of the road image data.
6. The ARM technology system-based vision passing control method of claim 5, wherein the illumination component of the road image data at each scale is calculated using the following formula:
Wherein,is in the scale->An illumination component at; />Is in the scale->A gaussian filter at; />Is road image data; />Is the relevant scale parameter, is the standard deviation of the Gaussian filter; />Representing a convolution operation; />Position coordinates of each pixel in the road image data;
weighting ofCalculated using the following formula:
wherein,is the scale->Is>Weights of (2); />Is a normalization factor; />A frequency domain transform representing an image; />Is a gaussian weight parameter of the frequency domain difference.
7. The ARM technology system-based vision traffic control method of claim 5, wherein the integrated illumination estimate is calculated using the following formula:
wherein,estimating for the comprehensive illumination; the local brightness is calculated using the following formula:
wherein,is a global brightness constant, and is a set value; />Is a constant for enhancing contrast and is a set value; />Is a hyperbolic tangent function; />Is an average value of pixels of the road image data; />Is a parameter for controlling the brightness adjustment sensitivity; and calculating global brightness as the brightness of the road image data by using the following formula:
wherein,is global brightness; />The width of the road image data; / >The height of the road image data; />Is a parameter for determining local and global brightness weights and is a set value; />Is a power exponent parameter used to control the brightness sensitivity.
8. The method for controlling visual traffic based on ARM technology system as defined in claim 7, wherein the method for calculating the ambiguity of the road image data of the same group to obtain the ambiguity result comprises: road image data to be inputRoad image data divided into small blocks, expressed as +.>Wherein->Is a blockIndexing; for each small block of road image data +.>Estimating its point spread function +.>The method comprises the steps of carrying out a first treatment on the surface of the For each small block of road image data +.>Inverse filtering is performed to estimate the blurred road image data +.>The method comprises the steps of carrying out a first treatment on the surface of the All estimated blurred road image data +.>Combining into an integral restored road image data +.>The method comprises the steps of carrying out a first treatment on the surface of the Calculating blurred road image data +.>And restore road image data->Differential road image data between->The method comprises the steps of carrying out a first treatment on the surface of the For each differential road image data +.>Calculating entropy->The method comprises the steps of carrying out a first treatment on the surface of the Difference of entropy->Modulo as road image dataPasting degree; and calculating the average value of the fuzziness of all the road image data in the same group of road image data, and taking the average value as a fuzziness result.
9. The ARM technology system-based vision passing control method of claim 8, wherein the following formula is used for each small block of road image dataEstimating its point spread function +.>
In coordinates for the point spread function>Numerical values at; />Or->Standard deviation of the point spread functions; />Determining the spatial distribution range of the point spread function in the horizontal direction; />Determining the spatial distribution range of the point spread function in the vertical direction; />And->Coordinates of the point spread function respectively, representing local spatial positions of the point spread function; />And->Frequency parameters of cosine functions respectively; />Determining the periodicity of cosine wave;determining the direction of cosine wave; the following formula is used for each small block of road image data +.>Inverse filtering is performed to estimate the blurred road image data +.>
And->The coordinates of the point spread functions, respectively.
10. The ARM technology system-based vision passing control method of claim 9, wherein the blurred road image data is calculated using the following formulaAnd restore road image data->Differential road image data between->
The method comprises the steps of carrying out a first treatment on the surface of the For each differential road image data +. >Calculating entropy->
Wherein,and->Difference image +.>Is the width and height of (2); />Is a pixel +.>The difference value of (2) is in the difference image->Is normalized by (A)Probability of transformation (I)>Is a positive number for avoiding the situation where the probability is zero.
CN202311448866.4A 2023-11-02 2023-11-02 Visual traffic control method based on ARM technology system Active CN117173896B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311448866.4A CN117173896B (en) 2023-11-02 2023-11-02 Visual traffic control method based on ARM technology system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311448866.4A CN117173896B (en) 2023-11-02 2023-11-02 Visual traffic control method based on ARM technology system

Publications (2)

Publication Number Publication Date
CN117173896A CN117173896A (en) 2023-12-05
CN117173896B true CN117173896B (en) 2024-01-16

Family

ID=88945387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311448866.4A Active CN117173896B (en) 2023-11-02 2023-11-02 Visual traffic control method based on ARM technology system

Country Status (1)

Country Link
CN (1) CN117173896B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100119476A (en) * 2009-04-30 2010-11-09 (주) 서돌 전자통신 An outomatic sensing system for traffic accident and method thereof
CN102354449A (en) * 2011-10-09 2012-02-15 昆山市工业技术研究院有限责任公司 Internet of vehicles-based method for realizing image information sharing and device and system thereof
CN102568240A (en) * 2010-11-15 2012-07-11 株式会社电装 Traffic Information System, Traffic Information Acquisition Device And Traffic Information Supply Device
CN106412048A (en) * 2016-09-26 2017-02-15 北京东土科技股份有限公司 Information processing method and apparatus based on intelligent traffic cloud control system
CN107958605A (en) * 2017-12-25 2018-04-24 重庆冀繁科技发展有限公司 Road condition information acquisition method
CN108922188A (en) * 2018-07-24 2018-11-30 河北德冠隆电子科技有限公司 The four-dimensional outdoor scene traffic of radar tracking positioning perceives early warning monitoring management system
CN111311958A (en) * 2018-12-11 2020-06-19 上海博泰悦臻电子设备制造有限公司 Turning road condition reminding method and system based on V2X technology and V2X server
CN111681435A (en) * 2020-03-27 2020-09-18 北京世纪互联宽带数据中心有限公司 Traffic control method and device based on edge calculation, electronic equipment and storage medium
CN111915896A (en) * 2020-08-17 2020-11-10 重庆电子工程职业学院 Intelligent traffic system and method based on Internet of things
KR102275432B1 (en) * 2020-10-13 2021-07-08 한국클라우드컴퓨팅연구조합 Real-time road information-based content collection and processing generation method and sales and advertisement method using the same
CN113330495A (en) * 2019-01-24 2021-08-31 御眼视觉技术有限公司 Clustering event information for vehicle navigation
WO2021232387A1 (en) * 2020-05-22 2021-11-25 南京云创大数据科技股份有限公司 Multifunctional intelligent signal control system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017030348A1 (en) * 2015-08-14 2017-02-23 엘지전자 주식회사 Method for transmitting and receiving v2x message in wireless communication system, and an apparatus for same
CN108932855A (en) * 2017-05-22 2018-12-04 阿里巴巴集团控股有限公司 Road traffic control system, method and electronic equipment
CA3099840A1 (en) * 2018-05-16 2019-11-21 NoTraffic Ltd. System and method for using v2x and sensor data
KR101969064B1 (en) * 2018-10-24 2019-04-15 주식회사 블루시그널 Method of predicting road congestion based on deep learning and controlling signal and server performing the same

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100119476A (en) * 2009-04-30 2010-11-09 (주) 서돌 전자통신 An outomatic sensing system for traffic accident and method thereof
CN102568240A (en) * 2010-11-15 2012-07-11 株式会社电装 Traffic Information System, Traffic Information Acquisition Device And Traffic Information Supply Device
CN102354449A (en) * 2011-10-09 2012-02-15 昆山市工业技术研究院有限责任公司 Internet of vehicles-based method for realizing image information sharing and device and system thereof
CN106412048A (en) * 2016-09-26 2017-02-15 北京东土科技股份有限公司 Information processing method and apparatus based on intelligent traffic cloud control system
CN107958605A (en) * 2017-12-25 2018-04-24 重庆冀繁科技发展有限公司 Road condition information acquisition method
CN108922188A (en) * 2018-07-24 2018-11-30 河北德冠隆电子科技有限公司 The four-dimensional outdoor scene traffic of radar tracking positioning perceives early warning monitoring management system
CN111311958A (en) * 2018-12-11 2020-06-19 上海博泰悦臻电子设备制造有限公司 Turning road condition reminding method and system based on V2X technology and V2X server
CN113330495A (en) * 2019-01-24 2021-08-31 御眼视觉技术有限公司 Clustering event information for vehicle navigation
CN111681435A (en) * 2020-03-27 2020-09-18 北京世纪互联宽带数据中心有限公司 Traffic control method and device based on edge calculation, electronic equipment and storage medium
WO2021232387A1 (en) * 2020-05-22 2021-11-25 南京云创大数据科技股份有限公司 Multifunctional intelligent signal control system
CN111915896A (en) * 2020-08-17 2020-11-10 重庆电子工程职业学院 Intelligent traffic system and method based on Internet of things
KR102275432B1 (en) * 2020-10-13 2021-07-08 한국클라우드컴퓨팅연구조합 Real-time road information-based content collection and processing generation method and sales and advertisement method using the same

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
视频实时传输在车辆自组网中的性能评估;周欢;徐守志;余钊;杨晓梅;;三峡大学学报(自然科学版)(第05期);第 73-77页 *

Also Published As

Publication number Publication date
CN117173896A (en) 2023-12-05

Similar Documents

Publication Publication Date Title
CN107202983B (en) Automatic braking method and system based on image recognition and millimeter wave radar fusion
US9384401B2 (en) Method for fog detection
US7970178B2 (en) Visibility range estimation method and system
Negru et al. Image based fog detection and visibility estimation for driving assistance systems
CN107506760B (en) Traffic signal detection method and system based on GPS positioning and visual image processing
CN109703460B (en) Multi-camera complex scene self-adaptive vehicle collision early warning device and early warning method
US8908038B2 (en) Vehicle detection device and vehicle detection method
KR100459476B1 (en) Apparatus and method for queue length of vehicle to measure
CN110544211B (en) Method, system, terminal and storage medium for detecting lens attached object
US20100141806A1 (en) Moving Object Noise Elimination Processing Device and Moving Object Noise Elimination Processing Program
CN101286239A (en) Aerial shooting traffic video frequency vehicle rapid checking method
CN107909012B (en) Real-time vehicle tracking detection method and device based on disparity map
CN111860120A (en) Automatic shielding detection method and device for vehicle-mounted camera
CN117094914B (en) Smart city road monitoring system based on computer vision
CN109919062A (en) A kind of road scene weather recognition methods based on characteristic quantity fusion
CN112287861A (en) Road information enhancement and driving early warning method based on night environment perception
CN107766847B (en) Lane line detection method and device
CN114119955A (en) Method and device for detecting potential dangerous target
JPH09282452A (en) Monitor
Chen et al. A novel lane departure warning system for improving road safety
CN111332306A (en) Traffic road perception auxiliary driving early warning device based on machine vision
CN117173896B (en) Visual traffic control method based on ARM technology system
JP7264428B2 (en) Road sign recognition device and its program
CN110544232A (en) detection system, terminal and storage medium for lens attached object
KR100801989B1 (en) Recognition system for registration number plate and pre-processor and method therefor

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant