CN117094914A - Smart city road monitoring system based on computer vision - Google Patents

Publication number: CN117094914A (granted as CN117094914B)
Application number: CN202311346643.7A
Authority: CN · Original language: Chinese (zh)
Applicant / Current assignee: Guangdong Shenchuang Photoelectric Technology Co., Ltd.
Inventors: 曾二林 (Zeng Erlin), 罗达祥 (Luo Daxiang), 陈斌 (Chen Bin)
Legal status: Granted; Active

Classifications

    • G06T 7/11 — Physics; Computing; Image data processing or generation; Image analysis; Segmentation / edge detection; Region-based segmentation
    • G06V 10/30 — Physics; Computing; Image or video recognition or understanding; Image preprocessing; Noise filtering
    • G06T 2207/10024 — Indexing scheme for image analysis or enhancement; Image acquisition modality; Color image
    • G06T 2207/30232 — Indexing scheme for image analysis or enhancement; Subject / context of image processing; Surveillance

Abstract

The invention relates to the technical field of image data processing and provides a smart city road monitoring system based on computer vision, which comprises: acquiring a fog-containing gray-scale map; acquiring a set of lane potential pixel points according to gray-level co-occurrence matrices in different directions; acquiring an edge difference coefficient for each pixel point according to the symmetry of the image information in a neighborhood window taken around the pixel point; obtaining a local kurtosis heterogeneity index according to texture differences between the central region of a pixel point and different positions in its search region; acquiring a vehicle judgment coefficient from the local kurtosis heterogeneity index; acquiring the defogging window size from the vehicle judgment coefficient and the depth value; and acquiring the dark-channel defogged image corresponding to the fog-containing gray-scale map according to the defogging window size. The invention adaptively obtains the window size of the dark channel defogging algorithm from the information differences of different objects on the urban road in the fog-containing gray-scale map, avoiding the adverse effect of a fixed-size window on the defogging quality at different positions on the road.

Description

Smart city road monitoring system based on computer vision
Technical Field
The invention relates to the technical field of image data processing, in particular to a smart city road monitoring system based on computer vision.
Background
A smart city road monitoring system detects, manages, and controls urban road traffic in real time using advanced information and network communication technology. It collects and processes road-traffic data mainly through high-definition cameras and sensor equipment installed along roads, providing functions such as real-time vehicle monitoring, violation detection, and accident early warning, and it relies on the high definition and reliability of the image information acquired by the cameras. In an actual environment, however, illumination conditions, weather, and other factors greatly degrade the quality of the data the cameras collect. In particular, if road images captured in heavy fog are analyzed directly, the analysis is prone to large errors or even wrong results.
At present, defogging is performed mainly on the foggy image itself. Traditional image-enhancement methods easily cause image distortion, while atmospheric-scattering-model methods based on a physical model defog well but require initial parameters set from empirical values, so whether those values are chosen properly strongly affects the result. In the dark channel prior defogging algorithm, every pixel of the dark-channel image uses a window of fixed size, corresponding to a filter radius of the same size; the fixed window size easily causes image color distortion, yields a poor defogging effect in regions heavily affected by fog, and reduces the reliability of the smart city road monitoring system in monitoring vehicles on the road.
Disclosure of Invention
The invention provides a smart city road monitoring system based on computer vision to solve the problem that the fixed window size and filter radius of the traditional dark channel defogging algorithm easily degrade the defogging effect in road images, and adopts the following technical scheme:
one embodiment of the invention is a computer vision-based intelligent urban road monitoring system, which comprises the following modules:
the data acquisition module is used for converting the road foggy image acquired by the image acquisition device into a foggy gray level image;
the lane region extraction module acquires a lane potential pixel point set according to gray level co-occurrence matrixes in different directions on the foggy gray level map; acquiring a lane fitting line based on a lane potential pixel point set by using a fitting algorithm; acquiring an edge difference coefficient of each pixel point on the lane fitting line according to symmetry characteristics of image information in a neighborhood window taken by each pixel point on the lane fitting line; marking pixel points with edge difference coefficients larger than a preset threshold value, taking a closed operation result of a fitting straight line obtained after the marked pixel points are deleted in a lane potential pixel point set as a real lane line, and obtaining a lane region in the fog-containing gray scale map according to fuzzy characteristics of all pixel points on the real lane line;
the window scale determining module is used for obtaining a local kurtosis heterogeneity index of each pixel point in the lane region according to texture differences between a central region of each pixel point in the lane region and different positions in a search region taken by each pixel point; acquiring a real boundary set according to the local kurtosis heterogeneity indexes of all pixel points in the lane region; obtaining a depth value of each pixel point on the fog gray scale map by using a monocular depth estimation algorithm; obtaining the defogging window size corresponding to each pixel point according to the rectangular fitting result and the depth value of the real boundary set;
the road monitoring module acquires a dark channel defogging image corresponding to the foggy gray level map according to the defogging window size corresponding to each pixel point; and realizing real-time monitoring of vehicles on the smart city road according to the defogging images of the dark channels.
Preferably, the method for acquiring the set of potential pixels of the lane according to the gray level co-occurrence matrixes in different directions on the foggy gray level map comprises the following steps:
acquiring the average value of element values of a gray level co-occurrence matrix in the same position of a gray level graph in a preset number of directions, and taking a matrix formed by the average value according to the positions of elements in the gray level co-occurrence matrix in any direction as an average gray level co-occurrence matrix of the gray level graph;
acquiring an arrangement result of element values in the average gray level co-occurrence matrix according to a descending order, and taking a set formed by gray values corresponding to a preset number of elements in the arrangement result as a screening gray value set;
and marking pixel points with gray values in a screening gray value set in the fog-containing gray map as lane potential pixel points, and taking a set formed by all the lane potential pixel points as a lane potential pixel point set.
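The screening step above can be sketched in a few lines of Python. This is an illustrative sketch only — the function names, the unquantized toy image, and the per-pair averaging are assumptions, not part of the patent text:

```python
# Sketch of the lane-potential-pixel screening: average the GLCMs of four
# directions, rank entries in descending order, keep pixels whose gray value
# appears in the top-A entries. Gray levels are assumed already quantized.
from collections import Counter

def cooccurrence(img, dr, dc):
    """Count co-occurring gray-level pairs at offset (dr, dc)."""
    h, w = len(img), len(img[0])
    counts = Counter()
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < h and 0 <= c2 < w:
                counts[(img[r][c], img[r2][c2])] += 1
    return counts

def lane_potential_pixels(img, top_a=10):
    """Return (row, col) of pixels whose gray value occurs in one of the
    top-A entries of the average gray-level co-occurrence matrix."""
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]  # 0, 45, 90, 135 degrees
    avg = Counter()
    for dr, dc in offsets:
        for pair, n in cooccurrence(img, dr, dc).items():
            avg[pair] += n / len(offsets)
    top = sorted(avg.items(), key=lambda kv: kv[1], reverse=True)[:top_a]
    keep = {g for pair, _ in top for g in pair}
    return [(r, c) for r, row in enumerate(img)
            for c, v in enumerate(row) if v in keep]
```

For a real road frame one would use A = 10 as in the description; the toy test below uses A = 1 so the dominant gray value alone survives.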
Preferably, the method for obtaining the edge difference coefficient of each pixel point on the lane fitting line according to the symmetry characteristics of the image information in the neighborhood window taken by each pixel point on the lane fitting line comprises the following steps:
obtaining a neighborhood window with a preset size taking each pixel point on a lane fitting line as a central point, and dividing the neighborhood window into a left dividing region and a right dividing region by utilizing a straight line passing through the central point of the neighborhood window;
taking the absolute value of the difference between the average value of the gray values of the pixel points in the left divided area corresponding to each pixel point and the average value of the gray values of the pixel points in the right divided area as the left-right deviation measurement of each pixel point;
acquiring a center imbalance coefficient of each pixel point according to the distribution difference of gray values in the left and right divided areas;
the edge difference coefficient of each pixel point consists of a left deviation measurement, a right deviation measurement and a center imbalance coefficient of each pixel point, wherein the edge difference coefficient is in a direct proportion relation with the left deviation measurement, the right deviation measurement and the center imbalance coefficient.
Preferably, the method for obtaining the center imbalance coefficient of each pixel point according to the distribution difference of the gray values in the left and right divided areas comprises the following steps:
taking the accumulation of the absolute value of the gray value difference value of each pixel point and each pixel point in the corresponding left division area on the left division area as a first accumulated value of each pixel point, and taking the ratio of the first accumulated value of each pixel point to the number of the pixel points in the corresponding left division area of each pixel point as the left deviation amount of each pixel point;
taking the accumulation of the absolute value of the gray value difference value of each pixel point and each pixel point in the right dividing area corresponding to each pixel point on the right dividing area as a second accumulated value of each pixel point, and taking the ratio of the second accumulated value of each pixel point to the number of the pixel points in the right dividing area corresponding to each pixel point as the right deviation amount of each pixel point;
the absolute value of the difference between the left-side deviation amount and the right-side deviation amount of each pixel is used as the center unbalance coefficient of each pixel.
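A minimal Python sketch of the quantities just defined. The names and the multiplicative combination into one edge difference coefficient are assumptions — the claims only state a direct-proportion relation between the coefficient and its two factors:

```python
def edge_difference(center, left, right):
    """Edge difference coefficient of a lane-fitting-line pixel, built from
    the left-right deviation measure and the center imbalance coefficient.
    `center` is the pixel's gray value; `left`/`right` are the gray values
    in the two halves of its neighborhood window."""
    mean_l = sum(left) / len(left)
    mean_r = sum(right) / len(right)
    lr_deviation = abs(mean_l - mean_r)                       # left-right deviation measure
    dev_l = sum(abs(center - g) for g in left) / len(left)    # left-side deviation amount
    dev_r = sum(abs(center - g) for g in right) / len(right)  # right-side deviation amount
    imbalance = abs(dev_l - dev_r)                            # center imbalance coefficient
    return lr_deviation * imbalance   # one plausible direct-proportion combination
```

A perfectly symmetric window yields 0, matching the claim that lane-line pixels (symmetric surroundings) have small coefficients.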
Preferably, the method for obtaining the lane region in the fog-containing gray scale map according to the fuzzy characteristics of all the pixel points on the real lane line comprises the following steps:
acquiring the distribution variance of gray values in a local window taken by taking each pixel point as a central point on each real lane line, and taking the pixel point corresponding to the local window with the distribution variance smaller than a preset threshold value as the end point of the real lane line where the pixel point is located;
and taking a connecting line of the end points of the two adjacent real lane lines as an upper boundary, taking a connecting line of the intersection points of the two adjacent real lane lines and the fog-containing gray level map as a lower boundary, and taking a region formed by the upper boundary, the lower boundary and the two adjacent real lane lines as a lane region corresponding to the two adjacent real lane lines.
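The end-point test above reduces to a variance threshold on a local window; a hedged Python sketch (name and the threshold default are illustrative — the description later uses the empirical value 30):

```python
def is_endpoint(window, var_threshold=30.0):
    """A real-lane-line pixel is treated as the line's end point when the
    gray-value variance of its local window falls below the threshold
    (i.e. the dashed line has run out and only road surface remains)."""
    n = len(window)
    mu = sum(window) / n
    var = sum((g - mu) ** 2 for g in window) / n
    return var < var_threshold
```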
Preferably, the method for obtaining the local kurtosis heterogeneity index of each pixel in the lane region according to the texture difference between the central region of each pixel in the lane region and the search region of each pixel comprises the following steps:
obtaining a local kurtosis value of each neighborhood window according to gray value distribution among pixel points in the neighborhood window;
taking a neighborhood window taking each pixel point as a center area of each pixel point, taking accumulation of absolute values of differences between local kurtosis values of the center area of each pixel point and local kurtosis values of each neighborhood window in a search area taken by each pixel point as a numerator, taking sliding times of the neighborhood window in the search area as a denominator, and taking a ratio of the numerator to the denominator as a local kurtosis heterogeneity index of each pixel point.
Preferably, the method for obtaining the local kurtosis value of each neighborhood window according to the gray value distribution among the pixel points in the neighborhood window comprises the following steps:
taking the difference value of the gray value mean value of each pixel point in each neighborhood window and each pixel point in each neighborhood window as a numerator, taking the distribution variance of the gray value of each pixel point in each neighborhood window as a denominator, and taking the accumulation of the square of the ratio of the numerator to the denominator on each neighborhood window as a first composition factor of each pixel point;
and taking the average value of the first composition factors of all the pixel points in each neighborhood window as the local kurtosis value of each neighborhood window.
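Note that this "local kurtosis" is the patent's own statistic rather than the classical fourth-moment kurtosis. A literal Python sketch of the claim wording (squared ratio of the deviation to the window variance, averaged over the window; names assumed):

```python
def local_kurtosis(window):
    """Patent-style local kurtosis of a neighborhood window: mean of
    ((g - mean) / variance)^2 over the window's gray values; defined as
    0 for a constant window, where the variance would vanish."""
    n = len(window)
    mu = sum(window) / n
    var = sum((g - mu) ** 2 for g in window) / n
    if var == 0:
        return 0.0
    return sum(((g - mu) / var) ** 2 for g in window) / n
```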
Preferably, the method for obtaining the real boundary set according to the local kurtosis heterogeneity indexes of all the pixel points in the lane region comprises the following steps:
and marking the pixel points with the local kurtosis heterogeneity indexes larger than a preset threshold value as target pixel points, respectively calculating the difference value of gradient amplitude values between the target pixel points and each pixel point in the four adjacent areas of the target pixel points, and taking a set formed by the pixel points corresponding to the maximum value in the difference value of the gradient amplitude values and the target pixel points as a real boundary set.
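The 4-neighborhood gradient test can be sketched as follows; the gradient map and function name are illustrative, and ties between neighbors are broken arbitrarily:

```python
def boundary_partner(grad, r, c):
    """For a target pixel (r, c), return the 4-neighbor with the largest
    gradient-magnitude difference from it; target and partner together
    join the real boundary set."""
    h, w = len(grad), len(grad[0])
    nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    nbrs = [(i, j) for i, j in nbrs if 0 <= i < h and 0 <= j < w]
    return max(nbrs, key=lambda p: abs(grad[p[0]][p[1]] - grad[r][c]))
```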
Preferably, the method for obtaining the defogging window size corresponding to each pixel point according to the rectangular fitting result and the depth value of the real boundary set comprises the following steps:
taking the ratio of the rectangular degree of the fitting rectangle where each pixel point in the real boundary set is positioned to a preset threshold value as a second composition factor, and taking the sum of the local kurtosis heterogeneity index of each pixel point and the second composition factor as a vehicle judgment coefficient of each pixel point;
taking the sum of the vehicle judgment coefficient and the preset area coefficient of each pixel as a numerator, taking the product of the depth value of each pixel and the preset parameter as a denominator, and taking the rounding result of the ratio of the numerator to the denominator as the side length of the defogging window of each pixel.
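The side-length formula can be sketched directly. The rectangularity threshold, area coefficient, and depth parameter are unspecified in the text, so the defaults below are placeholders, not the patent's values:

```python
def defog_window_side(heterogeneity, rectangularity, depth,
                      rect_threshold=0.8, area_coeff=1.0, depth_param=0.05):
    """Per-pixel defogging-window side length: the vehicle judgment
    coefficient (heterogeneity index + rectangularity / threshold) plus an
    area coefficient, divided by depth * parameter, then rounded."""
    vehicle_coeff = heterogeneity + rectangularity / rect_threshold
    return round((vehicle_coeff + area_coeff) / (depth * depth_param))
```

With these placeholder constants, a pixel with a stronger vehicle signal or a smaller depth (closer to the camera) receives a larger window, which is the qualitative behavior the claim describes.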
Preferably, the method for obtaining the defogging image of the dark channel corresponding to the foggy gray scale map according to the defogging window size corresponding to each pixel point comprises the following steps:
and taking half of the side length of the defogging window corresponding to each pixel point as the filtering radius corresponding to each pixel point in the dark channel defogging algorithm, and obtaining a dark channel defogging image corresponding to the foggy gray scale image by utilizing minimum value filtering.
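The per-pixel minimum filtering can be sketched as below; this is a naive pure-Python illustration on a gray image (a real dark-channel implementation would take the per-channel minimum of an RGB image first and use an efficient filter):

```python
def dark_channel_min_filter(img, radii):
    """Minimum filter where each pixel uses its own radius (half the
    defogging-window side). `img` and `radii` are equal-sized 2-D lists;
    windows are clipped at the image border."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            k = radii[r][c]
            patch = [img[r2][c2]
                     for r2 in range(max(0, r - k), min(h, r + k + 1))
                     for c2 in range(max(0, c - k), min(w, c + k + 1))]
            out[r][c] = min(patch)
    return out
```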
The beneficial effects of the invention are as follows. By analyzing the degree to which heavy fog affects different regions of the road, an edge difference coefficient is constructed from the texture features of the lane lines; the coefficient takes the symmetry of the lane lines into account, so the lane region can be accurately separated from the fog-containing gray-scale map. A local kurtosis heterogeneity index is then constructed from the information change during the sliding of a neighborhood window within the neighborhood range of each pixel in the lane region; the index captures the large gray-level variation across different regions of a vehicle in the fog-containing gray-scale map. From it, a judgment coefficient for suspected vehicle contours is constructed, and the window size is finally adjusted adaptively by combining the depth information and region information of the image. The invention can adapt the window side length and filter radius to the image characteristics of different regions of the fog-containing gray-scale map, enhances the defogging of vehicle-window and license-plate regions while largely preserving texture and color information, and improves the accuracy with which the smart city road monitoring system recognizes vehicle driving states and violations on the road.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings can be obtained according to these drawings without inventive faculty for a person skilled in the art.
FIG. 1 is a schematic flow chart of a system for monitoring a smart city road based on computer vision according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an acquisition lane area according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a flowchart of a smart city road monitoring system based on computer vision according to an embodiment of the present invention is shown, the system includes: the system comprises a data acquisition module, a lane region extraction module, a window scale determination module and a road monitoring module.
The data acquisition module installs a high-definition CCD camera at a street intersection, uses it to collect traffic-monitoring video of the urban road in foggy weather, and performs defogging on each frame of lane image in the video; the lane images are RGB images. Since noise interference may occur when a lane image is acquired, noise removal must be performed on each frame to limit its effect on image quality. The invention denoises the lane images with Gaussian filtering: convolving the foggy image with a Gaussian kernel removes random noise in the image. The denoised lane image is then converted to gray scale, and the gray-scale result of the k-th denoised frame is recorded as a fog-containing gray-scale map. Gaussian filtering and image graying are known techniques, and their detailed processes are not repeated.
So far, the fog-containing gray level map of each frame in the urban road foggy-day monitoring video is obtained and is used for extracting the subsequent lane area.
The invention defogs each frame of fog-containing gray-scale map with the dark channel prior defogging algorithm, so the dark-channel image of the fog-containing gray-scale map must be acquired first. The traditional dark-channel algorithm forms the dark-channel image by taking the minimum value of each pixel over the three RGB channels. For a smart city road monitoring system, however, different vehicles on the road in each frame are at different distances from the CCD camera and therefore show different degrees of visual blur; in addition, the fog concentration in the fog-containing gray-scale map also varies with distance from the camera, so different degrees of defogging are needed before a clear image for monitoring the smart city road can be obtained.
First, each frame of fog-containing gray-scale map is divided into regions. Taking a fog-containing gray-scale map, denoted I, as an example, before computing the gray-level co-occurrence matrices of I, the 256 gray levels of its pixels are quantized to 64 gray levels. Because the lane-line region is affected by foggy weather, its gray values differ somewhat at different distances while its texture information stays the same, so quantization reduces the influence of the gray-value differences caused by the monitoring distance.
Gray-level co-occurrence matrices of I are obtained in the four directions 0°, 45°, 90°, and 135°. The element values of the four matrices are averaged position by position, and the matrix formed by these averages, arranged according to the element positions of any one direction's matrix, is recorded as the average gray-level co-occurrence matrix G of I. (The gray-level co-occurrence matrix is a known technique; its details are not repeated.) Because of the lane lines on the urban road, a certain number of elements of G deviate from the average, so the elements at all positions of G are sorted in descending order and the first A elements of the sorted result are taken, with A = 10 as an empirical value. Pixels of I whose gray value corresponds to one of the first A elements are marked as lane potential pixel points, and the set of all such points is the lane potential pixel point set P.
Because the shooting range of the CCD camera is limited in each frame of the smart-city road-monitoring video, a fitting algorithm is used to obtain fitted straight lines representing the lane lines in each frame: Hough line fitting is applied to the pixels of the lane potential pixel point set P. (Hough line fitting is a known technique; its details are not repeated.) To avoid interference from other auxiliary lines or road markings, the height and width of I are recorded as H and W respectively, and only fitted lines whose segment length exceeds a length threshold are kept as lane fitting lines on I. The length threshold is set to half the larger of H and W; an implementer may select an appropriate length threshold according to the actual road conditions.
Further, based on the characteristic that the image information in the left and right neighborhoods of a lane fitting line on the fog-containing gray-scale map I is symmetric, a neighborhood window of preset size is taken with each pixel point on the lane fitting line as its center point. The straight line through the center point and perpendicular to the upper and lower boundaries of the window divides the window into a left divided region and a right divided region; for pixel i on the lane fitting line these are recorded as L_i and R_i respectively.
Based on the above analysis, an edge difference coefficient V is constructed. The edge difference coefficient of pixel i on the lane fitting line is

$$V_i = M_i \cdot C_i, \qquad C_i = \left| D_i^{L} - D_i^{R} \right|$$

$$D_i^{L} = \frac{1}{N_L} \sum_{a \in L_i} \left| g_i - g_a \right|, \qquad D_i^{R} = \frac{1}{N_R} \sum_{b \in R_i} \left| g_i - g_b \right|, \qquad M_i = \left| \bar{g}_L - \bar{g}_R \right|$$

where C_i is the center imbalance coefficient of pixel i; g_i is the gray value of pixel i; N_L and N_R are the numbers of pixels in L_i and R_i respectively; g_a and g_b are the gray values of pixel a in the left divided region L_i and pixel b in the right divided region R_i; and \bar{g}_L and \bar{g}_R are the mean gray values of L_i and R_i, so that M_i is the left-right deviation measure of pixel i.
The larger the difference in the gray-value distribution within the left region L_i taken by pixel i, the larger the first accumulated value \sum_{a \in L_i} |g_i - g_a| and hence the left-side deviation amount D_i^L; similarly, the larger the difference between the gray values in the right region R_i and the gray value of pixel i, the larger the second accumulated value and hence the right-side deviation amount D_i^R. The stronger the symmetry between the pixels in the two divided regions of the window, the closer D_i^L and D_i^R, and the smaller the center imbalance coefficient C_i; the closer the mean gray values of L_i and R_i, the smaller the left-right deviation measure M_i, and the smaller V_i.
When a pixel point lies on a lane line, its two sides are somewhat symmetric, so the gray values on the left and right are essentially equal and the edge difference coefficient of the pixel is small. Pixels whose edge difference coefficient V exceeds a judgment threshold are therefore marked as non-lane-line pixels, the threshold being the empirical value 40. All marked pixels are deleted from the lane potential pixel point set P, the fitted straight lines of the remaining pixels in P are obtained again by Hough line fitting, and the result of a morphological closing operation on these fitted lines is recorded as the real lane lines.
Further, for any real lane line, acquiring a distribution variance of gray values in a local window taken by taking each pixel point as a center point on the real lane line, taking a pixel point corresponding to the local window with the distribution variance smaller than a threshold value as an end point of the real lane line, and taking an empirical value of 30 as the threshold value; and secondly, acquiring the end points of two adjacent real lane lines, taking a connecting line of the end points of the two adjacent real lane lines as an upper boundary, taking a connecting line of the intersection points of the two adjacent real lane lines and the fog-containing gray level map as a lower boundary, and taking a region formed by the upper boundary, the lower boundary and the two adjacent real lane lines as a lane region corresponding to the two adjacent lane lines, as shown in fig. 2.
So far, a lane area in the fog-containing gray level diagram is obtained and is used for acquiring the window scale of each pixel point in a subsequent dark channel defogging algorithm.
The window scale determining module: since the system mainly monitors violations on the road, the vehicles on the road must be analyzed with emphasis, and the vehicle situation is judged further within the lane region obtained above. Within the lane region, the gray values of the road surface are essentially uniform, and the gray level varies only in vehicle regions.
For a pixel point f in the lane region of the fog-containing gray-scale map I, a search region R of preset size is taken centered on f, and the 5 × 5 neighborhood window centered on f is taken as the central region. A sliding window W of the same size as the central region slides over the search region pixel by pixel, moving from left to right and from top to bottom in turn.
The local texture characteristics of the pixel point f are evaluated from the difference between the pixel information in the sliding window W and in the central area during sliding. The local kurtosis heterogeneity index of the pixel point f is calculated as

$$C_f=\frac{1}{m}\sum_{j=1}^{m}\left|K_j-K_c\right|,\qquad K_j=\frac{1}{N}\sum_{x=1}^{N}\left(\frac{\left(g_x^{\,j}-\mu_j\right)^2}{\sigma_j^2}\right)^{2}$$

where $K_j$ is the local kurtosis value at the j-th sliding position of the sliding window W; $N$ is the number of pixel points in the sliding window W, here 25; $g_x^{\,j}$ is the gray value of the x-th pixel point in the j-th sliding position; $\mu_j$ and $\sigma_j^2$ are respectively the mean and the distribution variance of the gray values of the pixel points in the j-th sliding position; $m$ is the number of slides needed for the sliding window W to traverse the search region once; and $K_c$ is the local kurtosis value when the sliding window overlaps the central area.
The worse the left-right symmetry of the gray values at the j-th sliding position of the sliding window W, the larger $(g_x^{\,j}-\mu_j)^2/\sigma_j^2$, hence the larger the first composition factor and the larger $K_j$; and the greater the difference between the central area of pixel point f and the different positions of the search region, the larger $|K_j-K_c|$. The local kurtosis heterogeneity index $C_f$ therefore reflects the gray-value difference between each pixel point in the lane region and its neighborhood: when the pixel point f lies on the road surface, the gray values within the search range are uniform and $C_f$ is small; when the pixel point f lies on a vehicle, the gray differences within the search range are larger and $C_f$ is larger.
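As an illustration only, the local kurtosis heterogeneity index can be computed as below. The 15 x 15 search region, the function names, and the use of the standard fourth-moment kurtosis estimate are assumptions; the 5 x 5 sliding window (N = 25) follows the text.

```python
import numpy as np

def local_kurtosis(patch):
    """K_j: mean of ((g - mu)^2 / var)^2 over a window, i.e. the
    standard kurtosis estimate (an assumption reconstructed from
    the patent's description)."""
    g = patch.astype(np.float64).ravel()
    mu, var = g.mean(), g.var()
    if var == 0:
        return 0.0                      # flat patch: no texture
    return float(np.mean(((g - mu) ** 2 / var) ** 2))

def kurtosis_heterogeneity(gray, r, c, win=5, search=15):
    """C_f: mean |K_j - K_c| over all positions of a win x win sliding
    window inside the search x search region centered on (r, c)."""
    h, s = win // 2, search // 2
    region = gray[r - s:r + s + 1, c - s:c + s + 1]
    k_center = local_kurtosis(gray[r - h:r + h + 1, c - h:c + h + 1])
    diffs = []
    for i in range(search - win + 1):            # top to bottom
        for j in range(search - win + 1):        # left to right
            k_j = local_kurtosis(region[i:i + win, j:j + win])
            diffs.append(abs(k_j - k_center))
    return float(np.mean(diffs))
```

On a uniform road-surface patch every sliding position yields the same statistic and the index vanishes; any gray-level disturbance inside the search region raises it.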
The local kurtosis heterogeneity index of each pixel point in the lane region is obtained, and a segmentation threshold for the indices is obtained with the maximum inter-class variance method; pixel points whose index exceeds the segmentation threshold are marked as target pixel points. The maximum inter-class variance method is a known technique and its specific process is not repeated here. Further, the Sobel operator is used to obtain the gradient amplitude of each pixel point in the fog-containing gray map; for any target pixel point, the differences in gradient amplitude between it and each pixel point in its four-neighborhood are calculated, and the set formed by the target pixel point together with the neighbor corresponding to the maximum difference is taken as the real boundary set.
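The maximum inter-class variance step can be sketched as follows on the 1-D set of heterogeneity indices; this is a generic Otsu implementation, not the patent's exact code, and the function name is an assumption.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Maximum inter-class variance (Otsu) threshold for a 1-D set of
    local kurtosis heterogeneity indices."""
    v = np.asarray(values, dtype=np.float64)
    hist, edges = np.histogram(v, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = edges[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue                        # one class empty: skip split
        mu0 = (p[:k] * centers[:k]).sum() / w0
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var_b = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var_b > best_var:
            best_var, best_t = var_b, edges[k]
    return float(best_t)
```

Pixel points whose index exceeds the returned threshold would then be marked as target pixel points.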
Further, rectangular fitting is performed on the pixel points of the real boundary set. Since most pixel points marked as target pixel points lie on vehicles traveling in the lane, the purpose of the rectangular fitting is to locate the approximately rectangular license plate region and window region on each vehicle.
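The "rectangular degree" $R_f$ used in the next step is not defined in this excerpt; a common choice, sketched here purely as an assumption, is the ratio of the point set's filled area to the area of its bounding rectangle, which approaches 1 for license-plate-like and window-like regions.

```python
import numpy as np

def rectangularity(points):
    """R_f stand-in: fraction of the axis-aligned bounding rectangle
    covered by the point set (1.0 for a perfectly filled rectangle).
    points: iterable of (row, col) tuples."""
    pts = np.asarray(points)
    h = pts[:, 0].max() - pts[:, 0].min() + 1   # bounding-box height
    w = pts[:, 1].max() - pts[:, 1].min() + 1   # bounding-box width
    return len(set(map(tuple, points))) / float(h * w)
```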
In addition, the distance between each pixel point in a frame of the fog-containing gray map of the urban road monitoring video and the camera is an important basis for choosing the adaptive window size. A certain motion correlation exists between pixel points in two adjacent frames of the monitoring video. Each frame of fog-containing gray map is taken as input, and the shape-from-shading (SFS) algorithm is used to obtain the corresponding depth map, which has the same size as the original image and in which the value of each pixel point is its depth value. The closer a pixel point is to the camera, the lower the fog concentration; conversely, the farther away, the higher the fog concentration. Therefore, the defogging window size of each pixel point is obtained from the rectangular fitting result and the depth value, and the defogging window scale of a pixel point f in the real boundary set is calculated as
$$V_f=C_f+\frac{R_f}{T},\qquad S_f=\left\lfloor\frac{V_f+\alpha}{\beta\, d_f}\right\rfloor$$

where $V_f$ is the vehicle determination coefficient of pixel point f; $C_f$ is the local kurtosis heterogeneity index of pixel point f; $R_f$ is the rectangularity of the fitted rectangle containing pixel point f; $T$ is the rectangularity decision threshold, taking the empirical value 3; $S_f$ is the defogging window scale of pixel point f; $\lfloor\cdot\rfloor$ is the floor function; $d_f$ is the normalized depth value of pixel point f in the depth map; $\beta$ is the distance determining factor, of size 0.5; and $\alpha$ is a preset area coefficient.
The more likely the fitted rectangle containing pixel point f is a rectangular area on a vehicle, the larger $R_f$, hence the larger the second composition factor $R_f/T$, the larger the vehicle determination coefficient $V_f$, the larger the corresponding defogging window scale $S_f$, and the more completely the detail information is retained.
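The two formulas above can be combined into one hypothetical helper; the rectangularity threshold T = 3 and the distance factor beta = 0.5 come from the text, while the default value of the area coefficient alpha and the function name are assumptions.

```python
import math

def defog_window_scale(c_f, rect_degree, depth_norm,
                       rect_thresh=3.0, area_coef=1.0, beta=0.5):
    """S_f = floor((V_f + alpha) / (beta * d_f)), with the vehicle
    determination coefficient V_f = C_f + R_f / T.
    rect_thresh (T = 3) and beta (0.5) follow the patent; area_coef
    (alpha) is a placeholder whose value the excerpt does not give."""
    v_f = c_f + rect_degree / rect_thresh        # vehicle decision coefficient
    d = max(depth_norm, 1e-6)                    # guard against zero depth
    return math.floor((v_f + area_coef) / (beta * d))
```

A pixel closer to the camera (smaller normalized depth) receives a larger defogging window, matching the reasoning given in the road monitoring module.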
Further, the defogging window scale of each pixel point in the real boundary set is obtained, and the window size and filtering radius of each pixel point in the dark channel defogging process are determined from it: in the invention, half of the defogging window scale of a pixel point is taken as its filtering radius. For example, if the defogging window scale of pixel point f is $S_f$, the filtering radius of pixel point f is $S_f/2$.
The defogging window scale of each pixel point in the real boundary set is obtained, to be used later for producing a monitoring video with higher definition.
The road monitoring module: for pixel points of the fog-containing gray map that are not in the real boundary set, the defogging scale and filtering radius of the original algorithm are retained. The larger the defogging window of a pixel point, the more completely its color and detail information is preserved. Pixel points close to the camera are less affected by fog and carry richer vehicle information, so a larger defogging window is used for filtering; for the license plate and window regions of a vehicle, the large-window filtering is further adjusted to strengthen the defogging effect. Pixel points far from the camera are strongly affected by fog and carry little color and texture information, so a smaller defogging window is used for filtering, which also strengthens the defogging effect.
The filtering used in the invention is minimum value filtering. The defogging window size and filtering radius of each pixel point in each frame of fog-containing gray map are obtained adaptively by the above steps, and a clear image of each frame is obtained with the dark channel defogging algorithm; the dark channel defogging algorithm is a known technique and its specific process is not repeated here. Further, the clear images corresponding to each frame of lane image in the urban road monitoring video are obtained and transmitted in time order to the road monitoring center, which records whether vehicles on the road commit illegal behavior according to their running states in the clear images.
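The per-pixel minimum filtering that replaces the fixed-radius dark channel step can be sketched as below; the function name is an assumption, and a practical implementation would vectorize the loop.

```python
import numpy as np

def adaptive_dark_channel(gray, radius_map):
    """Per-pixel minimum filter: each pixel's dark-channel value is the
    minimum gray value inside its own window, whose radius comes from
    radius_map (half the defogging window scale, as in the patent)."""
    H, W = gray.shape
    dark = np.empty_like(gray)
    for r in range(H):
        for c in range(W):
            rad = int(radius_map[r, c])
            r0, r1 = max(r - rad, 0), min(r + rad + 1, H)
            c0, c1 = max(c - rad, 0), min(c + rad + 1, W)
            dark[r, c] = gray[r0:r1, c0:c1].min()   # minimum value filtering
    return dark
```

Pixels assigned a larger radius pull their minimum from a wider neighborhood, which is how the adaptive window scale feeds directly into the defogging step.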
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent substitution, or improvement made within the principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. The intelligent urban road monitoring system based on computer vision is characterized by comprising the following modules:
the data acquisition module is used for converting the road foggy image acquired by the image acquisition device into a foggy gray level image;
the lane region extraction module acquires a lane potential pixel point set according to gray level co-occurrence matrixes in different directions on the foggy gray level map; acquiring a lane fitting line based on a lane potential pixel point set by using a fitting algorithm; acquiring an edge difference coefficient of each pixel point on the lane fitting line according to symmetry characteristics of image information in a neighborhood window taken by each pixel point on the lane fitting line; marking pixel points with edge difference coefficients larger than a preset threshold value, taking a closed operation result of a fitting straight line obtained after the marked pixel points are deleted in a lane potential pixel point set as a real lane line, and obtaining a lane region in the fog-containing gray scale map according to fuzzy characteristics of all pixel points on the real lane line;
the window scale determining module is used for obtaining a local kurtosis heterogeneity index of each pixel point in the lane region according to texture differences between a central region of each pixel point in the lane region and different positions in a search region taken by each pixel point; acquiring a real boundary set according to the local kurtosis heterogeneity indexes of all pixel points in the lane region; obtaining a depth value of each pixel point on the fog gray scale map by using a monocular depth estimation algorithm; obtaining the defogging window size corresponding to each pixel point according to the rectangular fitting result and the depth value of the real boundary set;
the road monitoring module acquires a dark channel defogging image corresponding to the foggy gray level map according to the defogging window size corresponding to each pixel point; and realizing real-time monitoring of vehicles on the smart city road according to the defogging images of the dark channels.
2. The computer vision-based intelligent urban road monitoring system according to claim 1, wherein the method for obtaining the set of potential pixels of the lane according to the gray level co-occurrence matrix of different directions on the foggy gray level map comprises the following steps:
acquiring the average value of element values of a gray level co-occurrence matrix in the same position of a gray level graph in a preset number of directions, and taking a matrix formed by the average value according to the positions of elements in the gray level co-occurrence matrix in any direction as an average gray level co-occurrence matrix of the gray level graph;
acquiring an arrangement result of element values in the average gray level co-occurrence matrix according to a descending order, and taking a set formed by gray values corresponding to a preset number of elements in the arrangement result as a screening gray value set;
and marking pixel points with gray values in a screening gray value set in the fog-containing gray map as lane potential pixel points, and taking a set formed by all the lane potential pixel points as a lane potential pixel point set.
3. The computer vision-based intelligent urban road monitoring system according to claim 1, wherein the method for obtaining the edge difference coefficient of each pixel point on the lane fitting line according to the symmetry feature of the image information in the neighborhood window taken by each pixel point on the lane fitting line comprises the following steps:
obtaining a neighborhood window with a preset size taking each pixel point on a lane fitting line as a central point, and dividing the neighborhood window into a left dividing region and a right dividing region by utilizing a straight line passing through the central point of the neighborhood window;
taking the absolute value of the difference between the average value of the gray values of the pixel points in the left divided area corresponding to each pixel point and the average value of the gray values of the pixel points in the right divided area as the left-right deviation measurement of each pixel point;
acquiring a center imbalance coefficient of each pixel point according to the distribution difference of gray values in the left and right divided areas;
the edge difference coefficient of each pixel point consists of the left-right deviation measurement and the center imbalance coefficient of the pixel point, wherein the edge difference coefficient is directly proportional to both the left-right deviation measurement and the center imbalance coefficient.
4. The computer vision-based intelligent urban road monitoring system according to claim 3, wherein the method for obtaining the center unbalance coefficient of each pixel point according to the distribution difference of gray values in the left and right divided areas comprises the following steps:
taking the accumulation of the absolute value of the gray value difference value of each pixel point and each pixel point in the corresponding left division area on the left division area as a first accumulated value of each pixel point, and taking the ratio of the first accumulated value of each pixel point to the number of the pixel points in the corresponding left division area of each pixel point as the left deviation amount of each pixel point;
taking the accumulation of the absolute value of the gray value difference value of each pixel point and each pixel point in the right dividing area corresponding to each pixel point on the right dividing area as a second accumulated value of each pixel point, and taking the ratio of the second accumulated value of each pixel point to the number of the pixel points in the right dividing area corresponding to each pixel point as the right deviation amount of each pixel point;
the absolute value of the difference between the left-side deviation amount and the right-side deviation amount of each pixel is used as the center unbalance coefficient of each pixel.
5. The computer vision-based intelligent urban road monitoring system according to claim 1, wherein the method for obtaining the lane area in the foggy gray scale map according to the fuzzy characteristics of all the pixels on the real lane is as follows:
acquiring the distribution variance of gray values in a local window taken by taking each pixel point as a central point on each real lane line, and taking the pixel point corresponding to the local window with the distribution variance smaller than a preset threshold value as the end point of the real lane line where the pixel point is located;
and taking a connecting line of the end points of the two adjacent real lane lines as an upper boundary, taking a connecting line of the intersection points of the two adjacent real lane lines and the fog-containing gray level map as a lower boundary, and taking a region formed by the upper boundary, the lower boundary and the two adjacent real lane lines as a lane region corresponding to the two adjacent real lane lines.
6. The computer vision-based intelligent urban road monitoring system according to claim 1, wherein the method for obtaining the local kurtosis heterogeneity index of each pixel point in the lane region according to the texture difference between the central region where each pixel point in the lane region is located and the search region taken by each pixel point is as follows:
obtaining a local kurtosis value of each neighborhood window according to gray value distribution among pixel points in the neighborhood window;
taking a neighborhood window taking each pixel point as a center area of each pixel point, taking accumulation of absolute values of differences between local kurtosis values of the center area of each pixel point and local kurtosis values of each neighborhood window in a search area taken by each pixel point as a numerator, taking sliding times of the neighborhood window in the search area as a denominator, and taking a ratio of the numerator to the denominator as a local kurtosis heterogeneity index of each pixel point.
7. The computer vision-based intelligent urban road monitoring system according to claim 6, wherein the method for obtaining the local kurtosis value of each neighborhood window according to the gray value distribution among the pixels in the neighborhood window comprises:
taking the difference value of the gray value mean value of each pixel point in each neighborhood window and each pixel point in each neighborhood window as a numerator, taking the distribution variance of the gray value of each pixel point in each neighborhood window as a denominator, and taking the accumulation of the square of the ratio of the numerator to the denominator on each neighborhood window as a first composition factor of each pixel point;
and taking the average value of the first composition factors of all the pixel points in each neighborhood window as the local kurtosis value of each neighborhood window.
8. The computer vision-based intelligent urban road monitoring system according to claim 1, wherein the method for obtaining the real boundary set according to the local kurtosis heterogeneity index of all pixels in the lane region is as follows:
and marking the pixel points with the local kurtosis heterogeneity indexes larger than a preset threshold value as target pixel points, respectively calculating the difference value of gradient amplitude values between the target pixel points and each pixel point in the four adjacent areas of the target pixel points, and taking a set formed by the pixel points corresponding to the maximum value in the difference value of the gradient amplitude values and the target pixel points as a real boundary set.
9. The computer vision-based intelligent urban road monitoring system according to claim 1, wherein the method for obtaining the defogging window size corresponding to each pixel point according to the rectangular fitting result and the depth value of the real boundary set comprises the following steps:
taking the ratio of the rectangular degree of the fitting rectangle where each pixel point in the real boundary set is positioned to a preset threshold value as a second composition factor, and taking the sum of the local kurtosis heterogeneity index of each pixel point and the second composition factor as a vehicle judgment coefficient of each pixel point;
taking the sum of the vehicle judgment coefficient and the preset area coefficient of each pixel as a numerator, taking the product of the depth value of each pixel and the preset parameter as a denominator, and taking the rounding result of the ratio of the numerator to the denominator as the side length of the defogging window of each pixel.
10. The computer vision-based intelligent urban road monitoring system according to claim 1, wherein the method for obtaining the dark channel defogging image corresponding to the foggy gray scale map according to the defogging window size corresponding to each pixel point comprises the following steps:
and taking half of the side length of the defogging window corresponding to each pixel point as the filtering radius corresponding to each pixel point in the dark channel defogging algorithm, and obtaining a dark channel defogging image corresponding to the foggy gray scale image by utilizing minimum value filtering.
CN202311346643.7A 2023-10-18 2023-10-18 Smart city road monitoring system based on computer vision Active CN117094914B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311346643.7A CN117094914B (en) 2023-10-18 2023-10-18 Smart city road monitoring system based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311346643.7A CN117094914B (en) 2023-10-18 2023-10-18 Smart city road monitoring system based on computer vision

Publications (2)

Publication Number Publication Date
CN117094914A true CN117094914A (en) 2023-11-21
CN117094914B CN117094914B (en) 2023-12-12

Family

ID=88772063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311346643.7A Active CN117094914B (en) 2023-10-18 2023-10-18 Smart city road monitoring system based on computer vision

Country Status (1)

Country Link
CN (1) CN117094914B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117522758A (en) * 2024-01-04 2024-02-06 深圳对对科技有限公司 Smart community resource management method and system based on big data
CN117636482A (en) * 2024-01-26 2024-03-01 东莞市杰瑞智能科技有限公司 Visual detection system for urban road personnel behavior
CN117788345A (en) * 2023-12-11 2024-03-29 济宁华矿地质勘查有限公司 Land remote sensing image correction and enhancement method
CN117952958A (en) * 2024-03-21 2024-04-30 陕西中环机械有限责任公司 Drum filter screen intelligent health detection method based on machine learning
CN117994165A (en) * 2024-04-02 2024-05-07 湖南亿盛科技有限公司 Intelligent campus management method and system based on big data
CN118155149A (en) * 2024-02-27 2024-06-07 嘉兴索思瑞克电子科技有限公司 Intelligent monitoring system for smart city roads
CN118612391A (en) * 2024-08-08 2024-09-06 联一信息技术(北京)有限公司 Security monitoring video real-time transmission method and system based on Internet of things

Citations (5)

Publication number Priority date Publication date Assignee Title
US20120301033A1 (en) * 2011-05-23 2012-11-29 Fuji Xerox Co., Ltd. Image processing apparatus, image processing method, and computer readable medium
US20180197021A1 (en) * 2017-01-11 2018-07-12 Foundation Of Soongsil University Industry Cooperation Lane detection device, lane departure determination device, lane detection method and lane departure determination method
CN111275698A (en) * 2020-02-11 2020-06-12 长安大学 Visibility detection method for fog road based on unimodal deviation maximum entropy threshold segmentation
US20200380649A1 (en) * 2019-05-27 2020-12-03 Canon Kabushiki Kaisha Image processing apparatus, image processing method, image capturing apparatus, and storage medium
CN112308801A (en) * 2020-11-20 2021-02-02 汪秀英 Road traffic tracking management method and system based on big data image acquisition

Non-Patent Citations (1)

Title
Guo Mengfan et al.: "A dynamic highway visibility detection method based on threshold-band segmentation", Journal of Taiyuan University of Science and Technology, vol. 44, no. 2, pages 125-130 *

Cited By (10)

Publication number Priority date Publication date Assignee Title
CN117788345A (en) * 2023-12-11 2024-03-29 济宁华矿地质勘查有限公司 Land remote sensing image correction and enhancement method
CN117522758A (en) * 2024-01-04 2024-02-06 深圳对对科技有限公司 Smart community resource management method and system based on big data
CN117522758B (en) * 2024-01-04 2024-03-26 深圳对对科技有限公司 Smart community resource management method and system based on big data
CN117636482A (en) * 2024-01-26 2024-03-01 东莞市杰瑞智能科技有限公司 Visual detection system for urban road personnel behavior
CN117636482B (en) * 2024-01-26 2024-04-09 东莞市杰瑞智能科技有限公司 Visual detection system for urban road personnel behavior
CN118155149A (en) * 2024-02-27 2024-06-07 嘉兴索思瑞克电子科技有限公司 Intelligent monitoring system for smart city roads
CN117952958A (en) * 2024-03-21 2024-04-30 陕西中环机械有限责任公司 Drum filter screen intelligent health detection method based on machine learning
CN117952958B (en) * 2024-03-21 2024-06-07 陕西中环机械有限责任公司 Drum filter screen intelligent health detection method based on machine learning
CN117994165A (en) * 2024-04-02 2024-05-07 湖南亿盛科技有限公司 Intelligent campus management method and system based on big data
CN118612391A (en) * 2024-08-08 2024-09-06 联一信息技术(北京)有限公司 Security monitoring video real-time transmission method and system based on Internet of things

Also Published As

Publication number Publication date
CN117094914B (en) 2023-12-12

Similar Documents

Publication Publication Date Title
CN117094914B (en) Smart city road monitoring system based on computer vision
CN111310558B (en) Intelligent pavement disease extraction method based on deep learning and image processing method
CN111145161B (en) Pavement crack digital image processing and identifying method
CN105488454B (en) Front vehicles detection and ranging based on monocular vision
CN106651872B (en) Pavement crack identification method and system based on Prewitt operator
CN109657632B (en) Lane line detection and identification method
CN107330376B (en) Lane line identification method and system
CN116758059B (en) Visual nondestructive testing method for roadbed and pavement
CN109255350B (en) New energy license plate detection method based on video monitoring
CN104899554A (en) Vehicle ranging method based on monocular vision
CN103630496B (en) Based on the traffic video visibility detecting method of road surface apparent brightness and least square method
CN116630813B (en) Highway road surface construction quality intelligent detection system
CN108280450A (en) A kind of express highway pavement detection method based on lane line
CN108171695A (en) A kind of express highway pavement detection method based on image procossing
CN112731436B (en) Multi-mode data fusion travelable region detection method based on point cloud up-sampling
CN113554646B (en) Intelligent urban road pavement detection method and system based on computer vision
CN113239733B (en) Multi-lane line detection method
CN113111722A (en) Automatic driving target identification method based on improved Mask R-CNN
CN111753749A (en) Lane line detection method based on feature matching
CN106407959A (en) Low-illumination complicated background license plate positioning method based on wavelet transform and SVM
CN115909256B (en) Road disease detection method based on road visual image
CN111652033A (en) Lane line detection method based on OpenCV
CN110349415B (en) Driving speed measuring method based on multi-scale transformation
CN116152115A (en) Garbage image denoising processing method based on computer vision
CN107301388A (en) A kind of automatic vehicle identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant