CN111611930B - Parking space line detection method based on illumination consistency - Google Patents

Parking space line detection method based on illumination consistency

Info

Publication number
CN111611930B
CN111611930B (application CN202010440287.5A)
Authority
CN
China
Prior art keywords
pixel
value
illumination
pixel point
image
Prior art date
Legal status
Active
Application number
CN202010440287.5A
Other languages
Chinese (zh)
Other versions
CN111611930A (en)
Inventor
周小兵
刘诗萌
但孝杰
陈志华
刘潇丽
仇谷浩
仇隽
Current Assignee
Huayu Automotive Systems Co Ltd
Original Assignee
Huayu Automotive Systems Co Ltd
Priority date
Filing date
Publication date
Application filed by Huayu Automotive Systems Co Ltd filed Critical Huayu Automotive Systems Co Ltd
Priority to CN202010440287.5A priority Critical patent/CN111611930B/en
Publication of CN111611930A publication Critical patent/CN111611930A/en
Application granted granted Critical
Publication of CN111611930B publication Critical patent/CN111611930B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/586 Recognition of moving objects or obstacles; Recognition of traffic objects: of parking space
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern by matching or filtering
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a parking space line detection method based on illumination consistency. The method denoises the image by filtering; processes the image with a four-way sensitive filter to extract an illumination-consistency feature for each pixel; binarizes each pixel according to the illumination-consistency feature to obtain a number of connected regions; and applies a morphological opening to the image, keeping only those connected regions whose area exceeds 60% of the area of the largest connected region. By computing the contribution of neighbourhood pixels from four directions with a one-dimensional four-way sensitive filter, the illumination-consistency feature removes the influence of illumination on the original image when applied to parking space line detection, and the binarization step preserves the region of interest to the greatest extent, so that parking space lines are retained accurately under different illumination conditions.

Description

Parking space line detection method based on illumination consistency
Technical Field
The invention belongs to the field of automatic driving of vehicles, and relates to a parking space line detection method.
Background
With the rapid development of technology, automatic driving has become a research hotspot in artificial intelligence, and intelligent parking is becoming increasingly popular. Among the many high-tech systems fitted to modern cars, the automatic parking system is a prominent example. It improves the convenience of parking, especially for drivers who find parking confusing or intimidating, effectively relieving the difficulty of parking, shortening parking time and thereby improving traffic network efficiency.
In building an automatic parking system, a key issue is how to quickly and accurately detect and locate parking spaces around the vehicle. The sensors used for detecting parking spaces are mainly visual sensors and ranging sensors.
In conventional automatic parking systems, parking spaces are sensed mainly by ultrasonic radar, which can easily identify adjacent vehicles. However, the working principle of ultrasonic radar limits it: without an adjacent vehicle it cannot find a free parking space, its accuracy depends on the positions of adjacent vehicles, and it adapts poorly to varied parking scenes.
In contrast, methods based on multiple vision sensors can identify parking spaces more accurately, since their recognition process does not depend on the presence of neighbouring vehicles. In recent years, with the rapidly growing demand for automatic parking systems, various parking space line detection methods based on multiple vision sensors have been proposed. However, most previous methods do not consider the influence of different illumination conditions on detection; the method of the present invention aims to solve parking space line recognition under different illumination conditions.
Disclosure of Invention
The invention aims to provide a parking space line detection method based on illumination consistency, which introduces an illumination-consistency feature into the parking space line detection system of a parking lot so as to achieve higher detection accuracy.
In order to achieve the above purpose, the invention provides a parking space line detection method based on consistent illumination, comprising the following steps:
S1: denoising the image to be detected by filtering;
S2: processing the denoised image with a four-way sensitive filter and extracting an illumination-consistency feature for each pixel of the image;
S3: binarizing each pixel of the image according to the illumination-consistency feature ζ_u to obtain a number of connected regions;
S4: applying a morphological opening to the image, computing the opening result of each connected region, and keeping only the connected regions whose area exceeds 60% of that of the largest connected region.
In step S1, the filtering employed includes Gaussian filtering and guided filtering.
The step S2 includes:
Step S21: calculating the components of the four-way sensitive filter value of each pixel in the denoised image to be detected along 4 different directions;
The component of the four-way sensitive filter value of the u-th pixel along one direction λ is:

G_u^λ(b) = Σ_v β^|u−v| · V(Y_v, b)

where Y_v is the gray value of the v-th pixel and H is the histogram of the gray values Y_v; B is the number of bins of the histogram H; β ∈ (0, 1) is the falloff factor from the u-th pixel to the v-th pixel; |u−v| is the spatial distance between the u-th and v-th pixels; V(Y_v, b) is the gray-value distribution function of the v-th pixel: V(Y_v, b) equals 1 when the gray value Y_v falls in the b-th bin and 0 otherwise, with b an integer between 1 and B;
Step S22: accumulating the pixel weights from the components of the four-way sensitive filter value of each pixel along the 4 directions, to obtain the four-way sensitive filter value of each pixel of the image on each bin of the histogram H;
Step S23: normalizing the four-way sensitive filter value of each pixel on each bin of the histogram H to obtain a normalization factor for each pixel;
Step S24: using the normalization factors of the pixels to obtain the four-way sensitive filter difference between the two local regions centred on the u-th and the v-th pixel;
Step S25: extracting the illumination-consistency feature;
In step S21, the component G_u^λ(b) of the four-way sensitive filter value of the u-th pixel along a direction λ is computed with an integral-histogram method, on the basis of the previous pixel:

G_u^λ(b) = β · G_{u−1}^λ(b) + V(Y_u, b)

where β ∈ (0, 1) is the falloff factor; V(Y_u, b) is the gray-value distribution function of the u-th pixel, equal to 1 when the gray value Y_u falls in the b-th bin and 0 otherwise; and G_{u−1}^λ(b) is the component of the four-way sensitive filter value at the (u−1)-th pixel along the direction λ;
In step S22, the four-way sensitive filter value of the u-th pixel on the b-th bin of the histogram H is:

G_u(b) = Σ_{λ=1}^{4} G_u^λ(b) − 3 · V(Y_u, b)

where G_u(b) is the four-way sensitive filter value of the u-th pixel at the b-th bin of the histogram H; G_u^λ(b) are its components along the 4 directions λ; and V(Y_u, b) is the gray-value distribution function of the u-th pixel, equal to 1 when the gray value Y_u falls in the b-th bin and 0 otherwise.
In step S23, the normalization factor m_u of the u-th pixel is:

m_u = Σ_{b=1}^{B} G_u(b)

where m_u is the normalization factor, λ is a direction and β ∈ (0, 1) is the control parameter.
Step S24 comprises: normalizing the four-way sensitive filter value of each pixel with the normalization factor, and taking the sum of the differences of the cumulative distribution histograms of the normalized four-way sensitive filter values G_u and G_v over the bins of the histogram H as the four-way sensitive filter difference between the two local regions centred on the u-th and the v-th pixel.
Step S25 comprises:
Step S251: obtaining the transformation formula relating the gray values of the pixels of the image to be detected before and after an affine illumination transformation;
Step S252: taking the integral of the four-way sensitive filter value G_u over the interval [b_u − r_u, b_u + r_u] as the illumination-consistency feature ζ_u before the affine illumination transformation, and computing the illumination-consistency feature ζ′_u after the transformation.
Step S2 further includes a step S26: introducing a soft smoothing term to optimize the illumination-consistency feature ζ_u.
In step S3, during binarization, a connected region is a region composed of pixels whose illumination-consistency feature ζ_u exceeds a threshold.
In step S4, the connected regions are computed with a breadth-first connected-region algorithm.
The parking space line detection method based on illumination consistency further comprises a step S5: detecting the corner points of the parking space lines with an image skeletonization algorithm.
In the parking space line detection method based on illumination consistency, a one-dimensional four-way sensitive filter computes the contributions of neighbourhood pixels from the four directions around each pixel, so that the illumination-consistency feature can be applied to parking space line detection to remove the influence of illumination on the original image; binarizing the image then further reduces the influence of differing illumination conditions on detection and preserves the region of interest to the greatest extent, so that parking space lines are retained accurately under different illumination conditions. In addition, the method removes the interference of spurious objects with a morphological opening, computes the area of each connected region and keeps those larger than 60% of the largest connected region, and detects the corner points of the parking space lines with an image skeletonization algorithm to remove remaining interference, thereby recognizing parking space lines with high accuracy. The method first denoises the image with Gaussian filtering and guided filtering, removing noise from the original image. Because the connected regions are built on the illumination-consistency feature, stray points within the parking space lines can be removed.
Drawings
Fig. 1 is a flowchart of a parking space line detection method according to an embodiment of the present invention.
Fig. 2A is a schematic diagram of an open operation result of the parking space line detection method shown in fig. 1 in step S4.
Fig. 2B is a schematic diagram of the result of reserving the open calculation result of the connected region of more than 60% of the area of the largest connected region in step S4 according to the parking space line detection method shown in fig. 1.
Fig. 3A-3B are graphs comparing the results of the parking space line detection method of the present invention with the results of the conventional Hough method, wherein fig. 3A shows the results of the conventional Hough method, and fig. 3B shows the results of the parking space line detection method of the present invention.
Detailed Description
The parking space line detection method based on illumination consistency typically runs on the image processing unit of a vehicle for parking space line detection. The reference implementation was developed in Visual Studio 2015, calling the OpenCV library to realize the algorithm.
As shown in fig. 1, the parking space line detection method based on consistent illumination of the invention comprises the following steps:
step S1: and denoising the image to be detected by adopting filtering.
In this step S1, the filtering employed includes Gaussian filtering and guided filtering.
Gaussian blur is the convolution of the (grayscale) image I to be measured with a Gaussian kernel:

I_σ = I * G_σ

where * denotes convolution and G_σ is a two-dimensional Gaussian kernel with standard deviation σ, defined as:

G_σ(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
the guiding filtering is to average all pixels in a window w taking the ith pixel as the center in the image p to obtain the ith pixel in the image q to be detected, and the guiding filtering corresponds to the formula:
wherein, the liquid crystal display device comprises a liquid crystal display device,n is the number of pixels in window w.
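As a rough sketch of step S1, both denoising operations can be written directly in NumPy. The kernel size, σ and window radius below are illustrative choices, not values fixed by the patent:

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """2-D Gaussian kernel G_sigma, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def convolve2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Same-size convolution with edge replication (I * G_sigma)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out

def window_mean(img: np.ndarray, radius: int) -> np.ndarray:
    """Average of all pixels in the window w centred on each pixel (q_i)."""
    box = np.ones((2 * radius + 1, 2 * radius + 1))
    return convolve2d(img, box / box.size)
```

Both filters leave a constant image unchanged, which is a quick sanity check on the padding and normalization.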
Step S2: processing the denoised image to be detected by using a four-way sensitive filter (QSF), and extracting illumination consistent characteristics of each pixel point of the image to be detected;
the four-way sensitive filter (Quadrilateral Sensitive Filtering, QSF) is a new local feature provided by the invention, the feature is based on a common local histogram, the influence of all pixel points in four directions in an image is studied, and the dense feature on the image is obtained. Based on the four-way sensitive filter, a new illumination uniformity feature (Illumination Uniform Feature, IUF) can be further proposed, so that when the illumination component of the pixel block is changed, the illumination uniformity feature can be kept unchanged, and the problem of no robustness to illumination change is effectively solved.
The step S2 specifically includes:
Step S21: calculating the components of the QSF value of each pixel in the denoised image to be detected along 4 different directions; the four-way sensitive filter (QSF) value of each pixel can then be obtained from these directional components.
Next, how to calculate the component of the four-way sensitive filter value of the pixel point along one of the directions is explained.
The component of the four-way sensitive filter value of the u-th pixel along one direction λ is:

G_u^λ(b) = Σ_{v=1}^{u} β^|u−v| · V(Y_v, b)    (3-1)

where the sum runs over the one-dimensional image H(Y) formed by the straight line through the u-th pixel along the direction λ of the denoised image, containing all pixels from the image boundary up to the u-th pixel; λ denotes the direction and is an integer in the interval [1, 4], standing for up, down, left and right respectively; Y_v is the gray value of the v-th pixel; H is the histogram of the gray values Y_v, a vector of dimension B, where B is the number of bins (a histogram is represented here as a column vector, each entry of which is one bin, also called a slot); β ∈ (0, 1) is a control parameter, the falloff factor from the u-th pixel to the v-th pixel; |u−v| is the spatial distance between the u-th and the v-th pixel; and V(Y_v, b) is the gray-value distribution function of the v-th pixel, equal to 1 when the gray value Y_v falls in the b-th bin and 0 otherwise, with b an integer between 1 and B.
In this embodiment, the component G_u^λ(b) of the four-way sensitive filter value of the u-th pixel along a direction λ is computed with an integral-histogram method, on the basis of the previous pixel. The recursion is:

G_u^λ(b) = β · G_{u−1}^λ(b) + V(Y_u, b)    (3-2)

where β ∈ (0, 1) is the falloff factor; V(Y_u, b) is the gray-value distribution function of the u-th pixel, equal to 1 when the gray value Y_u falls in the b-th bin and 0 otherwise; and G_{u−1}^λ(b) is the component of the four-way sensitive filter value at the (u−1)-th pixel along the direction λ.
Step S22: and accumulating the weights of the pixel points according to the components of the four-way sensitive filter value of each pixel point along 4 different directions to obtain a four-way sensitive filter (QSF) value of each pixel point of the image to be detected on each slot of the histogram H.
Similar to the local histogram, the component of the four-way sensitive filter value of the u-th pixel point along one of the directions λCan be regarded as integral values of all pixel contribution values in a one-dimensional image. And combining the contribution values and the weight values of all the pixel points, wherein the contribution values and the weight values of all the pixel points decrease exponentially as the space distance between the pixel points becomes larger.
The value of the four-way sensitive filter of the ith pixel point on the ith bin of the histogram H is as follows:
wherein G is u (b) Representing the four-way sensitive filter value of the u-th pixel at the b-th bin of the histogram H,is the component of the four-way sensitive filter value on the u-1 pixel point along one of the directions lambda; v (Y) u B) represents the gray value distribution function of the ith pixel, when the gray value Y of the ith pixel u When belonging to the b-th groove, V (Y u The value of b) is 1, otherwise V (Y) u The value of b) is 0.
As shown in formula (3-3) calculating the vector sum of the components of the 4 directions. Since each QSF component takes into account the pixel point u itself, the subtraction of 3V (Y v B). The QSF is calculated according to the gray value of the pixel point of the image, the image needs to be preprocessed before the OSF is calculated, and the QSF is calculated on the basis of converting the image after denoising into the gray image.
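A minimal NumPy sketch of steps S21 and S22, following the directional recursion and the four-direction combination described above. The uniform quantization of gray values into bins is an assumption; the patent does not fix the binning scheme:

```python
import numpy as np

def qsf(gray: np.ndarray, num_bins: int, beta: float) -> np.ndarray:
    """Four-way sensitive filter values G_u(b) for every pixel u.

    Returns an array of shape (H, W, B). Each directional component
    follows the recursion G_u = beta * G_{u-1} + V(Y_u, b); the four
    components are summed and 3 * V(Y_u, b) is subtracted because each
    component counts the pixel u itself once.
    """
    h, w = gray.shape
    # V(Y_u, b): one-hot bin membership of each pixel's gray value.
    bins = np.clip((gray.astype(int) * num_bins) // 256, 0, num_bins - 1)
    V = np.zeros((h, w, num_bins))
    V[np.arange(h)[:, None], np.arange(w)[None, :], bins] = 1.0

    def scan(axis: int, reverse: bool) -> np.ndarray:
        """One directional component via the recursion (3-2)."""
        comp = np.zeros_like(V)
        order = range(V.shape[axis])
        if reverse:
            order = reversed(list(order))
        prev = None
        for idx in order:
            sl = [slice(None)] * 3
            sl[axis] = idx
            cur = V[tuple(sl)].copy()
            if prev is not None:
                cur += beta * prev
            comp[tuple(sl)] = cur
            prev = cur
        return comp

    total = (scan(1, False) + scan(1, True)      # left and right scans
             + scan(0, False) + scan(0, True))   # up and down scans
    return total - 3.0 * V                       # formula (3-3)
```

On a 1 x 2 image the total weight of each pixel is 1 + β, matching the per-direction normalization factors combined as in (3-3).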
Step S23: and normalizing the four-way sensitive filter value of each pixel point on each slot of the histogram H to obtain a normalization factor of each pixel point.
Since the histogram H generally needs to be normalized, a summation operation is performed over all bins to obtain the normalization factor for each pixel.
Wherein, the normalization factor m of the u-th pixel point u The method comprises the following steps:
where β ε (0, 1) is a control parameter representing the falling factor from pixel u to v, |u-v| is the spatial distance between pixels u and v. b represents the bin, the gray value Y of the v-th pixel point v ,V(Y v B) a gray value distribution function representing the u-th pixel point, G u (b) Representing the four-way sensitive filter value of the u-th pixel at the b-th bin of the histogram H. B is an integer between 1 and dimension B.
In the present embodiment, m is calculated recursively, particularly following the methods of equation (3-2) and equation (3-3) u . Normalization factor m of the u-th pixel point u The calculation method of (2) is as follows:
wherein, the liquid crystal display device comprises a liquid crystal display device,in lambda directionNormalized factor, λ, is direction, β e (0, 1) is a control parameter.
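The recursive computation of the normalization factor can be sketched per direction. The recursion m^λ_u = β·m^λ_{u−1} + 1 used below is a reconstruction (an assumption) obtained by summing formula (3-2) over all bins; it reduces to a geometric series:

```python
def direction_norm_factors(n: int, beta: float) -> list:
    """Per-direction normalization factors m^lambda_u for u = 1..n,
    via the recursion m_u = beta * m_{u-1} + 1 with m_0 = 0."""
    m, prev = [], 0.0
    for _ in range(n):
        prev = beta * prev + 1.0
        m.append(prev)
    return m
```

For pixel u this equals (1 − β^u) / (1 − β), the total exponentially decayed weight of all pixels from the boundary up to u.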
Step S24: the normalization factor of each pixel point is adopted to obtain the difference value of the four-way sensitive filter between the two local areas taking the ith pixel point and the vth pixel point as the center, and the method specifically comprises the following steps:
using the normalization factor m u Normalizing the four-way sensitive filter value of each pixel point, and normalizing the four-way sensitive filter value G of the ith pixel point and the ith pixel point on the b th slot of the histogram H u And G v The sum of the differences of the cumulative distribution histograms of the (b) is taken as the difference of the four-way sensitive filter between the two local areas centered by the (u) th pixel and the (v) th pixel.
The difference value of the four-way sensitive filter between the two local areas with the u pixel point and the v pixel point as centers can be calculated as:
in the formula (3-5),and->Respectively represent normalized four-way sensitive filter values G u And G v Is defined as: /> And->Respectively representing the normalized nth pixel point and the nth pixel point in the gray image Y v On the b-th bin of the histogram H of (2)Is a four-way sensitive filter value G u And G v Is described.
Step S25: illumination consistent features are extracted based on a four-way sensitive filter (QSF).
This is a new image transformation in which the pixel values do not change with changes in illumination.
Step S25 specifically comprises the following steps:
Step S251: obtaining the transformation formula relating the gray values of the pixels of the image to be measured before and after an affine illumination transformation:

Y′_u = A_u · Y_u + E_u    (3-6)

where Y_u and Y′_u are the gray values of the pixel of the image to be measured before and after the affine illumination transformation, and A_u and E_u are the multiplicative and additive parameters of the affine transformation at pixel u.
Step S252: taking the integral of the four-way sensitive filter value G_u over the interval [b_u − r_u, b_u + r_u] as the illumination-consistency feature ζ_u before the affine illumination transformation, and computing the illumination-consistency feature ζ′_u after the transformation.
The illumination-consistency feature before the affine illumination transformation is:

ζ_u = Σ_{b = b_u − r_u}^{b_u + r_u} G_u(b)    (3-7)

where b_u is the bin containing the gray value Y_u of the pixel of the image to be measured before the affine illumination transformation, and r_u controls the width of the integration interval.
If the interval width r_u changes linearly with the illumination, the new interval width is r′_u = A_u · r_u. Analogously to formula (3-7), the illumination-consistency feature ζ′_u after the affine illumination transformation equals the integral of the four-way sensitive filter value over the new interval:

ζ′_u = Σ_{b = b′_u − r′_u}^{b′_u + r′_u} G′_u(b)    (3-8)
step S26: optimizing the illumination uniformity characteristic ζ u
Further, it is assumed that the change in illumination intensity is locally smooth, so that the amount of change is similar within a local region. In this case, if the quantization error is ignored, ζ_u is exactly equal to ζ′_u. This means that ζ_u is invariant to affine illumination transformations and can serve as an illumination-invariant feature (Illumination Invariant Features, IIF). The deduction also holds when formula (3-7) is integrated from the 1st bin to the B-th bin. However, to further reduce the quantization error, a soft smoothing term is introduced here to optimize the illumination-consistency feature ζ_u.
The optimized illumination-consistency feature ζ′_u is obtained by applying the soft smoothing term to the integration, where the interval width is r_u, b_u denotes the bin containing Y_u, and G_u(b) is the four-way sensitive filter value of the u-th pixel at the b-th bin of the histogram H, with b an integer between 1 and B.
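The hard-interval feature of formula (3-7) can be sketched as a plain sum over bins. The soft-smoothed variant of step S26 is not spelled out in the text, so only the hard interval is shown here:

```python
import numpy as np

def illumination_feature(Gu, b_u: int, r_u: int) -> float:
    """zeta_u: integral of the QSF values over [b_u - r_u, b_u + r_u],
    clamped to the valid bin range (formula (3-7))."""
    Gu = np.asarray(Gu, dtype=float)
    lo = max(0, b_u - r_u)
    hi = min(len(Gu) - 1, b_u + r_u)
    return float(Gu[lo:hi + 1].sum())
```
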
Step S3: according to the illumination consistent characteristic zeta u And carrying out binarization processing on each pixel point of the image to be detected to obtain a plurality of connected areas so as to furthest reserve the interested part in the image.
Wherein, when binarization processing is carried out, the communication area is illumination consistent characteristic zeta u And a connected region composed of pixel points higher than a threshold value. In this embodiment, the threshold value of the binarization process is preferably 168, i.e., the illumination is consistent with the feature ζ u The pixel value of less than 168 pixels is set to 0 (black), and the illumination is consistent with the characteristic ζ u The pixel value of the pixel point of 168 or more is 255 (white).
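The binarization of step S3 with the 168 threshold, as a one-line sketch:

```python
import numpy as np

def binarize(zeta, threshold: float = 168.0) -> np.ndarray:
    """Pixels whose illumination-consistency feature reaches the
    threshold become 255 (white); all others become 0 (black)."""
    return np.where(np.asarray(zeta) >= threshold, 255, 0).astype(np.uint8)
```
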
Step S4: as shown in fig. 2A-2B, the image to be measured is subjected to open operation processing to eliminate the interference of special objects in the image, a plurality of connected areas are obtained through calculation, and the open operation result of the connected areas, which is larger than 60% of the area of the largest connected area, is reserved. The 60% threshold is obtained through experimental results, and the results show that the threshold is set to be larger than 60% of the area of the large communication area, and the effect is good.
Wherein the plurality of connected regions are calculated by adopting a breadth-first connected region calculation algorithm (BFS).
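A sketch of the step S4 region filtering with a breadth-first search, as the text describes. The morphological opening itself is omitted here, and 4-connectivity is an assumption:

```python
import numpy as np
from collections import deque

def connected_regions(binary: np.ndarray) -> list:
    """Breadth-first search for 4-connected white (non-zero) regions."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    regions = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] == 0 or seen[sy, sx]:
                continue
            queue, region = deque([(sy, sx)]), []
            seen[sy, sx] = True
            while queue:
                y, x = queue.popleft()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            regions.append(region)
    return regions

def keep_large_regions(binary: np.ndarray, ratio: float = 0.6) -> np.ndarray:
    """Keep only regions larger than `ratio` of the largest region's area."""
    regions = connected_regions(binary)
    if not regions:
        return binary.copy()
    max_area = max(len(r) for r in regions)
    out = np.zeros_like(binary)
    for r in regions:
        if len(r) > ratio * max_area:
            for y, x in r:
                out[y, x] = 255
    return out
```
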
Step S5: and detecting the corner points of the parking space lines based on an image skeletonization algorithm to further remove interference, thereby realizing the identification of the parking space lines.
The image skeletonization algorithm is realized through function call.
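The patent realizes skeletonization through a function call without naming the algorithm; the classic Zhang-Suen thinning is a self-contained stand-in, shown here as an assumed substitute:

```python
import numpy as np

def zhang_suen_thin(img: np.ndarray) -> np.ndarray:
    """Skeletonize a 0/1 binary image with Zhang-Suen thinning."""
    skel = (img > 0).astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, skel.shape[0] - 1):
                for x in range(1, skel.shape[1] - 1):
                    if skel[y, x] == 0:
                        continue
                    # Neighbours P2..P9, clockwise from the pixel above.
                    p = [skel[y - 1, x], skel[y - 1, x + 1], skel[y, x + 1],
                         skel[y + 1, x + 1], skel[y + 1, x], skel[y + 1, x - 1],
                         skel[y, x - 1], skel[y - 1, x - 1]]
                    b = sum(p)                      # non-zero neighbours
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))      # 0->1 transitions
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0:
                        if p[0] * p[2] * p[4] == 0 and p[2] * p[4] * p[6] == 0:
                            to_delete.append((y, x))
                    else:
                        if p[0] * p[2] * p[6] == 0 and p[0] * p[4] * p[6] == 0:
                            to_delete.append((y, x))
            for y, x in to_delete:
                skel[y, x] = 0
            changed = changed or bool(to_delete)
    return skel
```

The skeleton is always a subset of the original foreground, so corner detection on it cannot introduce points outside the detected parking space lines.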
By proposing the four-way sensitive filter (QSF) and the corresponding illumination-consistency feature, and introducing them as local descriptors into the parking space line detection method, the invention is robust to illumination changes in the scene.
Experimental results
In the experiments, 200 parking space images under different conditions were selected for verification, and the parking space line detection method was compared with the traditional Hough method under the same conditions; recall and recognition rate were used to estimate the efficiency of the algorithms. The results shown in fig. 3A-3B indicate that the Hough-transform method may detect some irrelevant lines and is slower, while the accuracy of the method of the invention is higher.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit its scope; various modifications can be made to the embodiment described above. All simple, equivalent changes and modifications made in accordance with the claims and the specification of the present application fall within the scope of the patent claims. Matters not described in detail in the present invention belong to the conventional art.

Claims (5)

1. The parking space line detection method based on illumination consistency is characterized by comprising the following steps of:
step S1: denoising the image to be detected by adopting filtering;
step S2: processing the denoised image to be detected by using a four-way sensitive filter, and extracting illumination consistent characteristics of each pixel point of the image to be detected;
step S3: according to the illumination consistent characteristic zeta u Performing binarization processing on each pixel point of the image to be detected to obtain a plurality of communication areas;
step S4: performing a morphological opening operation on the image to be detected, calculating the opening result of each connected region, and retaining the connected regions whose area is larger than 60% of the area of the largest connected region;
the step S2 includes:
step S21: calculating the components, along 4 different directions, of the four-way sensitive filter value of each pixel point in the denoised image to be detected;
the component of the four-way sensitive filter value of the u-th pixel along one direction lambda is as follows:
wherein Y is v Gray representing the v-th pixelA degree value, H represents a gray value Y of the v-th pixel point v Is a histogram of (1); b is the number of bins of the histogram H, β ε (0, 1) represents the falling factor from the u-th pixel to the v-th pixel, |u-v| is the spatial distance between the u-th pixel and the v-th pixel; v (Y) v B) represents the gray value distribution function of the v-th pixel, when the gray value Y of the v-th pixel v When belonging to the b-th groove, V (Y v The value of b) is 1, otherwise V (Y) v B) has a value of 0, B being an integer between 1 and dimension B;
step S22: according to the components of the four-way sensitive filter value of each pixel point along the 4 different directions, accumulating the pixel weights to obtain the four-way sensitive filter value of each pixel point on each bin of the histogram H in the image to be detected;
step S23: normalizing the four-way sensitive filter value of each pixel point on each bin of the histogram H to obtain the normalization factor of each pixel point;
step S24: acquiring the four-way sensitive filter difference between the two local regions centered on the u-th pixel point and the v-th pixel point by using the normalization factors of the pixel points;
step S25: extracting illumination consistent characteristics;
in the step S21, the component G_u^λ(b) of the four-way sensitive filter value of the u-th pixel point along one of the directions λ is calculated by the integral histogram method from the previous pixel point along that direction:
G_u^λ(b) = V(Y_u, b) + β · G_{u−1}^λ(b)
wherein β ∈ (0, 1) is the falling factor; V(Y_u, b) represents the gray value distribution function of the u-th pixel point: when the gray value Y_u of the u-th pixel point belongs to the b-th bin, the value of V(Y_u, b) is 1, otherwise the value of V(Y_u, b) is 0; G_{u−1}^λ(b) is the component of the four-way sensitive filter value of the (u−1)-th pixel point along the direction λ;
in the step S22, the four-way sensitive filter value of the u-th pixel point on the b-th bin of the histogram H is:
G_u(b) = V(Y_u, b) + β · Σ_{λ=1}^{4} G_{u−1}^λ(b)
wherein G_u(b) represents the four-way sensitive filter value of the u-th pixel point on the b-th bin of the histogram H, and G_{u−1}^λ(b) is the component of the four-way sensitive filter value of the (u−1)-th pixel point along the direction λ; V(Y_u, b) represents the gray value distribution function of the u-th pixel point: when the gray value Y_u of the u-th pixel point belongs to the b-th bin, the value of V(Y_u, b) is 1, otherwise the value of V(Y_u, b) is 0;
in the step S23, the normalization factor m_u of the u-th pixel point is:
m_u = Σ_{b=1}^{B} G_u(b)
wherein m_u is the normalization factor, accumulated recursively along each direction λ with the control parameter β ∈ (0, 1);
the step S24 includes: normalizing the four-way sensitive filter value of each pixel point by adopting the normalization factor, and enabling the normalized nth pixel point and the normalized nth pixel point to be in the b th pixel point of the histogram HFour-way sensitive filter value G on a slot u And G v Taking the sum of the differences of the cumulative distribution histograms as the difference of the four-way sensitive filter between the two local areas taking the ith pixel point and the v-th pixel point as the center;
the step S25 includes:
step S251: acquiring the transformation formula of the gray values of the pixel points of the image to be detected before and after an affine illumination transformation;
step S252: taking the integral value of the four-way sensitive filter value G_u over the interval [b_u − r_u, b_u + r_u] as the illumination-consistency feature before the affine illumination transformation, and calculating the illumination-consistency feature after the affine illumination transformation.
2. The parking space line detection method based on illumination consistency according to claim 1, wherein in the step S1, the filtering adopted includes Gaussian filtering and guided filtering.
3. The parking space line detection method based on illumination consistency according to claim 1, wherein the step S2 further includes a step S26: introducing a soft smoothing term to optimize the illumination-consistency feature.
4. The parking space line detection method based on illumination consistency according to claim 1, wherein in the step S3, when the binarization is performed, each connected region is composed of pixel points whose illumination-consistency feature ζ_u is higher than a threshold value.
5. The parking space line detection method based on illumination consistency according to claim 1, further comprising step S5: detecting the corner points of the parking space lines based on an image skeletonization algorithm.
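Steps S21-S23 of claim 1 can be sketched as four directional recursive passes that build a per-pixel soft histogram. The following is a minimal NumPy reading of the recursion G_u^λ(b) = V(Y_u, b) + β·G_{u−1}^λ(b), under the assumption that the 4 directions are left-to-right, right-to-left, top-to-bottom and bottom-to-top scans; the function name, bin count and β are illustrative choices, not taken from the patent:

```python
import numpy as np

def qsf_histograms(gray, num_bins=16, beta=0.9):
    """Per-pixel four-way sensitive filter values G_u(b), normalized by m_u.

    Each directional pass computes G^lam_u(b) = V(Y_u, b) + beta * G^lam_{u-1}(b).
    Summing the 4 passes and subtracting 3*V keeps the center pixel counted once,
    which equals G_u(b) = V(Y_u, b) + beta * sum_lam G^lam_{u-1}(b).
    """
    h, w = gray.shape
    # V(Y_u, b): one-hot bin membership of each pixel's gray value
    bins = np.clip(gray.astype(np.int64) * num_bins // 256, 0, num_bins - 1)
    V = np.zeros((h, w, num_bins))
    V[np.arange(h)[:, None], np.arange(w)[None, :], bins] = 1.0

    G = np.zeros_like(V)
    # Four directions: left->right, right->left, top->bottom, bottom->up
    for axis, flipped in [(1, False), (1, True), (0, False), (0, True)]:
        Vd = np.flip(V, axis=axis) if flipped else V
        Gd = Vd.copy()
        for i in range(1, Vd.shape[axis]):
            if axis == 1:
                Gd[:, i, :] = Vd[:, i, :] + beta * Gd[:, i - 1, :]
            else:
                Gd[i, :, :] = Vd[i, :, :] + beta * Gd[i - 1, :, :]
        G += np.flip(Gd, axis=axis) if flipped else Gd
    G -= 3.0 * V  # remove the triple re-count of the center pixel

    m = G.sum(axis=2, keepdims=True)  # normalization factor m_u = sum_b G_u(b)
    return G / np.maximum(m, 1e-12)
```

The illumination-consistency feature ζ_u would then be derived from these normalized per-pixel histograms (steps S24-S25); that part depends on details of the affine illumination transformation not fully recoverable from the claim text and is omitted here.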
CN202010440287.5A 2020-05-22 2020-05-22 Parking space line detection method based on illumination consistency Active CN111611930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010440287.5A CN111611930B (en) 2020-05-22 2020-05-22 Parking space line detection method based on illumination consistency


Publications (2)

Publication Number Publication Date
CN111611930A CN111611930A (en) 2020-09-01
CN111611930B true CN111611930B (en) 2023-10-31

Family

ID=72195937


Country Status (1)

Country Link
CN (1) CN111611930B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112417993B (en) * 2020-11-02 2021-06-08 湖北亿咖通科技有限公司 Parking space line detection method for parking area and computer equipment

Citations (7)

Publication number Priority date Publication date Assignee Title
WO2003001810A1 (en) * 2001-06-21 2003-01-03 Wespot Ab Invariant filters
CN103065494A (en) * 2012-04-12 2013-04-24 华南理工大学 Free parking space detection method based on computer vision
EP2759959A2 (en) * 2013-01-25 2014-07-30 Ricoh Company, Ltd. Method and system for detecting multi-lanes
CN105608429A (en) * 2015-12-21 2016-05-25 重庆大学 Differential excitation-based robust lane line detection method
CN107895151A (en) * 2017-11-23 2018-04-10 长安大学 Method for detecting lane lines based on machine vision under a kind of high light conditions
CN108229247A (en) * 2016-12-14 2018-06-29 贵港市瑞成科技有限公司 A kind of mobile vehicle detection method
CN109785354A (en) * 2018-12-20 2019-05-21 江苏大学 A kind of method for detecting parking stalls based on background illumination removal and connection region

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
TWI700017B (en) * 2018-10-17 2020-07-21 財團法人車輛研究測試中心 Vehicle detecting method, nighttime vehicle detecting method based on dynamic light intensity and system thereof


Non-Patent Citations (2)

Title
Zhang Yuewang. A parking space line recognition method based on improved Hough transform. Computer Engineering and Design, 2017, (11), full text. *
Gong Jianwei; Wang Anshuai; Xiong Guangming; Liu Wei; Chen Huiyan. An adaptive dynamic-window method for high-speed lane line detection. Transactions of Beijing Institute of Technology, 2008, (06), full text. *


Similar Documents

Publication Publication Date Title
CN104408460B (en) A kind of lane detection and tracking detection method
CN103077384B (en) A kind of method and system of vehicle-logo location identification
CN104732235A (en) Vehicle detection method for eliminating night road reflective interference
WO2020220663A1 (en) Target detection method and apparatus, device, and storage medium
CN106778551B (en) Method for identifying highway section and urban road lane line
EP2168079A1 (en) Method and system for universal lane boundary detection
CN107657209B (en) Template image registration mechanism based on finger vein image quality
CN115861325B (en) Suspension spring defect detection method and system based on image data
CN116188328B (en) Parking area response lamp linked system based on thing networking
CN111062293A (en) Unmanned aerial vehicle forest flame identification method based on deep learning
CN109543686B (en) Character recognition preprocessing binarization method based on self-adaptive multi-threshold
CN115511907B (en) Scratch detection method for LED screen
CN111553214A (en) Method and system for detecting smoking behavior of driver
CN111611930B (en) Parking space line detection method based on illumination consistency
CN114913194A (en) Parallel optical flow method moving target detection method and system based on CUDA
CN111652033A (en) Lane line detection method based on OpenCV
CN107832732B (en) Lane line detection method based on treble traversal
CN107977608B (en) Method for extracting road area of highway video image
CN116563768B (en) Intelligent detection method and system for microplastic pollutants
CN113053164A (en) Parking space identification method using look-around image
Wang et al. Lane detection algorithm based on density clustering and RANSAC
CN115100510B (en) Tire wear degree identification method
CN116469061A (en) Highway obstacle detection and recognition method
CN115994870A (en) Image processing method for enhancing denoising
CN110647843B (en) Face image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant