CN111611930A - Parking space line detection method based on illumination consistency - Google Patents
Parking space line detection method based on illumination consistency
- Publication number
- CN111611930A CN111611930A CN202010440287.5A CN202010440287A CN111611930A CN 111611930 A CN111611930 A CN 111611930A CN 202010440287 A CN202010440287 A CN 202010440287A CN 111611930 A CN111611930 A CN 111611930A
- Authority
- CN
- China
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/586—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a parking space line detection method based on illumination consistency, which comprises: denoising an image by filtering; processing the image with a four-direction sensitive filter and extracting the illumination consistency feature of each pixel point; performing binarization processing on each pixel point according to the illumination consistency feature to obtain a plurality of connected regions; and performing an opening operation on the image, computing the opening operation result of each connected region, and retaining only the connected regions whose area is greater than 60% of the area of the largest connected region. The method uses a one-dimensional four-direction sensitive filter to calculate the contribution values of the neighborhood pixels from the four directions around each pixel, thereby applying the illumination consistency feature to parking space line detection and removing the influence of illumination on the original image; by binarizing the image it preserves the parts of interest to the maximum extent, so that the parking space line is accurately retained under different illumination conditions.
Description
Technical Field
The invention belongs to the field of automatic driving of vehicles, and relates to a parking space line detection method.
Background
With the rapid development of technology, automatic driving has become a research hotspot in the field of artificial intelligence, intelligent parking configurations are increasingly common, and many high-tech products are fitted to automobiles, the automatic parking system being one of them. An automatic parking system improves parking convenience, and can effectively relieve the difficulty of parking for drivers who have a vague grasp of parking maneuvers or are anxious about them, thereby shortening parking time and improving traffic network efficiency.
When an automatic parking system is established, there are many key problems to be solved, and how to quickly and accurately detect and locate parking spaces around a vehicle is one of them. The sensors used for detecting the parking spaces mainly comprise a visual sensor and a distance measuring sensor.
In the traditional automatic parking system scheme, parking space sensing is mainly performed by ultrasonic radar, which can easily identify adjacent vehicles. However, the working principle of ultrasonic radar imposes limitations: when no adjacent vehicle is present, an idle parking space cannot be found; the accuracy of the detected space depends on the positions of the adjacent vehicles; and the adaptability to different parking scenes is poor.
In contrast, vision-sensor-based approaches can identify parking spaces more accurately because their identification process does not depend on the presence of neighboring vehicles. In recent years, with the rapidly growing demand for automatic parking systems, many parking space line detection methods based on vision sensors have been proposed; however, most previous methods do not consider the influence of different illumination conditions on parking space line detection. The parking space line detection method of the present invention is directed at solving the parking space line identification problem under different illumination conditions.
Disclosure of Invention
The invention aims to provide a parking space line detection method based on consistent illumination, which introduces the characteristic of consistent illumination into a parking space line detection system of a parking lot so as to expect to achieve higher parking space line detection accuracy.
In order to achieve the above object, the present invention provides a parking space line detection method based on illumination consistency, which includes:
S1: denoising an image to be detected by filtering;
S2: processing the denoised image to be detected with a four-direction sensitive filter, and extracting the illumination consistency feature of each pixel point of the image to be detected;
S3: performing binarization processing on each pixel point of the image to be detected according to the illumination consistency feature ζ_u, to obtain a plurality of connected regions;
S4: performing an opening operation on the image to be detected, computing the opening operation result of each connected region, and retaining only the connected regions whose area is greater than 60% of the area of the largest connected region.
In step S1, the filtering used includes gaussian filtering and guided filtering.
The step S2 includes:
step S21: calculating the components of the four-direction sensitive filter value of each pixel point in the denoised image to be detected along 4 different directions;
the component of the four-direction sensitive filter value of the u-th pixel point along one direction lambda is as follows:
wherein, YvExpressing the gray value of the v-th pixel point, H expressing the gray value Y of the v-th pixel pointvB is the number of slots of histogram H, β∈ (0, 1) represents the reduction factor from the u-th pixel to the v-th pixel, u-v is the th pixelThe spatial distance between the u pixel points and the v pixel point; v (Y)vB) expressing the gray value distribution function of the v-th pixel point, when the gray value Y of the v-th pixel pointvWhen it belongs to the b-th groove, V (Y)vB) is 1, otherwise V (Y)vB) has a value of 0, B being an integer between 1 and dimension B;
step S22: accumulating, for each pixel point, the weighted components of the four-direction sensitive filter value along the 4 different directions to obtain the four-direction sensitive filter value of each pixel point of the image to be detected in each bin of the histogram H;
step S23: normalizing the four-direction sensitive filter value of each pixel point in each bin of the histogram H to obtain the normalization factor of each pixel point;
step S24: acquiring a difference value of a four-way sensitive filter between two local areas with the u-th pixel point and the v-th pixel point as centers by adopting the normalization factor of each pixel point;
step S25: extracting illumination consistency characteristics;
in step S21, the component of the four-way sensitive filter value of the u-th pixel along one direction λAdopting an integral histogram method, and calculating the component of the four-direction sensitive filter value of the u-th pixel point along one direction lambda on the basis of the previous pixel pointComprises the following steps:
wherein β∈ (0, 1) is a reduction factor, V (Y)vB) a gray value distribution function representing the u-th pixel point, and a gray value Y of the u-th pixel pointuWhen it belongs to the b-th groove, V (Y)uB) is 1, otherwise V (Y)uThe value of b) is 0,is the component of the four-way sensitive filter value on the u-1 th pixel point along one of the directions lambda;
in step S22, the four-way sensitive filter value of the u-th pixel point in the b-th bin of the histogram H is:
wherein G isu(b) Representing the four-way sensitive filter value of the u-th pixel point at the b-th bin of the histogram H,is the component of the four-way sensitive filter value on the u-1 th pixel point along one of the directions lambda; v (Y)uB) expressing the gray value distribution function of the u-th pixel point, when the gray value Y of the u-th pixel pointuWhen it belongs to the b-th groove, V (Y)uB) is 1, otherwise V (Y)uAnd b) has a value of 0.
In step S23, the normalization factor m_u of the u-th pixel point is:

m_u = Σ_{b=1}^{B} G_u(b)

wherein m_u is the normalization factor, λ denotes the direction, and β ∈ (0, 1) is a control parameter; the sum can be accumulated per direction λ and combined in the same way as the filter values themselves.
The step S24 includes: normalizing the four-direction sensitive filter value of each pixel point with its normalization factor, and taking the sum, over all bins of the histogram H, of the differences between the cumulative distribution histograms of the normalized filter values G_u and G_v as the difference value of the four-direction sensitive filter between the two local areas centered at the u-th and the v-th pixel point.
The step S25 includes:
step S251: obtaining a transformation formula of gray values of pixel points of the image to be detected before and after affine illumination transformation;
step S252: taking the integral of the four-direction sensitive filter value G_u over the interval [b_u − r_u, b_u + r_u] as the illumination consistency feature ζ_u before the affine illumination transformation, and calculating the illumination consistency feature ζ′_u after the affine illumination transformation.
The step S2 further includes a step S26: introducing a soft smoothing term to optimize the illumination consistency feature ζ_u.
In step S3, when the binarization processing is performed, a connected region is a region formed by the pixel points whose illumination consistency feature ζ_u is higher than a threshold value.
In step S4, the plurality of connected regions are calculated by a breadth-first connected-region calculation algorithm.
The parking space line detection method based on illumination consistency further comprises a step S5: detecting the corner points of the parking space line based on an image skeletonization algorithm.
According to the parking space line detection method based on illumination consistency, a one-dimensional four-direction sensitive filter calculates the contribution values of the neighborhood pixels from the four directions around each pixel, so that the illumination consistency feature is applied to parking space line detection and the influence of illumination on the original image is removed; binarization of the image further reduces the influence of different illumination conditions on parking space line detection and preserves the parts of interest of the image to the maximum extent, so that the parking space line is accurately retained under different illumination conditions. In addition, the method eliminates the interference of spurious objects in the image by the opening operation, calculates the area of each connected region and retains only those regions whose area exceeds 60% of the area of the largest connected region, and then performs corner detection of the parking space line based on an image skeletonization algorithm to further remove interference, thereby accurately identifying the parking space line with high accuracy. Moreover, the method first denoises the image with Gaussian filtering and guided filtering. The connected regions exploit the illumination consistency feature and can remove spurious points in the parking space line.
Drawings
Fig. 1 is a flowchart of a parking space line detection method according to an embodiment of the present invention.
Fig. 2A is a schematic diagram of the opening operation result in step S4 of the parking space line detection method shown in fig. 1.
Fig. 2B is a diagram of the result after retaining, in step S4 of the parking space line detection method shown in fig. 1, the connected regions whose area is greater than 60% of the area of the largest connected region.
Fig. 3A to 3B compare the results of the parking space line detection method of the present invention and the conventional Hough method: fig. 3A shows the result of the conventional Hough method, and fig. 3B shows the result of the parking space line detection method of the present invention.
Detailed Description
The invention discloses a parking space line detection method based on illumination consistency, which is usually arranged on an image processing unit of a vehicle and used for parking space line detection, wherein a related software system comprises the following parts: visual Studio 2015 is used as a development environment, and an OpenCV library is called to complete the implementation of the algorithm.
As shown in fig. 1, the parking space line detection method based on illumination consistency of the present invention includes the following steps:
step S1: and denoising the image to be detected by adopting filtering.
In this step S1, the employed filtering includes gaussian filtering and guided filtering.
Gaussian blur is the operation of convolving the (gray-scale) image I to be measured with a Gaussian kernel. The convolution formula for Gaussian blur is:

I_σ = I * G_σ

wherein * denotes the convolution operation and G_σ is a two-dimensional Gaussian kernel with standard deviation σ, defined as:

G_σ(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
the guiding filtering is to obtain the ith pixel in the image q to be measured by averaging all pixels in a window w centered on the ith pixel in the image p, and the formula corresponding to the guiding filtering is as follows:
Step S2: processing the de-noised image to be detected by using a four-way sensitive filter (QSF), and extracting the illumination consistency characteristics of all pixel points of the image to be detected;
a four-way Sensitive filter (QSF) is a new local feature proposed by the present invention, and the feature is based on a common local histogram, and is used to study the influence of all pixel points in four directions in an image to obtain dense features on the image. Based on the four-way sensitive filter, a new Illumination Uniformity Feature (IUF) can be further provided, so that when the Illumination component of the pixel block is changed, the Illumination uniformity Feature can be kept unchanged, and the problem of unreliability to Illumination change is effectively solved.
The step S2 specifically includes:
step S21: calculating the components of the QSF value of each pixel point in the denoised image to be detected along 4 different directions; the four-direction sensitive filter (QSF) value of each pixel point in the image to be detected can then be obtained from the components of its QSF value along the 4 different directions.
The following describes how to calculate the component of the pixel's quadriversal-sensitive filter value in one of the directions.
The component of the four-direction sensitive filter value of the u-th pixel point along one direction λ is:

Q_u^λ(b) = Σ_{v ∈ h^λ(u)} β^{|u−v|} · V(Y_v, b)    (3-1)

wherein h^λ(u) denotes the one-dimensional image formed by the straight line selected along the direction λ in the denoised image to be detected, whose pixel points comprise all pixel points from the boundary of the image to the u-th pixel point; λ denotes the direction and is an integer in the interval [1, 4], the four values representing the up, down, left and right directions respectively; Y_v denotes the gray value of the v-th pixel point; H denotes the histogram of the gray values Y_v and is a vector of dimension B, B being the number of bins of H (a histogram can generally be represented by a column vector, each entry of which is one bin); β ∈ (0, 1) is a control parameter denoting the decay factor from the u-th pixel point to the v-th pixel point; |u − v| is the spatial distance between the u-th and the v-th pixel point; V(Y_v, b) denotes the gray value distribution function of the v-th pixel point: V(Y_v, b) = 1 when the gray value Y_v falls in the b-th bin, and V(Y_v, b) = 0 otherwise, b being an integer between 1 and B.
In this embodiment, the component Q_u^λ(b) of the four-direction sensitive filter value of the u-th pixel point along one direction λ is computed by the integral histogram method, on the basis of the previous pixel point:

Q_u^λ(b) = β · Q_{u−1}^λ(b) + V(Y_u, b)    (3-2)

wherein β ∈ (0, 1) is the decay factor; V(Y_u, b) denotes the gray value distribution function of the u-th pixel point, with V(Y_u, b) = 1 when the gray value Y_u falls in the b-th bin and V(Y_u, b) = 0 otherwise; Q_{u−1}^λ(b) is the component of the four-direction sensitive filter value at the (u−1)-th pixel point along the direction λ.
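The recursive integral-histogram update along one scan direction can be sketched as follows; the bin count of 16 and β = 0.9 are illustrative assumptions, not values specified by the patent, and `qsf_component` is a hypothetical helper name:

```python
import numpy as np

def qsf_component(line, num_bins=16, beta=0.9):
    """Q_u(b) = beta * Q_{u-1}(b) + V(Y_u, b) along one scan line of gray values."""
    bins = (line.astype(np.int64) * num_bins) // 256   # bin index of each gray value
    Q = np.zeros((len(line), num_bins))
    for u, b in enumerate(bins):
        if u > 0:
            Q[u] = beta * Q[u - 1]      # exponentially decayed history of the line
        Q[u, b] += 1.0                  # V(Y_u, b) contribution of the pixel itself
    return Q
```

Each row `Q[u]` is the directional component for pixel u; running the same loop over every image row and column (in both orders) gives the four directions.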
Step S22: and accumulating the weights of the pixel points according to the components of the four-way sensitive filter value of each pixel point along 4 different directions to obtain the four-way sensitive filter (QSF) value of each pixel point in the image to be detected on each groove of the histogram H.
Similar to the local histogram, the component Q_u^λ(b) of the four-way sensitive filter value of the u-th pixel point along one of the directions λ can be regarded as the integral of the contribution values of all pixel points of the one-dimensional image, each contribution being weighted by a factor that decreases exponentially as the spatial distance between the pixel points increases.
Wherein, the value of the four-way sensitive filter of the u-th pixel point on the b-th slot of the histogram H is:
wherein G isu(b) Representing the four-way sensitive filter value of the u-th pixel point at the b-th bin of the histogram H,is the component of the four-way sensitive filter value on the u-1 th pixel point along one of the directions lambda; v (Y)uB) expressing the gray value distribution function of the u-th pixel point, when the gray value Y of the u-th pixel pointuWhen it belongs to the b-th groove, V (Y)uB) is 1, otherwise V (Y)uAnd b) has a value of 0.
The vector sum of the components of the 4 directions is calculated as shown in equation (3-3). Since each QSF component takes the pixel point u itself into account, 3 · V(Y_u, b) must be subtracted when performing the accumulation. The QSF is calculated from the gray values of the image's pixel points, so the image needs to be preprocessed before the QSF is calculated: the denoised image is converted into a gray-scale image, and the QSF is then computed on that basis.
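Combining the four directional scans with the 3 · V(Y_u, b) correction, a per-pixel QSF sketch might look like this (the normalization factor of the next step is then simply the sum over bins); function names and the default bin count and β are assumptions for illustration:

```python
import numpy as np

def qsf(gray, num_bins=16, beta=0.9):
    """G_u(b): sum of the four directional components minus 3*V(Y_u, b)."""
    H, W = gray.shape
    bins = (gray.astype(np.int64) * num_bins) // 256
    V = np.zeros((H, W, num_bins))
    yy, xx = np.indices((H, W))
    V[yy, xx, bins] = 1.0               # one-hot indicator V(Y_u, b)
    G = -3.0 * V                        # each direction counts u itself: subtract 3 copies
    for axis in (0, 1):                 # rows (up/down) and columns (left/right)
        for reverse in (False, True):
            Q = np.zeros_like(V)
            n = V.shape[axis]
            order = range(n - 1, -1, -1) if reverse else range(n)
            prev = None
            for i in order:
                sl = (i,) if axis == 0 else (slice(None), i)
                Q[sl] = V[sl] if prev is None else beta * Q[prev] + V[sl]
                prev = sl
            G += Q
    return G

def normalization_factor(G):
    """m_u: sum of G_u(b) over all bins."""
    return G.sum(axis=-1)
```

For a constant image each pixel's filter value reduces to its own count plus the decayed counts of its four straight-line neighborhoods.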
Step S23: and carrying out normalization processing on the four-way sensitive filter value of each pixel point on each slot of the histogram H to obtain a normalization factor of each pixel point.
Since the histogram H usually needs to be normalized, a summation operation is performed on all bins to obtain the normalization factor of each pixel.
Wherein, the normalization factor m of the u-th pixel pointuComprises the following steps:
where β∈ (0, 1) is a control parameter representing the dropping factor from pixel u to v, | u-v | is the spatial distance between pixels u and v, b represents the bin (bin), the gray value Y of the v-th pixelv,V(YvB) a gray value distribution function representing the u-th pixel point, Gu(b) The four-way sensitive filter value of the u-th pixel point on the b-th slot of the histogram H is represented. B is an integer between 1 and dimension B.
In the present embodiment, m_u is specifically calculated recursively following the methods of equation (3-2) and equation (3-3). The normalization factor m_u of the u-th pixel point is calculated as:

m_u^λ = β · m_{u−1}^λ + 1,    m_u = Σ_{λ=1}^{4} m_u^λ − 3

wherein m_u^λ is the normalization factor along the direction λ, λ is the direction, and β ∈ (0, 1) is a control parameter.
Step S24: and acquiring a difference value of a four-way sensitive filter between two local areas with the u-th pixel point and the v-th pixel point as centers by adopting the normalization factor of each pixel point, wherein the method specifically comprises the following steps:
using the normalization factor m_u to normalize the four-direction sensitive filter value of each pixel point, and taking the sum, over the bins of the histogram H, of the differences between the cumulative distribution histograms of the normalized four-direction sensitive filter values G_u and G_v as the difference value of the four-direction sensitive filter between the two local areas respectively centered on the u-th and the v-th pixel point.
The difference value of the four-way sensitive filter between the two local areas respectively centered on the u-th pixel point and the v-th pixel point can be calculated as:

D(u, v) = Σ_{b=1}^{B} | Ĝ_u(b) − Ĝ_v(b) |    (3-5)

In equation (3-5), Ĝ_u(b) and Ĝ_v(b) respectively denote the cumulative distribution histograms of the normalized four-way sensitive filter values G_u and G_v, defined as:

Ĝ_u(b) = (1 / m_u) · Σ_{b′=1}^{b} G_u(b′)

and likewise for Ĝ_v(b), i.e. the cumulative distribution histograms, over the bins of the histogram H, of the normalized four-way sensitive filter values of the u-th and v-th pixel points in the gray-scale image.
Step S25: extracting illumination consistent features based on a four-way sensitive filter (QSF).
This is a new image transformation in which the pixel values do not change with the change in illumination.
The step S25 specifically includes the following steps:
step S251: and obtaining a transformation formula of gray values of pixel points of the image to be detected before and after the affine illumination transformation.
Wherein, YuAnd Y'uThe gray values of the pixel points of the image to be measured before and after the affine illumination transformation,andaffine transformation E is carried out on the pixel point uu(Au) Two parameters of (2).
Step S252: taking the integral of the four-direction sensitive filter value G_u over the interval [b_u − r_u, b_u + r_u] as the illumination consistency feature ζ_u before the affine illumination transformation, and calculating the illumination consistency feature ζ′_u after the affine illumination transformation.
Wherein the illumination uniformity characteristic ζ before affine illumination transformationuComprises the following steps:
wherein, buGray value Y of pixel point of image to be measured before affine illumination transformationuIn the groove, ruThe interval amplitude of the integration is controlled.
If the interval amplitude r_u changes linearly with the illumination, the new interval amplitude is r′_u = a_u · r_u. Similar to equation (3-7), the illumination consistency feature ζ′_u after the affine illumination transformation then equals the integral of the four-way sensitive filter value over the following new interval:

ζ′_u = Σ_{b = b′_u − r′_u}^{b′_u + r′_u} G′_u(b)

wherein b′_u is the bin in which the transformed gray value Y′_u falls.
step S26: optimizing the illumination uniformity characteristic ζu;
Further, under the assumption that the variation of the illumination intensity is locally smooth, the illumination change is nearly constant within a local area. In this case, if the quantization error is ignored, ζ_u and ζ′_u are exactly equal. This means that ζ_u is invariant under the affine illumination transformation and can be used as an illumination consistency feature. The inference also holds when equation (3-7) is integrated from the 1st bin to the B-th bin. However, to further reduce the quantization error, a soft smoothing term is introduced to optimize the illumination consistency feature ζ_u.
The optimized illumination consistency feature ζ′_u is:

ζ′_u = Σ_{b=1}^{B} w(b − b_u) · G_u(b)

wherein w(·) is the soft smoothing weight, which decays with the distance of the bin b from the bin b_u in which the gray value Y_u falls, its width being controlled by the interval amplitude r_u; G_u(b) denotes the four-way sensitive filter value of the u-th pixel point in the b-th bin of the histogram H, b being an integer between 1 and B.
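The hard interval integral of equation (3-7), and one possible soft-smoothed variant, can be sketched as follows; note that the Gaussian weighting in `iuf_soft` is an assumed form of the smoothing term (the patent only states that a soft smoothing term is introduced), and both function names are hypothetical:

```python
import numpy as np

def iuf(G_u, b_u, r_u):
    """Hard feature: integral of G_u over bins [b_u - r_u, b_u + r_u] (eq. 3-7)."""
    lo, hi = max(0, b_u - r_u), min(len(G_u) - 1, b_u + r_u)
    return float(G_u[lo:hi + 1].sum())

def iuf_soft(G_u, b_u, r_u):
    """Soft-smoothed variant: assumed Gaussian weighting of the bins around b_u."""
    b = np.arange(len(G_u))
    w = np.exp(-((b - b_u) ** 2) / (2.0 * r_u ** 2))  # decays away from b_u
    return float((G_u * w).sum())
```

The soft version replaces the sharp interval cut-off with weights that fall off smoothly, which is one way to reduce sensitivity to quantization near the interval boundary.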
Step S3: according to the illumination consistency feature ζ_u, performing binarization processing on each pixel point of the image to be detected to obtain a plurality of connected regions, so as to preserve the parts of interest of the image to the maximum extent.
When the binarization processing is performed, a connected region is formed by pixel points whose illumination consistency feature ζ_u is higher than a threshold value. In the present embodiment, the threshold of the binarization processing is preferably 168: pixel points whose illumination consistency feature ζ_u is smaller than 168 are set to 0 (black), and pixel points whose ζ_u is equal to or larger than 168 are set to 255 (white).
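With the 168 threshold stated above, the binarization step is a one-line mapping; `binarize` is a hypothetical helper name and `zeta` stands for the per-pixel illumination consistency feature array:

```python
import numpy as np

def binarize(zeta, threshold=168):
    """Pixels with feature below the threshold become 0 (black), the rest 255 (white)."""
    return np.where(zeta >= threshold, 255, 0).astype(np.uint8)
```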
Step S4: as shown in fig. 2A-2B, performing the opening operation on the image to be measured to eliminate the interference of spurious objects in the image, calculating the plurality of connected regions, and retaining only those connected regions whose area is greater than 60% of the area of the largest connected region. The 60% threshold was determined experimentally; the results show that retaining the regions larger than 60% of the largest connected region's area gives the best effect.
The plurality of connected regions are obtained by calculation through a breadth-first connected region calculation algorithm (BFS).
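The breadth-first labeling and the 60% area rule can be sketched as follows; the function names are assumptions, and 4-connectivity is an assumed choice since the patent does not state the connectivity used:

```python
import numpy as np
from collections import deque

def bfs_regions(binary):
    """Label 4-connected foreground regions by breadth-first search; return labels and areas."""
    H, W = binary.shape
    labels = np.zeros((H, W), dtype=np.int32)
    areas = []
    for sy in range(H):
        for sx in range(W):
            if binary[sy, sx] and labels[sy, sx] == 0:
                label = len(areas) + 1
                labels[sy, sx] = label
                queue, area = deque([(sy, sx)]), 0
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = label
                            queue.append((ny, nx))
                areas.append(area)
    return labels, areas

def keep_large_regions(labels, areas, fraction=0.6):
    """Keep only regions whose area exceeds `fraction` of the largest region's area."""
    if not areas:
        return np.zeros_like(labels, dtype=bool)
    limit = fraction * max(areas)
    kept = [i + 1 for i, a in enumerate(areas) if a > limit]
    return np.isin(labels, kept)
```

In practice `cv2.connectedComponentsWithStats` provides the same labeling and areas directly.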
Step S5: detecting the corner points of the parking space line based on an image skeletonization algorithm to further remove interference, thereby realizing the identification of the parking space line.
The image skeletonization algorithm is realized by function call.
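The patent only says the skeletonization is realized by a library function call and does not name the algorithm; as one concrete possibility, the classical Zhang-Suen thinning procedure can be sketched as follows (an assumption for illustration, not the patent's stated implementation):

```python
import numpy as np

def zhang_suen_thinning(binary):
    """Iteratively peel boundary pixels until a one-pixel-wide skeleton remains."""
    img = (np.asarray(binary) > 0).astype(np.uint8).copy()
    H, W = img.shape
    changed = True
    while changed:
        changed = False
        for step in (0, 1):                          # two sub-iterations per pass
            to_delete = []
            for y in range(1, H - 1):
                for x in range(1, W - 1):
                    if not img[y, x]:
                        continue
                    # 8-neighbours p2..p9, clockwise starting from north
                    n = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    B = sum(n)                       # number of foreground neighbours
                    A = sum(n[i] == 0 and n[(i + 1) % 8] == 1 for i in range(8))
                    p2, p4, p6, p8 = n[0], n[2], n[4], n[6]
                    if step == 0:
                        cond = p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0
                    else:
                        cond = p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0
                    if 2 <= B <= 6 and A == 1 and cond:
                        to_delete.append((y, x))
            if to_delete:
                changed = True
                for y, x in to_delete:
                    img[y, x] = 0
    return img
```

Library equivalents include `skimage.morphology.skeletonize` or OpenCV's ximgproc thinning, either of which could be the "function call" the text refers to.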
The invention provides a four-way sensitive filter (QSF) and corresponding illumination consistency characteristics thereof, and introduces the QSF and the corresponding illumination consistency characteristics as local descriptors into the parking space line detection method provided by the invention, so that the QSF and the corresponding illumination consistency characteristics have robustness under the condition of illumination change in a scene.
Results of the experiment
The experiments select 200 parking space images under different conditions to verify the algorithm, and compare the parking space line detection method with the traditional Hough method under the same conditions; recall and recognition rate are used to evaluate the efficiency of the algorithm. The results shown in fig. 3A-3B indicate that the Hough transform method may detect irrelevant lines and runs slowly, while the accuracy of the method of the present invention is higher.
The above embodiments are merely preferred embodiments of the present invention and are not intended to limit its scope; various changes may be made to them. All simple equivalent changes and modifications made according to the claims and the content of the specification of the present application fall within the scope of the claims of this patent application. Well-known details have been omitted in order to avoid obscuring the invention.
Claims (10)
1. A parking space line detection method based on illumination consistency, characterized by comprising:
step S1: denoising an image to be detected by adopting filtering;
step S2: processing the denoised image to be detected with a four-direction sensitive filter, and extracting the illumination consistency feature of each pixel point of the image to be detected;
step S3: carrying out binarization processing on each pixel point of the image to be detected according to the illumination consistency feature ζu to obtain a plurality of connected regions;
step S4: performing an opening operation on the image to be detected, calculating a plurality of connected regions, and retaining in the opening operation result only the connected regions larger than 60% of the area of the largest connected region.
2. The parking space line detection method based on illumination consistency as claimed in claim 1, wherein in the step S1, the adopted filtering includes Gaussian filtering and guided filtering.
3. The parking space line detection method based on illumination consistency as claimed in claim 1, wherein the step S2 comprises:
step S21: calculating the components of the four-direction sensitive filter value of each pixel point in the denoised image to be detected along 4 different directions;
the component of the four-direction sensitive filter value of the u-th pixel point along one direction λ is:

g_u^λ(b) = Σ_{v along direction λ} β^{|u−v|} · V(Y_v, b)

wherein Y_v denotes the gray value of the v-th pixel point; H denotes the histogram of the gray values Y_v; B is the number of bins of the histogram H; β ∈ (0, 1) denotes the attenuation factor from the u-th pixel point to the v-th pixel point; |u − v| is the spatial distance between the u-th pixel point and the v-th pixel point; V(Y_v, b) denotes the gray value distribution function of the v-th pixel point: when the gray value Y_v belongs to the b-th bin, V(Y_v, b) = 1, otherwise V(Y_v, b) = 0; b is an integer between 1 and B;
step S22: accumulating, for each pixel point, the components of the four-direction sensitive filter value along the 4 different directions to obtain the four-direction sensitive filter value of each pixel point in each bin of the histogram H;
step S23: normalizing the four-direction sensitive filter value of each pixel point in each bin of the histogram H to obtain a normalization factor for each pixel point;
step S24: using the normalization factor of each pixel point, obtaining the difference of the four-direction sensitive filter between the two local areas centered on the u-th pixel point and the v-th pixel point;
step S25: extracting the illumination consistency feature.
4. The method according to claim 3, wherein in the step S21, the component g_u^λ(b) of the four-direction sensitive filter value of the u-th pixel point along one direction λ is calculated by the integral histogram method on the basis of the previous pixel point:

g_u^λ(b) = V(Y_u, b) + β · g_{u−1}^λ(b)

wherein β ∈ (0, 1) is the attenuation factor; V(Y_u, b) denotes the gray value distribution function of the u-th pixel point: when the gray value Y_u belongs to the b-th bin, V(Y_u, b) = 1, otherwise V(Y_u, b) = 0; g_{u−1}^λ(b) is the component of the four-direction sensitive filter value on the (u−1)-th pixel point along the direction λ;
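Under the (assumed) reading that the recursion runs along a scan line in direction λ, the integral-histogram update can be sketched as follows; the bin count, β value, and function name are illustrative assumptions:

```python
import numpy as np

def directional_component(row, beta=0.5, bins=16):
    """g_u(b) = V(Y_u, b) + beta * g_{u-1}(b): each pixel adds an indicator
    for its own gray-value bin to the attenuated histogram of the previous
    pixel, so one scan line costs O(n * bins) instead of O(n^2)."""
    row = np.asarray(row)
    g = np.zeros((len(row), bins))
    for u, y in enumerate(row):
        b = min(int(y) * bins // 256, bins - 1)  # bin index of gray value Y_u
        g[u, b] = 1.0                            # the indicator V(Y_u, b)
        if u > 0:
            g[u] += beta * g[u - 1]              # attenuated carry-over
    return g
```

Unrolling the recursion recovers the direct form with the β^{|u−v|} decay of claim 3.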
in the step S22, the four-direction sensitive filter value of the u-th pixel point in the b-th bin of the histogram H is:

G_u(b) = Σ_{λ=1}^{4} [ V(Y_u, b) + β · g_{u−1}^λ(b) ]

wherein G_u(b) denotes the four-direction sensitive filter value of the u-th pixel point in the b-th bin of the histogram H; g_{u−1}^λ(b) is the component of the four-direction sensitive filter value on the (u−1)-th pixel point along the direction λ; V(Y_u, b) denotes the gray value distribution function of the u-th pixel point: when the gray value Y_u belongs to the b-th bin, V(Y_u, b) = 1, otherwise V(Y_u, b) = 0.
5. The parking space line detection method based on illumination consistency as claimed in claim 3, wherein in the step S23, the normalization factor m_u of the u-th pixel point is:

m_u = Σ_{λ=1}^{4} Σ_{b=1}^{B} g_u^λ(b)

wherein m_u is the normalization factor, λ is the direction, and β ∈ (0, 1) is a control parameter.
6. The parking space line detection method based on illumination consistency as claimed in claim 3, wherein the step S24 comprises: normalizing the four-direction sensitive filter value of each pixel point by the normalization factor, and taking the sum of the differences of the cumulative distribution histograms of the normalized four-direction sensitive filter values G_u and G_v of the u-th and v-th pixel points over the bins of the histogram H as the difference of the four-direction sensitive filter between the two local areas centered on the u-th pixel point and the v-th pixel point.
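The distance claimed above, the sum over bins of the differences of the cumulative distribution histograms, is the 1-D earth mover's distance between the two normalized histograms. A minimal sketch (function name assumed):

```python
import numpy as np

def qsf_difference(G_u, G_v):
    """Sum over bins of |CDF_u(b) - CDF_v(b)| after normalising each
    filter histogram to unit mass."""
    cdf_u = np.cumsum(G_u / np.sum(G_u))
    cdf_v = np.cumsum(G_v / np.sum(G_v))
    return float(np.abs(cdf_u - cdf_v).sum())
```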
7. The parking space line detection method based on illumination consistency as claimed in claim 3, wherein the step S25 comprises:
step S251: obtaining a transformation formula for the gray values of the pixel points of the image to be detected before and after an affine illumination transformation;
step S252: taking the integral value of the four-direction sensitive filter value G_u over the interval [b_u − r_u, b_u + r_u] as the illumination consistency feature before the affine illumination transformation, and calculating the illumination consistency feature after the affine illumination transformation.
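The feature of step S252 can be sketched as a discrete integral of the filter histogram over a bin window; how b_u and r_u are derived is not specified here, so they are taken as given inputs and the function name is an assumption:

```python
import numpy as np

def illumination_feature(G_u, b_u, r_u):
    """Illumination consistency feature: sum of the four-direction sensitive
    filter histogram over the bin interval [b_u - r_u, b_u + r_u]."""
    lo = max(b_u - r_u, 0)
    hi = min(b_u + r_u, len(G_u) - 1)   # clamp the window to valid bins
    return float(G_u[lo:hi + 1].sum())
```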
8. The parking space line detection method based on illumination consistency as claimed in claim 3, wherein the step S2 further comprises a step S26: introducing a soft smoothing term to optimize the illumination consistency feature.
9. The parking space line detection method based on illumination consistency as claimed in claim 1, wherein in the step S3, when the binarization processing is performed, the connected region is a connected region formed by pixel points whose illumination consistency feature ζu is above a threshold value.
10. The parking space line detection method based on illumination consistency as claimed in claim 1, further comprising a step S5: detecting the corner points of the parking space line based on an image skeletonization algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010440287.5A CN111611930B (en) | 2020-05-22 | 2020-05-22 | Parking space line detection method based on illumination consistency |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111611930A true CN111611930A (en) | 2020-09-01 |
CN111611930B CN111611930B (en) | 2023-10-31 |
Family
ID=72195937
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010440287.5A Active CN111611930B (en) | 2020-05-22 | 2020-05-22 | Parking space line detection method based on illumination consistency |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111611930B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2003001810A1 (en) * | 2001-06-21 | 2003-01-03 | Wespot Ab | Invariant filters |
CN103065494A (en) * | 2012-04-12 | 2013-04-24 | 华南理工大学 | Free parking space detection method based on computer vision |
EP2759959A2 (en) * | 2013-01-25 | 2014-07-30 | Ricoh Company, Ltd. | Method and system for detecting multi-lanes |
CN105608429A (en) * | 2015-12-21 | 2016-05-25 | 重庆大学 | Differential excitation-based robust lane line detection method |
CN107895151A (en) * | 2017-11-23 | 2018-04-10 | 长安大学 | Method for detecting lane lines based on machine vision under a kind of high light conditions |
CN108229247A (en) * | 2016-12-14 | 2018-06-29 | 贵港市瑞成科技有限公司 | A kind of mobile vehicle detection method |
CN109785354A (en) * | 2018-12-20 | 2019-05-21 | 江苏大学 | A kind of method for detecting parking stalls based on background illumination removal and connection region |
US20200125869A1 (en) * | 2018-10-17 | 2020-04-23 | Automotive Research & Testing Center | Vehicle detecting method, nighttime vehicle detecting method based on dynamic light intensity and system thereof |
Non-Patent Citations (2)
Title |
---|
张悦旺;: "Parking space line recognition method based on improved Hough transform" * |
龚建伟;王安帅;熊光明;刘伟;陈慧岩;: "An adaptive dynamic-window method for high-speed lane line detection" * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112417993A (en) * | 2020-11-02 | 2021-02-26 | 湖北亿咖通科技有限公司 | Parking space line detection method for parking area and computer equipment |
CN112417993B (en) * | 2020-11-02 | 2021-06-08 | 湖北亿咖通科技有限公司 | Parking space line detection method for parking area and computer equipment |
Also Published As
Publication number | Publication date |
---|---|
CN111611930B (en) | 2023-10-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107330376B (en) | Lane line identification method and system | |
CN103077384B (en) | A kind of method and system of vehicle-logo location identification | |
Chen et al. | Visual depth guided color image rain streaks removal using sparse coding | |
Guo et al. | License plate localization and character segmentation with feedback self-learning and hybrid binarization techniques | |
CN106778551B (en) | Method for identifying highway section and urban road lane line | |
CN107944403B (en) | Method and device for detecting pedestrian attribute in image | |
CN114926436A (en) | Defect detection method for periodic pattern fabric | |
CN113177467A (en) | Flame identification method, system, device and medium | |
Babbar et al. | A new approach for vehicle number plate detection | |
CN109858438A (en) | A kind of method for detecting lane lines based on models fitting | |
CN114913194A (en) | Parallel optical flow method moving target detection method and system based on CUDA | |
CN117474029B (en) | AI polarization enhancement chart code wave frequency acquisition imaging identification method based on block chain | |
FAN et al. | Robust lane detection and tracking based on machine vision | |
CN111611930A (en) | Parking space line detection method based on illumination consistency | |
CN113205494B (en) | Infrared small target detection method and system based on adaptive scale image block weighting difference measurement | |
CN111028263A (en) | Moving object segmentation method and system based on optical flow color clustering | |
CN113053164A (en) | Parking space identification method using look-around image | |
CN116563768B (en) | Intelligent detection method and system for microplastic pollutants | |
CN114581658A (en) | Target detection method and device based on computer vision | |
CN115994870B (en) | Image processing method for enhancing denoising | |
CN114373147A (en) | Detection method for low-texture video license plate | |
CN113505811A (en) | Machine vision imaging method for hub production | |
CN111046726B (en) | Underwater sea cucumber identification and positioning method based on AI intelligent vision | |
CN113052833A (en) | Non-vision field imaging method based on infrared thermal radiation | |
Mapurisa et al. | Improved edge detection for satellite images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||