CN108875911B - Parking space detection method - Google Patents


Info

Publication number
CN108875911B
CN108875911B (application CN201810516244.3A)
Authority
CN
China
Prior art keywords
parking space
parking
training
data set
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810516244.3A
Other languages
Chinese (zh)
Other versions
CN108875911A (en)
Inventor
张林
李曦媛
黄君豪
沈莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN201810516244.3A priority Critical patent/CN108875911B/en
Publication of CN108875911A publication Critical patent/CN108875911A/en
Application granted granted Critical
Publication of CN108875911B publication Critical patent/CN108875911B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

A parking space detection method includes: constructing a large-scale, labeled, surround-view image data set for the parking space perception problem; detecting parking space control points with an improved version of the deep-learning-based YOLOv2 object detection framework; pairing the detected control points pairwise to form point-pair combinations and preliminarily screening them with distance constraints; classifying the control point pairs that satisfy the distance constraints with a custom model designed on the basis of the shallow classification network AlexNet, a deep convolutional neural network; and completing parking space inference by judging the parking space type and the direction of the entrance line. The invention is an important component of an autonomous parking system in unmanned driving: it can detect the position of a parking space using only the cameras mounted around the vehicle body, thereby providing technical support for autonomous parking. The invention has the advantages of high detection precision, fast response, stability and reliability.

Description

Parking space detection method
Technical Field
The invention belongs to the fields of computer vision and assisted parking for intelligent driving, and relates to a parking space detection method, in particular to a parking space detection method based on deep learning technology.
Background
With the modern pursuit of automation and convenient living, industries of all kinds are moving toward automation and intelligence. The automobile, as people's main means of transport, has an extremely large market, and the automation and intellectualization of automobiles are currently important subjects in both academia and automotive engineering. The automotive industry has proposed the concept of the intelligent vehicle, striving to achieve fully automatic unmanned driving. As a key technology in the intelligent vehicle field, the autonomous parking system is becoming a popular research direction for major automobile manufacturers. For drivers, especially beginners, limited parking space and a narrow field of view make parking a non-trivial challenge.
In recent years, many enterprises and research institutions at home and abroad are engaged in developing short-distance autonomous parking systems. The workflow of a typical autonomous parking system is generally as follows. When the vehicle approaches the parking area, the vehicle is switched to the low-speed unmanned driving mode and automatically travels along a predetermined track. When operating in the unmanned mode, the vehicle may need to rely on high definition maps, GPS signals, or SLAM (simultaneous localization and mapping) technology for real-time localization. During driving, the vehicle may search for available parking spaces or attempt to identify and locate a parking space assigned to it by the parking lot management system. Once a proper parking space is detected, the vehicle is switched to an automatic parking mode, a parking path is planned, and finally the vehicle is parked at a designated position. In this system, vision-based intelligent perception of parking space is a key component in an autonomous parking system. The parking space sensing aims at automatically identifying available parking spaces marked by ground marking lines and calculating the space positions of the available parking spaces in a vehicle coordinate system so as to provide target positions for parking path planning.
Methods based on detecting parking space ground markings began with the work of Xu et al. (Xu et al., Vision-guided automatic parking for smart car, 2000). That method adopts image segmentation, training a neural network to segment the parking space markings; its drawback is that it cannot judge the type of the parking space. Jung et al. proposed a touch-based method capable of detecting several kinds of parking slots (Jung et al., Uniform user interface for semiautomatic parking slot marking recognition, 2010), but such a method depends too heavily on manual selection, is not intelligent enough, and is not a fully automatic parking slot detection method. Du and Tan proposed a self-reversing parking system (Du et al., Autonomous reverse parking system based on robust path generation and improved sliding mode control, 2015) that uses a ridge detector to find the center axis of a parking space marking through several noise filtering and removal steps, but this approach is likewise not intelligent enough. A fully automatic detection method is the peak-pair detection method proposed by Jung et al. (Jung et al., Parking slot markings recognition for automatic parking assist system, 2006), which obtains the equations of the slot lines through Hough-space clustering and then segments the slot lines according to these equations to find the T-shaped intersection points. Wang et al. proposed a similar method that segments the slot lines in Radon space (Wang et al., Automatic parking based on a bird's eye view vision system, 2014), but both are only suitable for a preset slot line width.
The work of Suhr and Jung (Suhr et al., Full-automatic recognition of various parking slot markings using a hierarchical tree structure, 2013) uses Harris corner detectors to detect corner points, finds the intersection points of the slot lines from those corners, and judges the parking space. The limitation of this method is that it depends on the accuracy of Harris corner detection and is not stable enough.
Disclosure of Invention
The invention aims to provide a parking space detection method based on deep learning that assists an autonomous parking system: it can detect parking spaces defined by ground marking lines in complex imaging environments, and has the advantages of high detection precision, fast response, stability and reliability.
In order to achieve the above purpose, the solution of the invention is:
A parking space detection method based on deep learning comprises the following steps: constructing a large-scale, labeled, surround-view image data set for the parking space perception problem; and detecting the parking space control points with an object detection framework. Deep-learning-based object detection frameworks include R-CNN, Fast R-CNN, YOLO, SSD and YOLOv2; in the invention, to improve detection speed, the YOLOv2 framework is adopted and improved to detect the parking space control points. The detected control points are then paired pairwise to form point-pair combinations, which are preliminarily screened by distance constraints, and the surviving pairs are classified with a classification model. Classification models based on deep convolutional neural networks include AlexNet, GoogLeNet, ResNet and ShuffleNet. Since there are only 7 classification categories, choosing a deeper network does not noticeably improve accuracy but costs more time; therefore, the invention designs a custom model on the basis of the shallow model AlexNet to classify the parking space control point pairs that satisfy the distance constraints, and completes parking space inference by judging the parking space type and the entrance line direction.
A parking space detection method comprises the following steps:
s1, detecting a parking space control point;
s2, classifying parking space entrance lines;
and S3, estimating the parking space.
Further, the parking space control point detection model in step S1 is implemented with the deep-learning-based YOLOv2 object detection framework, and the parking space entrance line classification model in step S2 is implemented with a custom deep convolutional neural network framework based on AlexNet. Steps S1 and S2 are end-to-end processes: the image to be detected only needs to be input into the trained model to obtain the result.
The parking space control points are the T-shaped or L-shaped corner points forming a parking space; in the invention, one parking space comprises two parking space control points and two invisible corner points.
The parking space entrance line is the slot line a car crosses when entering the parking space, i.e., the virtual line segment connecting the two parking space control points of the same space. For example, if the control point pair forming the parking space is (P1, P2), the entrance line direction of the parking space is from P1 to P2.
Training a parking space control point detection model based on YOLOv2, comprising the following steps:
s11, preparing data, collecting a batch of all-round parking space images to construct a training data set, and carrying out manual labeling on the training data set;
s12, modifying network parameters and normalizing the size of the constraint frame;
and S13, training the data set in the step S11 to obtain a parking space control point detection model.
Considering sample diversity, the invention expands the data by rotation in equal increments: each training sample is rotated every 15 degrees, expanding the data volume to 24 times the original. When labeling the data set, the coordinates of the parking space control points and the serial numbers of the control point pairs that can form a parking space need to be recorded.
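The rotation augmentation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it rotates an annotated control-point coordinate around the image center for each 15° increment (the 416×416 image size comes from the data set description; the sample point is arbitrary).

```python
import numpy as np

def rotate_point(p, angle_deg, center):
    """Rotate point p about center by angle_deg (counter-clockwise)."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return rot @ (np.asarray(p, float) - center) + center

# Rotating every 15 degrees yields 360/15 = 24 variants per annotation.
center = np.array([208.0, 208.0])          # center of a 416x416 image
angles = [15 * j for j in range(24)]
augmented = [rotate_point((308.0, 208.0), a, center) for a in angles]
```

The image itself would be rotated with the same transform so labels and pixels stay aligned.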
Training a custom deep convolutional neural network parking space entrance line classification model, which comprises the following steps:
s21, preparing data, pairing the parking space control points marked out in the training data set of the step S11 in pairs respectively, screening out legal point pairs, and performing neighborhood extraction to obtain a data set formed by 7 types of data samples;
s22, designing the structure and relevant parameters of the custom deep convolutional neural network;
and S23, training the data set in the step S21 by using the deep neural network framework designed in the step S22 to obtain a parking space entrance line classification model.
The point pairs in step S21 are ordered, i.e., (P1, P2) and (P2, P1) are samples of different classes. To address sample imbalance, i.e., the uneven class distribution of the training data, the invention adopts the SMOTE oversampling method to synthesize new samples for classes with little data.
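The core interpolation step of SMOTE can be sketched as below; this is the generic technique (new samples are placed on the line segment between a minority-class sample and one of its nearest neighbors), not the patent's specific configuration, and the feature vectors are illustrative.

```python
import numpy as np

def smote_sample(x_i, x_neighbor, rng):
    """Synthesize one sample by interpolating between a minority-class
    sample and one of its nearest neighbors (the core SMOTE step)."""
    lam = rng.random()  # uniform in [0, 1)
    return x_i + lam * (x_neighbor - x_i)

rng = np.random.default_rng(0)
x = np.array([1.0, 1.0])
nb = np.array([3.0, 5.0])
synthetic = smote_sample(x, nb, rng)
```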
The basis for screening the legal point pairs is the distance between two parking space control points.
Neighborhood extraction extracts an image block obtained by expanding Δx and Δy pixels along the directions parallel and perpendicular to the line connecting a legal point pair. After scaling and rotation, the image block has a size of 48×192 pixels, with the point-pair connecting line parallel to the horizontal axis. When the neighborhood extends beyond the image, the point pair is discarded and not processed.
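The bounds check that discards point pairs whose neighborhood leaves the image can be sketched as follows. The margins dx = 20 and dy = 24 are placeholder values (the patent does not state Δx and Δy), and the 416×416 image size comes from the data set description.

```python
import numpy as np

def neighborhood_corners(p1, p2, dx, dy):
    """Corners of the rectangle centred on segment p1-p2, expanded by
    dx along the segment direction and dy perpendicular to it."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    mid = (p1 + p2) / 2
    t = (p2 - p1) / np.linalg.norm(p2 - p1)       # unit tangent
    n = np.array([-t[1], t[0]])                    # unit normal
    half_w = np.linalg.norm(p2 - p1) / 2 + dx
    half_h = dy
    return [mid + sx * half_w * t + sy * half_h * n
            for sx in (-1, 1) for sy in (-1, 1)]

def inside_image(corners, w, h):
    return all(0 <= c[0] <= w - 1 and 0 <= c[1] <= h - 1 for c in corners)

corners = neighborhood_corners((100, 200), (300, 200), dx=20, dy=24)
keep = inside_image(corners, 416, 416)
```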
After classifying the parking space entrance line, a reasonable parking space is deduced according to the following modes:
S31, when the classification result is a right-angle parking space, giving the parking space depth according to prior knowledge and inferring the positions of the two remaining unlabeled parking space corner points, where the depth refers to the length of the side perpendicular to the parking space entrance line;
and S32, when the classification result is an oblique parking space, applying a Gaussian line detector at each candidate angle of the oblique direction, taking the direction with the highest convolution score to determine the inclination angle δ, and inferring the two remaining corner points from the parking space depth.
In terms of the data set, it comprises surround-view images containing parking spaces of all kinds under various complex imaging conditions: the parking space orientations include parallel, vertical and oblique parking spaces; the parking spaces are located in indoor and outdoor environments, where the outdoor environments include cloudy days, sunny days, overcast and rainy days, tree shade on sunny days, water on the ground, street-lamp illumination and strong illumination.
The 7 classes of classification results in the parking space entrance line classification model are: (a) right-angle parking space oriented upward; (b) oblique parking space oriented to the upper right; (c) oblique parking space oriented to the upper left; (d) right-angle parking space oriented downward; (e) oblique parking space oriented to the lower left; (f) oblique parking space oriented to the lower right; (g) not a parking space.
The input image size of the custom deep convolutional neural network framework is 48×192 pixels, and the output layer has 7 nodes corresponding to the 7 classes of classification results output by the model. The custom deep convolutional neural network framework comprises 4 convolutional layers, 3 max-pooling layers, 2 normalization layers and 2 fully connected layers.
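The standard output-size arithmetic for such a stack of convolution and pooling layers can be sketched as below. The kernel, stride and padding values are hypothetical (the patent's per-layer parameters are not reproduced here); only the formula itself is standard.

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Standard convolution/pooling output-size formula:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Hypothetical first stage for a 48x192 grayscale input: a 5x5 conv
# (stride 1, pad 2) preserves the spatial size; a 2x2 max-pool halves it.
h, w = 48, 192
h, w = conv_out(h, 5, 1, 2), conv_out(w, 5, 1, 2)   # conv stage
h, w = conv_out(h, 2, 2), conv_out(w, 2, 2)          # pooling stage
```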
Due to the adoption of the technical scheme, the invention has the following beneficial effects: the parking space detection method and device can solve the parking space detection problem under the complex imaging condition, and is high in detection precision, high in response speed, stable and reliable.
Drawings
Fig. 1(a), 1(b), and 1(c) are schematic diagrams showing positions of parking space control points, entrance lines, and separation lines in parallel parking spaces, vertical parking spaces, and diagonal parking spaces, respectively, according to the present invention, where 1 is a parking space control point, 2 is a parking space entrance line, and 3 is a parking space separation line.
FIG. 2 shows two parking space control points P1 and P2 in a surround-view image and the local image block forming the parking space entrance line.
Fig. 3(a), 3(b), 3(c), 3(d), 3(e), 3(f) and 3(g) are schematic views of entry lines of class 7 parking spaces according to the present invention, respectively.
Fig. 4 is a schematic diagram of a framework structure of the self-defined convolutional neural network classification model based on AlexNet according to the present invention.
Fig. 5(a) and 5(b) are schematic diagrams illustrating the estimation of the right-angle parking space according to the present invention.
Fig. 6(a) and 6(b) are schematic diagrams illustrating the estimation of the oblique parking space according to the present invention.
FIG. 7 is a schematic diagram of the general implementation steps described in the present invention.
Detailed Description
The invention will be further described with reference to examples of embodiments shown in the drawings.
The parking space detection method is an important component of an autonomous parking system, can detect the position information of the parking space only through the cameras around the vehicle body, and provides technical support for a decision planning layer.
The design of the invention simultaneously supports detecting vertical, parallel and oblique parking spaces. In parking space detection, the most salient feature of a parking space defined by visual ground markings is its control points. Fig. 1(a), 1(b) and 1(c) are schematic diagrams of the control points, entrance lines and separation lines of the various parking space types; it can be seen that a parking space entrance line is formed by a pair of parking space control points. The invention obtains entrance lines satisfying the conditions by detecting T-shaped or L-shaped parking space control points, determines the entrance line direction by classifying the entrance line, and thereby infers the parking space. The detection method comprises three stages: detecting the parking space control points, classifying the entrance lines formed by the detected control points, and inferring the position of the parking space from the classification results. The specific process is as follows:
parking space control point detection
The parking space control points, i.e., the "T-shaped" or "L-shaped" corner points that constitute a parking space, are the local patterns centered on the intersections of the entrance line and the separation lines of the parking space; in fig. 1(a), 1(b) and 1(c), circles mark the positions of the parking space control points. In the invention, one parking space comprises two parking space control points and two invisible corner points.
In terms of the data set, the collected surround-view image data are used to build a parking space data set. It comprises surround-view images containing parking spaces of all kinds under various complex imaging conditions: the orientations include parallel, vertical and oblique parking spaces; the spaces are located in indoor and outdoor environments, where the outdoor environments include cloudy days, sunny days, overcast and rainy days, tree shade on sunny days, water on the ground, street-lamp illumination and strong illumination. In the invention, the resolution of the surround-view images constituting the data set is 416×416 pixels.
The invention adopts a YOLOv2 target detection framework based on deep learning to detect the parking space control points.
Before training the detection network, training samples need to be prepared: the positions of all parking space control points are manually marked on the constructed parking space data set. For each control point Pi, a fixed-size square box centered at Pi is taken as the ground truth for Pi. Four fifths of the labeled images in the data set serve as training samples for the parking space control point detection model, and the remaining fifth serves as a test set for evaluating the model's detection performance.
In parking space detection, the trained parking space control point detection model needs to be rotation-invariant. To achieve this, the invention expands the training set by rotating each original annotated image, generating a number of rotated samples. Specifically, for each original annotated image I, J rotated samples {I_j}_{j=1}^{J} are generated, where each I_j is obtained by rotating I by (360/J)·j degrees (J = 24 for the 15° increments used here). The coordinates of the parking space control points are rotated in the same way. Thus, if N training sample images are labeled, JN image samples are available to train the parking space control point detection model.
In terms of network parameter settings, the bounding-box priors preset in the YOLOv2 framework come in 5 sizes, while the ground truth of the training samples has only one fixed type; training with too many box sizes introduces excessive error and slows model convergence. Therefore, the invention modifies the type and size of the YOLOv2 prior box: only one box type is used, and for a training image of M×N pixels, the preset width and height are fixed fractions of M and N respectively (given as formula images in the original).
After this preparation, model training is carried out offline, yielding a model D for parking space control point detection.
In the detection stage, the image to be detected only needs to be input into the trained model, which outputs the position coordinates of the detected parking space control points; this is an end-to-end process.
Second, classification of entrance lines of parking spaces
After control point detection, K parking space control points are obtained and paired pairwise. Let P1 and P2 be a point pair consisting of two detected control points; it must be verified whether P1 and P2 can form a valid parking space entrance line. \vec{P1P2} denotes the directed line segment from P1 to P2, and ||P1P2|| denotes the distance between P1 and P2.
1. For a valid parking space entrance line candidate, the distance between P1 and P2 should satisfy the following constraints:
1-1. If \vec{P1P2} is an entrance line candidate of a parallel parking space, it must satisfy t1 < ||P1P2|| < t2.
1-2. If \vec{P1P2} is an entrance line candidate of a vertical or oblique parking space, it must satisfy t3 < ||P1P2|| < t4.
Here t1, t2, t3, t4 are distance parameters set according to prior knowledge of the entrance line lengths of the different parking space types.
According to steps 1-1 and 1-2, the parking space entrance line candidates satisfying the constraints are preliminarily screened out.
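The screening step can be sketched as follows. The thresholds t1..t4 here are placeholder values (the patent sets them from prior knowledge without stating numbers); pairs are kept ordered, matching the ordered point pairs used later for classification.

```python
import numpy as np

def screen_pairs(points, t1, t2, t3, t4):
    """Keep ordered point pairs whose distance matches the entrance-line
    length of a parallel (t1..t2) or vertical/oblique (t3..t4) space."""
    candidates = []
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            if i == j:
                continue
            d = np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
            if t1 < d < t2 or t3 < d < t4:
                candidates.append((i, j))
    return candidates

# Three detected control points; only the first two are close enough
# to form a parallel-space entrance line under these thresholds.
pts = [(0, 0), (0, 130), (0, 200)]
pairs = screen_pairs(pts, t1=100, t2=160, t3=250, t4=450)
```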
2. Let \vec{P1P2} be a parking space entrance line candidate selected in step 1. It needs to be classified to determine the entrance line direction and the parking space type. The specific process is as follows:
2-1. Neighborhood extraction is performed on \vec{P1P2}. FIG. 2 shows the two parking space control points P1 and P2 in a surround-view image and the local image block forming the parking space entrance line. A local coordinate system is established with the midpoint of P1 and P2 as the origin and \vec{P1P2} as the x-axis; the y-axis, perpendicular to the x-axis, is then uniquely determined. In this coordinate system, a rectangular region R is defined, symmetric about both the x-axis and the y-axis. Its side length along the x-axis is set to ||P1P2|| + Δx, and its side length along the y-axis to Δy. Neighborhood extraction means extracting the region R from the surround-view image, transforming it so that the x-axis is parallel to the horizontal axis, and normalizing its size to w×h pixels, preferably w = 48 and h = 192.
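The local coordinate frame used for extracting R can be sketched as below: origin at the midpoint of the point pair, x-axis along \vec{P1P2}. The actual pixel resampling to w×h is omitted; this only shows the coordinate mapping.

```python
import numpy as np

def local_frame(p1, p2):
    """Origin at the midpoint of p1-p2; x-axis along p1->p2.
    Returns a function mapping image coordinates into this frame."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    origin = (p1 + p2) / 2
    x_axis = (p2 - p1) / np.linalg.norm(p2 - p1)
    y_axis = np.array([-x_axis[1], x_axis[0]])   # perpendicular to x-axis
    return lambda p: np.array([np.dot(p - origin, x_axis),
                               np.dot(p - origin, y_axis)])

# A vertical entrance line maps onto the horizontal axis of the frame.
to_local = local_frame((100.0, 100.0), (100.0, 300.0))
```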
2-2, training a parking space entrance line classification model of the AlexNet-based custom deep convolutional neural network.
In terms of the data set, from the labeled surround-view training images of step one a set C can be obtained containing all neighborhoods R defined by pairs of parking space control points. C is divided into 7 classes according to the parking space type, as shown in fig. 3(a) through 3(g): fig. 3(a) right-angle parking space oriented upward; fig. 3(b) oblique parking space oriented to the upper right; fig. 3(c) oblique parking space oriented to the upper left; fig. 3(d) right-angle parking space oriented downward; fig. 3(e) oblique parking space oriented to the lower left; fig. 3(f) oblique parking space oriented to the lower right; fig. 3(g) not a parking space. To address the class imbalance that arises when constructing C, i.e., some classes having fewer samples than others, the invention adopts SMOTE (Synthetic Minority Over-sampling Technique) to oversample the smaller classes and increase their sample counts. In the invention, the color images of the training samples are converted to grayscale.
In the training phase, a classification model M is trained to predict the class of a neighborhood R extracted from a surround-view image. The classification model M is implemented with a custom deep convolutional neural network framework based on AlexNet; its structure is shown in FIG. 4. The network input is a 48×192 grayscale image, and the output layer has 7 nodes corresponding to the 7 classes of neighborhoods R. In FIG. 4, conv denotes a convolutional layer, ReLU a rectified linear unit, max-pool a max-pooling layer, LRN a local response normalization layer, FC a fully connected layer, and dropout a dropout layer. The parameters used in each layer, except the ReLU, FC and dropout layers, are shown in Table 1:
TABLE 1
[Table 1: per-layer parameters, given as images in the original]
For the LRN layer, let a^i_{x,y} denote the output, after activation, computed with kernel i at position (x, y). The LRN layer then normalizes the response b^i_{x,y} passed to the next layer as:

b^i_{x,y} = a^i_{x,y} / ( k + α · Σ_{j = max(0, i−n/2)}^{min(N−1, i+n/2)} (a^j_{x,y})^2 )^β

where the sum runs over kernels at the same spatial position, N is the total number of kernels in the layer, and n indicates that the n/2 kernels on either side of kernel i are included in the normalization.
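A numpy sketch of this cross-channel normalization is below. The constants k = 2, α = 1e-4, β = 0.75 are AlexNet's published defaults, assumed here because the patent's values are given only as formula images.

```python
import numpy as np

def lrn(a, n=5, k=2.0, alpha=1e-4, beta=0.75):
    """AlexNet-style local response normalization across channels.
    a: activations with shape (channels, height, width)."""
    N = a.shape[0]
    b = np.empty_like(a)
    for i in range(N):
        lo, hi = max(0, i - n // 2), min(N - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
        b[i] = a[i] / denom
    return b

act = np.ones((8, 4, 4))   # toy all-ones activation volume
out = lrn(act)
```

Channels near the edge have fewer neighbors in the window, so they are normalized slightly less than interior channels.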
After this preparation, model training is carried out offline to obtain the parking space entrance line classification model M.
In the detection stage, the classification result is obtained simply by inputting the image to be detected into the trained model; this is an end-to-end process.
2-3. Input the neighborhood R from step 2-1 into the parking space entrance line model M to obtain the classification result for \vec{P1P2}.
Third, parking space inference
In parking space detection, a parking space is a parallelogram represented by the coordinates of its four vertices. In the invention, the two unlabeled corner points lie outside the field of view and can only be obtained by inference. According to prior knowledge, the depth of a parking space (i.e., the length of its separation line) is given; the depths of vertical, parallel and oblique parking spaces are d1, d2, d3 respectively.
3-1. Let P1 and P2 be two detected parking space control points whose extracted neighborhood R is classified as a right-angle parking space. The corner points P3 and P4 of the two remaining non-control points can then be computed directly. As shown in fig. 5(a), for a right-angle vertical parking space, with n̂ the unit normal of \vec{P1P2} pointing into the space:

P3 = P2 + d1·n̂,  P4 = P1 + d1·n̂

As shown in fig. 5(b), for a right-angle parallel parking space:

P3 = P2 + d2·n̂,  P4 = P1 + d2·n̂
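The right-angle corner inference is a simple translation along the entrance line's normal, which can be sketched as below (the side of the line on which the space lies is fixed here by convention; in the method it is given by the classification result).

```python
import numpy as np

def infer_right_angle_corners(p1, p2, depth):
    """Unlabeled corners of a right-angle space: translate p1 and p2 by
    the space depth along the normal of the entrance line p1->p2."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    t = (p2 - p1) / np.linalg.norm(p2 - p1)   # entrance-line direction
    n = np.array([-t[1], t[0]])               # unit normal (space side)
    return p2 + depth * n, p1 + depth * n

p3, p4 = infer_right_angle_corners((0.0, 0.0), (4.0, 0.0), depth=10.0)
```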
3-2. Let P1 and P2 be two detected parking space control points whose extracted neighborhood R is classified as an oblique parking space. Computing the corner points P3 and P4 of the two remaining non-control points then requires the inclination angle δ. To solve this, the invention uses a template-based Gaussian line detector applied at each candidate angle of the oblique direction, and takes the direction with the highest convolution score to determine δ. The specific method is as follows:
As shown in fig. 6(a), a set of ideal "T-shaped" templates {(T_j, θ_j)}_{j=1}^{L} is prepared offline, where θ_j is the angle between the two straight lines of template j and L is the total number of templates. Each template has size s×s and is zero-meaned. In the test phase, s×s image blocks I1 and I2 are extracted centered at P1 and P2 respectively, with I1 and I2 symmetric about \vec{P1P2}. The parking space inclination angle δ is then defined as:

δ = θ_{j*},  j* = argmax_j ( I1 ⊛ T_j + I2 ⊛ T_j )

where ⊛ denotes the correlation operation.
Once the inclination angle δ has been calculated, the corner points P3 and P4 of the two remaining non-control points can be obtained. As shown in Fig. 6(b), for a parking space tilted to the right, the positions of P3 and P4 are defined as:
[Equation image: definition of P3 and P4 for the right-tilted oblique parking space]
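With δ in hand, the oblique corners can be sketched analogously. Again the patent's exact definitions are available only as equation images, so this assumes the separating line of length d3 makes angle δ with the entrance line (illustrative convention only):

```python
import math

def infer_oblique_corners(p1, p2, depth, delta_deg):
    """Infer P3 and P4 for an oblique parking space by offsetting P1 and
    P2 by `depth` along a direction making angle `delta_deg` with the
    entrance line P1->P2 (sign convention is illustrative)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    length = math.hypot(dx, dy)
    ux, uy = dx / length, dy / length  # unit vector along the entrance line
    d = math.radians(delta_deg)
    # Rotate the entrance direction by delta to get the separating-line direction.
    sx = ux * math.cos(d) - uy * math.sin(d)
    sy = ux * math.sin(d) + uy * math.cos(d)
    p3 = (p2[0] + depth * sx, p2[1] + depth * sy)
    p4 = (p1[0] + depth * sx, p1[1] + depth * sy)
    return p3, p4

# With delta = 90 degrees this degenerates to the right-angle case:
p3, p4 = infer_oblique_corners((0.0, 0.0), (2.0, 0.0), 4.0, 90.0)
```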
Through the above steps, a complete deep-learning-based parking space detection method is established; the overall implementation is shown in Fig. 7.
The beneficial effects of the present invention are described below in conjunction with specific experiments.
Experimental setup: in order to collect an around-view parking space data set of a certain scale for analyzing and comparing the effects achieved by the invention, a Roewe E50 electric automobile was used for data collection. In total, 12165 around-view images were collected, of which 9827 are used as the training set and 2338 as the test set. The experiments were run on a workstation equipped with a 2.4 GHz Intel Xeon E5-2630 v3 CPU, an Nvidia Titan X graphics card and 32 GB of RAM; the programming language was C++.
Experiment one: parking space control point detection and evaluation experiment:
In the present invention, parking space control point detection is a critical step. In order to objectively evaluate the detection performance of the invention, comparison experiments were set up, and several classical target detection algorithms were implemented on the data set constructed by the invention.
The reference methods for comparison include: "Robust real-time face detection", proposed by P. Viola and M. J. Jones in 2004, denoted "VJ" in the experiments; "Histograms of oriented gradients for human detection", proposed by N. Dalal and B. Triggs in 2005, denoted "HoG+SVM"; "An HOG-LBP human detector with partial occlusion handling", proposed by X. Wang, X. Han and S. Yan in 2009, denoted "HOG+LBP"; "Human detection using partial least squares analysis", proposed by W. Schwartz, A. Kembhavi, D. Harwood and L. Davis in 2009, denoted "PLS"; "A performance evaluation of single and multi-feature people detection", proposed by C. Wojek and B. Schiele in 2008, denoted "MultiFtr"; "Seeking the strongest rigid detector", proposed by R. Benenson, M. Mathias, T. Tuytelaars and L. Van Gool in 2013, denoted "Roerei"; and "Pedestrian detection: An evaluation of the state of the art", proposed by P. Dollár, C. Wojek, B. Schiele and P. Perona in 2012, denoted "ACF+Boosting".
The evaluation criteria are the log-average miss rate (computed against false positives per image, FPPI) and the localization error. The experimental results of the various comparison methods and of the method proposed in the present invention on the parking space around-view data set are summarized in Table 2.
TABLE 2
[Table 2: table image summarizing the detection performance of the compared methods]
As can be seen from Table 2, the parking space control point detection method provided by the invention has detection precision far higher than that of other classical algorithms.
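For reference, the log-average miss rate used as the evaluation basis above can be sketched as follows. The patent does not spell out its sampling, so this follows the common pedestrian-detection protocol as an assumption: the miss rate is sampled at nine FPPI values evenly spaced in log space over [10^-2, 10^0] and averaged geometrically, with linear interpolation in log-FPPI.

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate, n_points=9):
    """Log-average miss rate of a miss-rate-vs-FPPI curve.
    `fppi` must be increasing; the curve is interpolated at n_points
    reference FPPI values evenly spaced in log space over [1e-2, 1e0]."""
    ref = np.logspace(-2.0, 0.0, n_points)
    mr = np.interp(np.log(ref), np.log(fppi), miss_rate)
    mr = np.maximum(mr, 1e-10)  # guard against log(0)
    return float(np.exp(np.mean(np.log(mr))))

# A detector with a constant 20% miss rate scores exactly 0.2:
fppi = np.array([1e-3, 1e-2, 1e-1, 1e0, 1e1])
lamr = log_average_miss_rate(fppi, np.full_like(fppi, 0.2))
```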
Experiment two: parking space detection and evaluation experiment
In order to evaluate the final recognition accuracy of the method for parking spaces, the following evaluation indices are adopted:
[Equation images: definitions of the two evaluation indices]
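The two index formulas survive here only as equation images. A standard pair for detection tasks, sketched as an assumption rather than the patent's exact definition, is the precision and recall rates:

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN).
    Sketched here because the patent's index formulas are image-only."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# 90 correct detections, 10 false alarms, 30 missed parking spaces:
p, r = precision_recall(90, 10, 30)  # -> (0.9, 0.75)
```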
Table 3 summarizes the performance of the deep-learning-based parking space detection method on test-set samples collected in different environments.
TABLE 3
[Table 3: table image summarizing performance under different environmental conditions]
According to the experimental results, the deep-learning-based parking space detection method proposed by the invention performs stably with high detection precision under a variety of environmental conditions.
In terms of computation speed, the invention achieves 43 fps on the workstation described above (2.4 GHz Intel Xeon E5-2630 v3 CPU, Nvidia Titan X graphics card, 32 GB RAM), and 10 fps on a vehicle-mounted TX2 platform while maintaining the detection rate. The method can therefore be used in real scenes; it is an innovative method in the field of autonomous parking and can serve as a baseline method for comparison.
The embodiments described above are intended to facilitate one of ordinary skill in the art in understanding and using the present invention. It will be readily apparent to those skilled in the art that various modifications to these embodiments may be made, and the generic principles described herein may be applied to other embodiments without the use of the inventive faculty. Therefore, the present invention is not limited to the above embodiments, and those skilled in the art should make improvements and modifications within the scope of the present invention based on the disclosure of the present invention.

Claims (8)

1. A parking space detection method is characterized by comprising the following steps:
s1, constructing a large-scale, labeled and parking space perception problem-oriented all-around image data set; detecting parking space control points by adopting a target detection frame;
s2, pairing the detected control points pairwise to form point pair combinations, preliminarily screening the point pairs, and classifying the entrance lines of the parking spaces by adopting a classification model; the classification model adopts a self-defined model based on a deep convolutional neural network;
s3, deducing the parking space by judging the type of the parking space and the direction of an entrance line;
training a parking space control point detection model, which comprises the following steps:
s11, preparing data, collecting a batch of all-round parking space images to construct a training data set, and carrying out manual labeling on the training data set;
s12, modifying network parameters and normalizing the size of the constraint frame;
s13, training the data set in the step S11 to obtain a parking space control point detection model;
training a classification model of a parking space entrance line of a custom deep convolutional neural network, which comprises the following steps:
s21, preparing data, pairing the parking space control points marked out in the training data set of the step S11 in pairs respectively, screening out legal point pairs, and performing neighborhood extraction to obtain a data set formed by 7 types of data samples; the 7 types of data samples are respectively as follows: (a) the right-angle parking space with the upward parking space direction; (b) the parking space is inclined upwards towards the right; (c) the parking space is inclined towards the upper left; (d) a right-angle parking space with the parking space direction facing downwards; (e) the parking space direction faces towards the left and downwards and slantways to park the vehicle; (f) the parking space is inclined downwards towards the right; (g) does not form a parking space;
s22, designing a custom deep convolutional neural network structure and related parameters;
and S23, training the data set in the step S21 by using the deep neural network framework designed in the step S22 to obtain a parking space entrance line classification model.
2. A parking space detection method according to claim 1, wherein the target detection framework of step S1 is a deep-learning-based target detection framework, adopting one of the R-CNN, Fast R-CNN, YOLO, SSD and YOLOv2 target detection frameworks;
the classification model in step S2 is a deep convolutional neural network-based classification model, and is one of AlexNet, GoogLeNet, ResNet and ShuffleNet.
3. A parking space detection method according to claim 1, wherein in the step of training the parking space control point detection model, in consideration of sample diversity, the data are expanded by an equal-proportion rotation method: each training sample is rotated in 15-degree increments, expanding the data amount to 24 times the original;
when the data set is marked, the coordinates of the parking space control points and the control point pair serial numbers capable of forming the parking space need to be recorded.
4. A parking space detection method according to claim 1, wherein in step S21, said point pairs are ordered, i.e. (P1, P2) and (P2, P1) are different classes of samples;
considering the problem of sample imbalance, i.e., the uneven distribution of classes in the training data, the SMOTE oversampling method is adopted to synthesize new samples for classes with small data volumes.
5. A parking space detection method according to claim 1, wherein after classifying the parking space entry line, a reasonable parking space is deduced in the following manner:
s31, when the classification result is a right-angle parking space, giving the depth of the parking space according to the priori knowledge, and deducing the positions of the remaining two parking space corner points which are not marked, wherein the depth refers to the length of the other side perpendicular to the entrance line of the parking space;
S32, when the classification result is an oblique parking space, first detecting each angle in the oblique direction with a Gaussian line detector, finding the direction with the highest convolution score to determine the inclination angle δ, and deducing the remaining corner points according to the depth of the parking space.
6. A parking space detection method according to claim 1, wherein, in terms of the data set, the parking space directions include parallel, vertical and oblique parking spaces; the parking spaces are located in indoor and outdoor environments, wherein the outdoor environments include cloudy days, sunny days, overcast and rainy days, tree shade occlusion on sunny days, water accumulated on the ground, street lamp illumination conditions and strong illumination conditions.
7. A parking space detection method according to claim 1, wherein in step S21, the basis for screening the legal point pairs is the distance between two parking space control points;
the neighborhood extraction is to expand by Δx and Δy pixels along the directions perpendicular and parallel to the connecting line of a legal point pair, respectively, and extract an image block; after scaling and rotation the image block has a size of 48 × 192 pixels with the point-pair connecting line horizontal; when the neighborhood extends beyond the image, the point pair is discarded and not processed.
8. A parking space detection method according to claim 1, wherein in step S22, the input image size of the custom deep convolutional neural network framework is 48 × 192 pixels, and the output layer has 7 nodes, respectively corresponding to the 7 classes of classification results in claim 1; the custom deep convolutional neural network framework comprises 4 convolutional layers, 3 maximum pooling layers, 2 normalization layers and 2 fully connected layers.
CN201810516244.3A 2018-05-25 2018-05-25 Parking space detection method Active CN108875911B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810516244.3A CN108875911B (en) 2018-05-25 2018-05-25 Parking space detection method

Publications (2)

Publication Number Publication Date
CN108875911A CN108875911A (en) 2018-11-23
CN108875911B true CN108875911B (en) 2021-06-18


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663357A (en) * 2012-03-28 2012-09-12 北京工业大学 Color characteristic-based detection algorithm for stall at parking lot
CN105975941A (en) * 2016-05-31 2016-09-28 电子科技大学 Multidirectional vehicle model detection recognition system based on deep learning
CN107886080A (en) * 2017-11-23 2018-04-06 同济大学 One kind is parked position detecting method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant