CN113822156B - Parking space detection processing method and device, electronic equipment and storage medium

Publication number
CN113822156B
CN113822156B (application CN202110929421.2A)
Authority
CN
China
Prior art keywords: parking space, side line, point, parking, points
Legal status: Active
Application number
CN202110929421.2A
Other languages
Chinese (zh)
Other versions
CN113822156A (en)
Inventor
安玉宾
陈黎明
张乃天
顾涵彬
Current Assignee: Beijing Yihang Yuanzhi Technology Co Ltd
Original Assignee: Beijing Yihang Yuanzhi Technology Co Ltd
Application filed by Beijing Yihang Yuanzhi Technology Co Ltd filed Critical Beijing Yihang Yuanzhi Technology Co Ltd
Priority to CN202110929421.2A
Publication of CN113822156A
Application granted
Publication of CN113822156B

Classifications

    • G06F18/23213 — Pattern recognition; non-hierarchical clustering using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N3/045 — Neural networks; combinations of networks
    • G06N3/08 — Neural networks; learning methods
    • G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06T2207/20081 — Training; Learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/20164 — Salient point detection; Corner detection
    • G06T2207/30264 — Parking

Abstract

The present disclosure provides a parking space detection processing method, including: acquiring characteristic information of at least one parking space based on a parking space image; acquiring missed corner points to update the characteristic information; performing a first screening of the parking space side line points based on the updated corner point positions and corner point directions to remove side line points of a first abnormal type, performing a second screening based on the positions of the side line points remaining after the first abnormal type has been removed to remove side line points of a second abnormal type, and correcting the corner point directions to update the characteristic information; and acquiring the two vertex positions of the parking space entrance line based on the corner point pairing information to obtain the direction of the entrance line, and obtaining the parking space type from the direction of the entrance line and the direction of the side lines to update the characteristic information. The disclosure also provides a parking space detection processing apparatus, an electronic device, and a readable storage medium.

Description

Parking space detection processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of parking space information acquisition technologies, and in particular, to a parking space detection processing method and apparatus, an electronic device, and a storage medium.
Background
Parking space detection is mainly used by an automatic driving system to obtain information about the parking spaces in the environment around the vehicle. It is an important component of automatic parking systems and a research hotspot in the field of autonomous driving. Current mainstream parking space detection is mostly image-based, with the images acquired by fisheye cameras, and falls into two categories: one detects parking spaces directly in the fisheye camera images, the other stitches the images from the four fisheye cameras at the front, rear, left and right of the vehicle into a top view and detects parking spaces in the stitched top view. Methods working directly on fisheye images suffer from distortion-induced errors, while methods based on stitched images place high demands on stitching quality; inaccurate camera intrinsics and extrinsics cause artifacts, deformation and blurring in the stitched image. Beyond image quality, parking space detection also faces variable scenes, diverse parking space types, changing vehicle poses while driving, parking spaces occluded by vehicles, blurred parking space lines, and so on. Therefore, accurately acquiring detailed and correct parking space information from the image is of great significance and application value.
Existing parking space detection methods fall broadly into two categories. The first is based on traditional image processing, such as image binarization and the Hough transform: the corner points and frame features of the parking space are detected first to obtain the parking space position in the image coordinate system, and the parking space is then classified by template matching to obtain its type, angle and occupancy information. The second is based on deep learning: the specific parking space information is first annotated in images, the annotated data are used to train a deep neural network, and at inference time the image to be tested is fed directly into the network, which outputs the specific parking space information.
Technical scheme 1: patent CN109685000A, "Visual parking space detection method and device", relates to a visual parking space detection method. It proposes a parking space detection algorithm based on parking space frames and parking space corner points and can effectively detect the corner points and the specific positions of the parking space frames. However, when corner points are occluded or part of the parking space lines is blurred, large detection errors cause false detections and missed detections, and the robustness is poor.
Technical scheme 2: patent CN110969655A, "Method, device, equipment, storage medium and vehicle for detecting parking spaces", relates to acquiring parking space information from monocular camera images and mainly comprises three steps: 1) acquiring an input image showing the parking space to be detected; 2) detecting the corner points and parking space lines based on the input image; 3) correcting the positions of the detected corner points based on the detected parking space lines. This is a deep-learning algorithm that obtains side-line and corner positions and corrects the corners with the side lines, and it can simply and effectively obtain the specific positions of empty parking spaces from a monocular image. However, it does not consider the field of view, which for monocular-image detection is far smaller than for top-view-based detection, and, like technical scheme 1, it does not consider occlusion: when a corner point is occluded the detection clearly degrades, when a side line is occluded the subsequent corner correction fails, and the detection of parking spaces occupied by vehicles is clearly insufficient.
Technical scheme 3: patent CN109614913A, "Method, device and storage medium for identifying an inclined parking space", relates to the identification of inclined parking spaces and mainly comprises: acquiring a sample image containing one or more inclined parking spaces; identifying the parking space vertices in the sample image with a vertex-localization network, pairing the vertices, cropping the image blocks of successfully paired vertex pairs, and feeding them into a classification network to identify inclined parking spaces; then marking the identified inclined parking spaces: the vertices are located with the vertex-localization network, an image block centered on each vertex is cropped, the inclined line in the parking space marking is extracted and denoised, its slope is obtained by line fitting, and the inclined parking space is marked out from the vertices and the slope. The method corrects the inclination angle of inclined parking spaces through deep learning and line fitting. However, it cannot identify whether a parking space is occupied, and if a corner point is occluded its recognition rate is clearly insufficient. If the parking space lines are blurred or interrupted over a long distance, its way of computing the slope no longer applies. In addition, the detection network and the classification network are not integrated but run as two separate steps, which easily introduces errors.
Technical scheme 4: the paper "End-to-End Trainable One-Stage Parking Slot Detection Integrating Global and Local Information" proposes an end-to-end trainable parking slot detection algorithm. Its information extraction is redundant: the global corner response is computed over all points inside the parking space, which increases the computation and reduces the algorithm efficiency. The method does not consider the side-line deviation caused by the distortion that is unavoidable in image stitching, the parking space angle and type output by the network are inaccurate because of the limited receptive field of the convolution kernels, and if a parking space corner point is occluded, corner mismatching easily occurs.
Technical schemes 1 and 2 use the association between corner points and side lines for parking space detection and solve part of the errors caused by inaccurate localization and corner offset, but neither gives a detection method for occupied parking spaces, a concrete solution for occluded corner points or occluded/blurred side lines, or a solution for non-standard parking space shapes caused by image distortion. Technical scheme 3 computes the angle of inclined parking spaces by exploiting the accuracy of the side-line slope and using it to correct the corner points, but occlusion of corner points or side lines is not considered, directly discarding such parking spaces greatly reduces detection accuracy, the solution is overly redundant, and connecting two networks in series easily amplifies errors. Technical scheme 4 encodes and decodes parking space information with a deep neural network and detects the parking space position, type, occupancy and direction; however, as with schemes 1, 2 and 3, the detection effect is not ideal when corner points are occluded, blurred or missed, and since the scheme only regresses the parking space direction angle from the corner points, the value itself is inaccurate because of the limited receptive field of the convolution kernels, and the detection effect drops sharply for irregularly shaped parking spaces such as those with non-parallel side lines.
Disclosure of Invention
In order to solve at least one of the above technical problems, the present disclosure provides a parking space detection processing method and apparatus, an electronic device, and a storage medium.
The parking space detection processing method can accurately locate parking spaces in the stitched top view and acquire their information, and can be applied to an automatic parking system as a perception algorithm. The method preferably uses a convolutional neural network to detect parking spaces in the top view stitched from surround-view images, including the corner point positions and directions, the positions of the side line points, and whether the parking space is occupied; the side line point information is then fused with the corner point information to obtain the position, direction, type and occupancy of each parking space in the top view, which subsequently assists the autonomous vehicle in parking.
The parking space detection processing method and apparatus, electronic device, and storage medium of the present disclosure are realized through the following technical solutions.
According to an aspect of the present disclosure, a parking space detection processing method is provided, including:
S100, acquiring characteristic information of at least one parking space based on a parking space image, the characteristic information including at least parking space corner point positions, parking space corner point pairing information, parking space corner point directions, and parking space side line positions;
S200, acquiring a parking space side line based on the parking space side line position, and acquiring missed corner points based on the positional relationship between the side line and the corner points, so as to update the characteristic information;
S300, performing a first screening of the parking space side line points based on the updated corner point positions and corner point directions to remove side line points of a first abnormal type, performing a second screening based on the positions of the side line points remaining after the first abnormal type has been removed to remove side line points of a second abnormal type, and correcting the corner point directions based on the direction of each side line from which the first and second abnormal type side line points have been removed, so as to update the characteristic information; and,
S400, obtaining the two vertex positions of the parking space entrance line based on the corner point pairing information so as to obtain the direction of the entrance line, and obtaining the parking space type based on the direction of the entrance line and the direction of the side lines, so as to update the characteristic information.
According to the parking space detection processing method of at least one embodiment of the present disclosure, the characteristic information further includes a parking space occupation state.
According to the parking space detection processing method of at least one embodiment of the present disclosure, obtaining the characteristic information of at least one parking space based on the parking space image includes:
extracting characteristic information from the parking space image using a deep neural network, so as to obtain the characteristic information of the parking space.
According to the parking space detection processing method of at least one embodiment of the present disclosure, the deep neural network is a trained deep neural network.
According to the parking space detection processing method of at least one embodiment of the present disclosure, the deep neural network is trained by the following steps:
preprocessing each parking space image in the training data to obtain a parking space label for each image, the label including the parking space corner point positions, corner point pairing information, corner point directions, side line positions, occupancy state, and parking space type;
encoding the parking space labels of the parking space images; and,
training the deep neural network at least on the encoded parking space labels to obtain the trained deep neural network.
According to the parking space detection processing method of at least one embodiment of the present disclosure, the encoding processing of the parking space tags of each parking space image includes:
and the thermodynamic diagrams based on the parking space angular points of the parking space images encode the parking space angular point pairing information, and encode the parking space angular point positions and the parking space angular point directions.
According to the parking space detection processing method of at least one embodiment of the present disclosure, the encoding processing of the parking space tags of each parking space image includes:
and coding the position of the parking space side line based on the center point of the grid passed by the parking space side line of each parking space image relative to the drop foot of the parking space side line.
According to the parking space detection processing method of at least one embodiment of the present disclosure, the encoding processing of the parking space tags of each parking space image includes:
and (3) Encoding the parking space occupation state of each parking space image by using One-Hot Encoding (One-Hot Encoding).
According to the parking space detection processing method of at least one embodiment of the present disclosure, acquiring a parking space sideline based on the position of the parking space sideline includes:
clustering the side line points given by the parking space side line positions to obtain the side line points belonging to the same parking space side line; and,
performing linear regression on the side line points belonging to the same side line to obtain the parking space side line.
According to the parking space detection processing method of at least one embodiment of the present disclosure, the obtaining of the missing detection angular point based on the position relationship between the parking space side line and the parking space angular point includes:
S221, for the parking space corner points at all the corner point positions in the characteristic information, acquiring a first straight line formed by every three parking space corner points;
S222, calculating the distance from each of the two side line vertexes of each parking space side line to the first straight line;
S223, if the distance from a side line vertex to the first straight line is smaller than or equal to a first threshold distance, executing step S224, otherwise executing step S225;
S224, taking the side line vertex whose distance to the first straight line is smaller than or equal to the first threshold distance as a missed corner point and adding it to the parking space corner points;
S225, calculating the distance between each side line vertex of the parking space side line and each parking space corner point; if the distance is smaller than or equal to a second threshold distance, executing step S226, otherwise discarding the parking space side line;
S226, calculating the included angle between the reference direction and the straight line formed by the two side line vertexes of the parking space side line and the parking space corner point, and updating the angle of the side line relative to the corner point with this included angle.
According to the parking space detection processing method of at least one embodiment of the present disclosure, performing the first screening of the parking space side line points based on the updated corner point positions and corner point directions and removing the first abnormal type side line points includes:
taking the parking space corner point position as the center and the straight line in the corner point direction as a baseline, rotating by a preset angle to each side of the baseline to obtain two solid lines, and removing the side line points lying outside the two solid lines as first abnormal type side line points.
According to the parking space detection processing method of at least one embodiment of the present disclosure, performing the second screening of the parking space side line points based on the positions of the side line points remaining after removal of the first abnormal type, so as to remove the second abnormal type side line points, includes:
performing the second screening based on the positional dispersion of the side line points remaining after removal of the first abnormal type, so as to remove the second abnormal type side line points.
According to the parking space detection processing method of at least one embodiment of the present disclosure, performing the second screening of the parking space side line points based on the positions of the side line points remaining after removal of the first abnormal type, so as to remove the second abnormal type side line points, includes:
taking each parking space side line point as a center, counting the number of side line points contained in a circular area of preset radius; if the number exceeds a threshold number, marking the circular area as a blurred area, and removing the side line points within it as second abnormal type side line points.
According to the parking space detection processing method of at least one embodiment of the present disclosure, obtaining two vertex positions of a parking space entrance line of a parking space based on the parking space corner pairing information to obtain a direction of the parking space entrance line includes:
S401, calculating the distance between each paired corner point and each unpaired corner point of the parking space;
S402, if the distance between a paired corner point and an unpaired corner point is smaller than a third threshold distance, judging that the paired corner point matches that unpaired corner point; if both corner points of a group of paired corner points have a matching unpaired corner point, executing step S403; and,
S403, replacing the positions of the paired corner points with the positions of their matching unpaired corner points and taking them as the two vertexes of the parking space entrance line.
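As a non-limiting illustration of steps S401 to S403, the matching of paired corner points against unpaired corner points can be sketched as follows. This is a minimal sketch, assuming corner points are given as 2-D pixel coordinates and using a hypothetical value for the third threshold distance; it is not the claimed implementation.

```python
import numpy as np

def match_entrance_vertices(paired_corners, unpaired_corners, third_threshold=16.0):
    """Sketch of S401-S403: replace the paired corner positions with matching
    unpaired corners (only if both have a match) to obtain the entrance-line
    vertices. The threshold value is a hypothetical example."""
    paired_corners = np.asarray(paired_corners, dtype=float)      # shape (2, 2)
    unpaired_corners = np.asarray(unpaired_corners, dtype=float)  # shape (M, 2)
    if len(unpaired_corners) == 0:
        return paired_corners
    matches = []
    for corner in paired_corners:
        dists = np.linalg.norm(unpaired_corners - corner, axis=1)  # S401: distances
        j = int(np.argmin(dists))
        matches.append(unpaired_corners[j] if dists[j] < third_threshold else None)  # S402
    if all(m is not None for m in matches):   # S403: both paired corners have a match
        return np.stack(matches)
    return paired_corners
```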
According to the parking space detection processing method of at least one embodiment of the present disclosure, the method for obtaining the parking space type of the parking space based on the direction of the parking space entrance line and the direction of the parking space sideline includes:
and calculating the included angle between each parking space side line and the parking space entrance line in the two parking space side lines of the parking space, and acquiring the parking space type of the parking space based on the included angle.
According to the parking space detection processing method of at least one embodiment of the present disclosure, obtaining the parking space type based on the direction of the parking space entrance line and the direction of the parking space side line includes:
obtaining the parking space type based on the distance between the two parking space corner points serving as the two vertexes of the entrance line.
According to another aspect of the present disclosure, there is provided a parking space detection processing apparatus, including:
a characteristic information extraction module that acquires characteristic information of at least one parking space based on a parking space image, the characteristic information including at least parking space corner point positions, corner point pairing information, corner point directions, and parking space side line positions;
a corner point supplement module that acquires a parking space side line based on the side line position and acquires missed corner points based on the positional relationship between the side line and the corner points, so as to update the characteristic information;
a screening and correction module that performs a first screening of the parking space side line points based on the updated corner point positions and directions to remove first abnormal type side line points, performs a second screening based on the positions of the side line points remaining after the first abnormal type has been removed to remove second abnormal type side line points, and corrects the corner point directions based on the direction of each side line from which the first and second abnormal type side line points have been removed, so as to update the characteristic information; and,
a judging module that obtains the two vertex positions of the parking space entrance line based on the corner point pairing information so as to obtain the direction of the entrance line, and obtains the parking space type based on the direction of the entrance line and the direction of the side lines, so as to update the characteristic information.
According to still another aspect of the present disclosure, there is provided a parking space detection processing apparatus including:
an image acquisition and processing device at least for acquiring parking space images;
a characteristic information extraction module that acquires characteristic information of at least one parking space based on a parking space image, the characteristic information including at least parking space corner point positions, corner point pairing information, corner point directions, and parking space side line positions;
a corner point supplement module that acquires a parking space side line based on the side line position and acquires missed corner points based on the positional relationship between the side line and the corner points, so as to update the characteristic information;
a screening and correction module that performs a first screening of the parking space side line points based on the updated corner point positions and directions to remove first abnormal type side line points, performs a second screening based on the positions of the side line points remaining after the first abnormal type has been removed to remove second abnormal type side line points, and corrects the corner point directions based on the direction of each side line from which the first and second abnormal type side line points have been removed, so as to update the characteristic information; and,
a judging module that obtains the two vertex positions of the parking space entrance line based on the corner point pairing information so as to obtain the direction of the entrance line, and obtains the parking space type based on the direction of the entrance line and the direction of the side lines, so as to update the characteristic information.
The parking space detection processing device according to at least one embodiment of the present disclosure further includes an output module that outputs the updated feature information of the parking space or a part of the updated feature information.
According to yet another aspect of the present disclosure, there is provided an electronic device including:
a memory storing execution instructions; and a processor executing execution instructions stored by the memory to cause the processor to perform any of the methods described above.
According to yet another aspect of the present disclosure, there is provided a readable storage medium having stored therein execution instructions for implementing any of the above methods when executed by a processor.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and together with the description serve to explain the principles of the disclosure.
Fig. 1 is a schematic flow chart of a parking space detection processing method according to an embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating a training method of a deep neural network according to an embodiment of the present disclosure.
Fig. 3 is a schematic view of a parking space.
Fig. 4 is a schematic view of a parking space corner point information code according to an embodiment of the present disclosure.
Fig. 5 is a schematic view of a parking space sideline information code according to an embodiment of the present disclosure.
Fig. 6 is a schematic view illustrating a position code of a boundary line of a parking space according to an embodiment of the present disclosure.
Fig. 7 is a schematic view of a parking space edge line point visualization according to an embodiment of the present disclosure.
Fig. 8 is a schematic view of parking space side line points after clustering and linear regression processing according to an embodiment of the present disclosure.
Fig. 9 is a schematic view of a missed corner point according to an embodiment of the present disclosure.
Fig. 10 is a schematic view of a parking space corner point before correction according to an embodiment of the present disclosure.
Fig. 11 is a schematic view of a parking space corner point after correction according to an embodiment of the present disclosure.
Fig. 12 is a schematic diagram illustrating a first screening of parking space boundary points by using corner positions and corner directions according to an embodiment of the present disclosure.
Fig. 13 shows a schematic diagram of secondary screening of edge points by using relative positions of edge points of a parking space according to an embodiment of the present disclosure.
Fig. 14 is a schematic diagram of a parking space detection processing result according to an embodiment of the present disclosure.
Fig. 15 is a schematic diagram of a parking space detection processing device implemented by hardware using a processing system according to an embodiment of the present disclosure.
Fig. 16 is a schematic diagram of a parking space detection processing device implemented by hardware using a processing system according to still another embodiment of the present disclosure.
Detailed Description
The present disclosure will be described in further detail with reference to the drawings and embodiments. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not to be construed as limitations of the present disclosure. It should be further noted that, for the convenience of description, only the portions relevant to the present disclosure are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. Technical solutions of the present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Unless otherwise indicated, the illustrated exemplary embodiments/examples are to be understood as providing exemplary features of various details of some ways in which the technical concepts of the present disclosure may be practiced. Accordingly, unless otherwise indicated, features of the various embodiments may be additionally combined, separated, interchanged, and/or rearranged without departing from the technical concept of the present disclosure.
The use of cross-hatching and/or shading in the drawings is generally used to clarify the boundaries between adjacent components. As such, unless otherwise noted, the presence or absence of cross-hatching or shading does not convey or indicate any preference or requirement for a particular material, material property, size, proportion, commonality between the illustrated components and/or any other characteristic, attribute, property, etc., of a component. Further, in the drawings, the size and relative sizes of components may be exaggerated for clarity and/or descriptive purposes. While example embodiments may be practiced differently, the specific process sequence may be performed in a different order than that described. For example, two processes described consecutively may be performed substantially simultaneously or in reverse order to that described. In addition, like reference numerals denote like parts.
When an element is referred to as being "on" or "on," "connected to" or "coupled to" another element, it can be directly on, connected or coupled to the other element or intervening elements may be present. However, when an element is referred to as being "directly on," "directly connected to" or "directly coupled to" another element, there are no intervening elements present. For purposes of this disclosure, the term "connected" may refer to physically, electrically, etc., and may or may not have intermediate components.
The terminology used herein is for the purpose of describing particular embodiments and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when the terms "comprises" and/or "comprising" and variations thereof are used in this specification, the presence of stated features, integers, steps, operations, elements, components and/or groups thereof are stated but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It is also noted that, as used herein, the terms "substantially," "about," and other similar terms are used as approximate terms and not as degree terms, and as such, are used to interpret inherent deviations in measured values, calculated values, and/or provided values that would be recognized by one of ordinary skill in the art.
Fig. 1 is a schematic flow chart of a parking space detection processing method according to an embodiment of the present disclosure.
Referring to fig. 1, the parking space detection processing method S1000 includes:
S100, acquiring characteristic information of at least one parking space based on a parking space image, the characteristic information including at least parking space corner point positions, parking space corner point pairing information, parking space corner point directions, and parking space side line positions;
S200, acquiring a parking space side line based on the parking space side line position, and acquiring missed corner points based on the positional relationship between the side line and the corner points, so as to update the characteristic information;
S300, performing a first screening of the parking space side line points based on the updated corner point positions and corner point directions to remove side line points of a first abnormal type, performing a second screening based on the positions of the side line points remaining after the first abnormal type has been removed to remove side line points of a second abnormal type, and correcting the corner point directions based on the direction of each side line from which the first and second abnormal type side line points have been removed, so as to update the characteristic information; and,
S400, obtaining the two vertex positions of the parking space entrance line based on the corner point pairing information so as to obtain the direction of the entrance line, and obtaining the parking space type based on the direction of the entrance line and the direction of the side lines, so as to update the characteristic information.
Preferably, the characteristic information includes the parking space corner point positions, corner point pairing information, corner point directions, side line positions, and the parking space occupancy state.
In step S100, the characteristic information of the parking space image of the parking space is extracted.
In this disclosure, the parking space image of the parking space may be a top view. The top view may be a top view after the stitching process.
Through step S200, the missing corner point is acquired. Through the step S300, the parking space angular point direction is corrected based on the screened parking space side line points.
The parking space angular point direction is an included angle between a parking space side line in the parking space image and a reference direction (for example, a horizontal direction, i.e., a horizontal direction in the parking space image).
The direction of a parking space side line can be characterized either by its included angle with the horizontal direction of the parking space image or by its included angle with the parking space entrance line; preferably, the included angle with the horizontal direction of the image is used, which gives a more accurate result.
For the parking space detection processing method S1000 of the above embodiment, the characteristic information further includes a parking space occupancy state.
The parking space occupancy state is either occupied or unoccupied.
For the parking space detection processing method S1000 according to each of the above embodiments, preferably, the step S100 of obtaining the feature information of at least one parking space based on the parking space image includes:
and extracting the characteristic information of the parking space image of the parking space by using the deep neural network so as to obtain the characteristic information of the parking space.
For the parking space detection processing method S1000 according to each of the above embodiments, preferably, the deep neural network is a trained deep neural network.
Rapidly and efficiently extracting distinguishing parking space features from the top view is the basis of parking space detection. Among existing feature extraction methods, some are based on traditional image processing, using image binarization, Hough transform and the like to obtain edge information of the parking space or straight-line features in the image; these are not accurate enough and cannot adapt to rapid scene changes. Others are based on deep learning: part of the parking space features are detected by one neural network, the corresponding regions are cropped, and the cropped regions are fed into another neural network to obtain classification and regression results.
Fig. 2 is a flowchart illustrating a deep neural network training method S2000 according to an embodiment of the present disclosure. According to a preferred embodiment of the present disclosure, the deep neural network described above is a deep neural network trained by the following steps:
S2002, preprocessing (gridding) each parking space image in the training data to obtain a parking space label for each image, the label including the parking space corner point positions, corner point pairing information, corner point directions, side line positions, occupancy state, and parking space type (six items in total);
S2004, encoding the parking space labels of the parking space images; and,
S2006, training the deep neural network at least on the encoded parking space labels of the parking space images to obtain the trained deep neural network.
There are a plurality of parking space images.
The parking space type may be an inclined parking space, a vertical parking space, or a parallel parking space.
This is a preferred embodiment of the training method for the deep neural network.
According to a preferred embodiment of the present disclosure, each encoded parking space image is input into the deep neural network to obtain an output (predicted value), the loss between the ground-truth value and the predicted value is calculated, and training ends when the loss value falls below a threshold (the loss-value threshold).
Fig. 3 is a schematic view of a parking space, and the corresponding parking space tag records the specific position, direction, and occupancy of the parking space.
In the above model training method, preferably, the images are uniformly scaled to a preset size (for example, 512 x 512) and input into the deep neural network for feature extraction, yielding the feature maps predicted by the network; the predicted feature maps have the same size as the feature maps obtained by the encoding processing. The loss value is then computed from the difference between the predicted feature maps and the encoded feature maps, the parameters of the deep neural network are optimized according to the loss value (a suitable optimizer may be chosen), and when the loss value drops to a reasonable threshold or within a reasonable threshold range, the trained feature extraction network model is obtained.
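As a non-limiting illustration of this training procedure, a minimal loop might look as follows, assuming a PyTorch model, a data loader yielding 512 x 512 images with their encoded label maps, and a plain MSE loss; the actual network structure, loss terms and optimizer settings are not specified by this sketch.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=50, loss_threshold=0.01, lr=1e-3, device="cuda"):
    """Sketch of the training loop: regress predicted feature maps against the
    encoded label maps and stop once the loss falls below a threshold."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()                         # difference to the encoded maps
    for epoch in range(epochs):
        running = 0.0
        for images, encoded_labels in loader:        # images pre-scaled to 512 x 512
            images = images.to(device)
            encoded_labels = encoded_labels.to(device)
            preds = model(images)                     # predicted feature maps
            loss = criterion(preds, encoded_labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            running += loss.item() * images.size(0)
        if running / len(loader.dataset) < loss_threshold:   # reasonable threshold reached
            break
    return model
```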
For the training method of the above embodiment, the encoding process is performed on the parking space tags of each parking space image, and preferably includes:
the thermodynamic diagram based on the center points of the parking space images is used for coding the parking space angular point pairing information, and the thermodynamic diagram based on the parking space angular points of the parking space images is used for coding the positions and the directions of the parking space angular points.
Fig. 4 is a schematic diagram of parking space corner point information encoding according to an embodiment of the present disclosure. Preferably, the whole picture is divided into 16 x 16 grid cells (or another number of cells); the cell containing the parking space center point is the center cell, and a thermodynamic-diagram region of preset width (for example, 3 x 3) is expanded around it. The corner point thermodynamic diagrams are processed in the same way, expanding around the cell containing the corner point.
According to a preferred embodiment of the present disclosure, the corner point pairing information is encoded as five feature-map layers recording the positions of the two corner points forming the parking space relative to the center point. In the first layer, the center of the thermodynamic-diagram region containing the parking space center point has the value 1, nearby positions are filled according to a two-dimensional Gaussian distribution, and the remaining values are 0. The second and third layers record the differences between the abscissa x and ordinate y of the center of the cell containing the parking space center point and those of corner point 1 (as shown in Fig. 4), the remaining values being 0. The fourth and fifth layers record the corresponding differences between the cell containing the parking space center point and corner point 2 (as shown in Fig. 4), the remaining values being 0.
According to a preferred embodiment of the present disclosure, the corner point position information is encoded as three feature-map layers recording the specific position of each parking space corner point. The first layer records whether a corner point lies in a given cell; if so, a thermodynamic-diagram region is expanded around that cell, the central value is 1, the remaining region is filled according to a two-dimensional Gaussian distribution, and the remaining values in the feature map are 0. The second and third layers record the differences between the abscissa x and ordinate y of the center of the cell containing the corner point and those of the actual corner position, the remaining values being 0.
According to a preferred embodiment of the present disclosure, the corner point direction is encoded as two feature-map layers recording the included angle between the parking space side line and the reference direction (preferably the horizontal direction of the parking space image). Preferably, the first layer records the sine of the included angle, the second layer records its cosine, and the remaining values in the feature maps are 0.
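As a non-limiting illustration, the corner point position code (heat-map layer plus x/y offset layers) and the direction code (sine/cosine layers) described above could be produced as follows; the grid size and heat-map width follow the example values, while the exact Gaussian used is an assumption of the sketch.

```python
import numpy as np

def encode_corner(corner_xy, direction_rad, img_size=512, grid=16, heat=3):
    """Sketch: 3-layer position code and 2-layer direction code for one corner."""
    cell = img_size / grid
    pos = np.zeros((3, grid, grid), dtype=np.float32)
    dir_code = np.zeros((2, grid, grid), dtype=np.float32)
    cx = min(int(corner_xy[0] // cell), grid - 1)   # grid cell containing the corner
    cy = min(int(corner_xy[1] // cell), grid - 1)

    # layer 0: heat-map region of width `heat` around the corner cell,
    # value 1 at the centre, neighbours filled with a 2-D Gaussian
    half = heat // 2
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            gx, gy = cx + dx, cy + dy
            if 0 <= gx < grid and 0 <= gy < grid:
                pos[0, gy, gx] = np.exp(-(dx * dx + dy * dy) / 2.0)

    # layers 1-2: offsets from the cell centre to the actual corner position
    pos[1, cy, cx] = corner_xy[0] - (cx + 0.5) * cell
    pos[2, cy, cx] = corner_xy[1] - (cy + 0.5) * cell

    # direction code: sine and cosine of the angle between the side line at this
    # corner and the horizontal direction of the image
    dir_code[0, cy, cx] = np.sin(direction_rad)
    dir_code[1, cy, cx] = np.cos(direction_rad)
    return pos, dir_code
```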
For each of the above embodiments, preferably, the encoding process of the parking space tag of each parking space image includes:
and coding the position of the parking space side line based on the center point of the grid passed by the parking space side line of each parking space image relative to the drop foot of the parking space side line.
Preferably, the parking space image is divided into 16 × 16 grids, wherein a schematic diagram in fig. 5 indicates whether a parking space side line passes through the grid, if the parking space side line passes through the grid, the parking space side line is set to 1, otherwise, the parking space side line is set to 0; the schematic diagram b in fig. 5 records the specific positions of the parking space side lines, fig. 6 is a schematic diagram for calculating the position codes of the parking space side lines, if a parking space side line passes through a certain square grid, a vertical line is drawn with the central point of the square grid as a starting point and the parking space side line as a finishing point to obtain a foot drop of the square grid, and two data records of the square grid are the difference between the central point of the square grid and the abscissa x and the ordinate y of the foot drop (i.e., the difference between the abscissa of the central point of the square grid and the abscissa of the foot drop, and the difference between the ordinate of the central point of the square grid and the ordinate of the foot drop).
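As a non-limiting illustration, the side line position code (a pass-through mask plus the offsets from each cell centre to the foot of the perpendicular) could be computed as in the sketch below. The cell-crossing test used here is a simplification assumed for the example, not the exact rasterization of the disclosure.

```python
import numpy as np

def encode_edge_line(p1, p2, img_size=512, grid=16):
    """Sketch: for each grid cell the line passes through, store a mask value of 1
    and the x/y offsets from the cell centre to the foot of the perpendicular."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    cell = img_size / grid
    code = np.zeros((3, grid, grid), dtype=np.float32)
    d = p2 - p1
    length2 = float(np.dot(d, d)) + 1e-9
    for gy in range(grid):
        for gx in range(grid):
            centre = np.array([(gx + 0.5) * cell, (gy + 0.5) * cell])
            t = np.clip(np.dot(centre - p1, d) / length2, 0.0, 1.0)
            foot = p1 + t * d                  # foot of the perpendicular on the segment
            # simplified test: the line is taken to pass through a cell if the
            # foot lies within half a cell of the cell centre
            if np.all(np.abs(foot - centre) <= cell / 2):
                code[0, gy, gx] = 1.0                    # mask layer
                code[1, gy, gx] = centre[0] - foot[0]    # x offset to the foot
                code[2, gy, gx] = centre[1] - foot[1]    # y offset to the foot
    return code
```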
As for the parking space detection processing method according to each of the above embodiments, preferably, the encoding processing of the parking space tag of each parking space image includes:
and (3) Encoding the parking space occupation state of each parking space image by using One-Hot Encoding (One-Hot Encoding).
For example, the encoding result of the parking space occupation state is a layer of feature map, the feature map records whether the parking space is occupied, if the parking space is occupied, the square inside the parking space of the feature map is set to be 1, and the rest squares of the feature map are set to be 0.
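As a non-limiting illustration, the occupancy layer can be written as follows; passing in the set of grid cells lying inside the parking space is a simplifying assumption of the sketch.

```python
import numpy as np

def encode_occupancy(slot_cells, occupied, grid=16):
    """Sketch: one feature-map layer in which the cells inside an occupied
    parking space are set to 1 and all other cells remain 0."""
    occ = np.zeros((grid, grid), dtype=np.float32)
    if occupied:
        for row, col in slot_cells:   # slot_cells: (row, col) cells inside the slot
            occ[row, col] = 1.0
    return occ
```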
For the parking space detection processing method S1000 according to each of the above embodiments, preferably, in S200, acquiring a parking space sideline based on the position of the parking space sideline includes:
the side line points given by the side line positions are clustered to obtain the side line points belonging to the same parking space side line, and linear regression is performed on the side line points belonging to the same side line to obtain the parking space side line.
Each parking space side line can be represented by two vertexes p1 and p2, recorded in the corresponding side line information; connecting the two vertexes yields the parking space side line, as shown in Fig. 7. Fig. 8 shows the result after clustering and linear regression.
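As a non-limiting illustration of this clustering and regression step, the sketch below groups the side line points with DBSCAN and fits each cluster with a least-squares line; the disclosure only requires some clustering method and linear regression, so the specific algorithm and parameters here are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def fit_edge_lines(edge_points, eps=20.0, min_samples=4):
    """Sketch: cluster side line points into individual side lines, fit each
    cluster with a line, and return the two end vertices (p1, p2) per line."""
    edge_points = np.asarray(edge_points, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(edge_points)
    lines = []
    for label in set(labels):
        if label == -1:                              # DBSCAN noise points
            continue
        pts = edge_points[labels == label]
        # simple least-squares fit y = a*x + b (a near-vertical line would need
        # the x = c form instead; omitted here for brevity)
        a, b = np.polyfit(pts[:, 0], pts[:, 1], deg=1)
        x1, x2 = pts[:, 0].min(), pts[:, 0].max()
        lines.append((np.array([x1, a * x1 + b]), np.array([x2, a * x2 + b])))
    return lines
```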
For the parking space detection processing method of each of the above embodiments, preferably, in S200, the obtaining of the missing detection angular point based on the position relationship between the parking space edge line and the parking space angular point includes:
S221, for the parking space corner points at all the corner point positions in the characteristic information, acquiring a first straight line formed by every three parking space corner points;
S222, calculating the distance L1 from each of the two side line vertexes of each parking space side line to the first straight line;
S223, if the distance from a side line vertex to the first straight line is smaller than or equal to the first threshold distance (for example, 16 pixels or less), executing step S224, otherwise executing step S225;
S224, taking the side line vertex whose distance to the first straight line is smaller than or equal to the first threshold distance as a missed corner point and adding it to the parking space corner points;
S225, calculating the distance L2 between each side line vertex of the parking space side line and each parking space corner point; if L2 is smaller than or equal to the second threshold distance (for example, 16 pixels or less), executing step S226, otherwise discarding the parking space side line;
S226, calculating the included angle (for example, by least squares) between the reference direction (preferably the horizontal direction of the image) and the straight line formed by the two side line vertexes of the parking space side line and the parking space corner point, and updating the angle of the side line relative to the corner point with this included angle.
Based on steps S221 to S226, the missed corner points are supplemented and the corrected parking space corner point information is obtained.
Fig. 9 is a schematic view of a missing detection corner (a corner in a box) according to an embodiment of the present disclosure.
Fig. 10 is a schematic view of a parking space corner point before correction, and fig. 11 is a schematic view of a parking space corner point after correction.
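A minimal Python sketch of steps S221 to S226 follows; the data layouts, helper names, and the 16-pixel thresholds written as defaults follow the examples above or are assumptions, and the disclosed method itself is defined by the steps above.

```python
import numpy as np

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the straight line through a and b."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    d1, d2 = b - a, p - a
    return abs(d1[0] * d2[1] - d1[1] * d2[0]) / (np.linalg.norm(d1) + 1e-9)

def supplement_missing_corners(corner_lines, sidelines, corners, thr1=16.0, thr2=16.0):
    """corner_lines: first straight lines from corner points, each as (a, b);
    sidelines: parking space side lines, each as two vertices (p1, p2);
    corners: detected parking space corner points.
    Returns missed corner points to add and per-corner side line angle updates."""
    missed, angle_updates = [], {}
    for p1, p2 in sidelines:
        for vertex in (p1, p2):
            # S222-S224: a side line vertex close to a first straight line is a missed corner
            if any(point_line_distance(vertex, a, b) <= thr1 for a, b in corner_lines):
                missed.append(np.asarray(vertex, float))
                continue
            # S225-S226: a vertex close to an existing corner refines that corner's side line angle
            dists = [float(np.linalg.norm(np.subtract(vertex, c))) for c in corners]
            if dists and min(dists) <= thr2:
                d = np.subtract(p2, p1).astype(float)
                angle = np.degrees(np.arctan2(d[1], d[0]))    # included angle vs. the horizontal
                angle_updates[int(np.argmin(dists))] = angle
    return missed, angle_updates
```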
For the parking space detection processing method S1000 according to each of the above embodiments, preferably, in S300, the first screening of the parking space edge points is performed based on the updated parking space corner position and the updated parking space corner direction, and the removing of the first abnormal type edge point includes:
and taking the parking space corner point position as a center and the straight line along the parking space corner point direction as a baseline, rotating the baseline by a preset angle (for example, 5 degrees) to each of its two sides to obtain two solid lines, and removing the side line points located outside the two solid lines as first abnormal type side line points.
Fig. 12 shows a schematic diagram of first screening of parking space boundary points by using corner positions and corner directions according to an embodiment of the present disclosure.
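The first screening can be sketched as follows; the data layout and function name are assumptions, and the 5-degree preset angle follows the example above.

```python
import numpy as np

def first_screening(edge_points, corner, corner_angle_deg, delta_deg=5.0):
    """Keep only side line points whose direction seen from the corner point lies
    within +/- delta_deg of the corner direction, i.e. between the two solid lines."""
    corner = np.asarray(corner, float)
    kept, removed = [], []
    for p in np.asarray(edge_points, float):
        v = p - corner
        angle = np.degrees(np.arctan2(v[1], v[0]))
        diff = (angle - corner_angle_deg + 180.0) % 360.0 - 180.0   # signed angle difference
        (kept if abs(diff) <= delta_deg else removed).append(p)
    return kept, removed          # removed points are the first abnormal type side line points
```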
For the parking space detection processing method S1000 according to each of the above embodiments, preferably, in S300, performing a second screening on the parking space edge points based on the position of the parking space edge point after the first abnormal type edge point is removed, so as to remove the second abnormal type edge point, includes:
and screening the parking space side line points for the second time based on the position dispersion degree of the parking space side line points after the first abnormal type side line points are removed so as to remove the second abnormal type side line points.
Because parking scenes are complex and variable, and parking space side lines may be blurred or partially occluded, the regression of parking space side line points often contains errors, and using these erroneous side line points for parking space recognition leads to erroneous results. In practice, the side line points in clear regions are distributed along a line, while the side line points in blurred or occluded regions are clustered together, as shown in fig. 13.
Fig. 13 shows a schematic diagram of secondary screening of edge points by using relative positions of edge points of a parking space according to an embodiment of the present disclosure.
For the parking space detection processing method S1000 according to each of the above embodiments, preferably, in S300, performing a second screening on the parking space edge points based on the position of the parking space edge point after the first abnormal type edge point is removed, so as to remove the second abnormal type edge point, includes:
and calculating, for each parking space side line point, the number of parking space side line points contained in a circular area centered on that point with a preset radius (for example, 10 pixels); if the number exceeds a threshold number (for example, 3), the circular area is set as a blurred area, and the side line points in the blurred area are removed as second abnormal type side line points.
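A minimal sketch of this density-based second screening, under the example values above (10-pixel radius, threshold number 3) and an assumed data layout:

```python
import numpy as np

def second_screening(edge_points, radius=10.0, max_neighbors=3):
    """A point whose circular neighborhood of the given radius contains more than
    max_neighbors other side line points lies in a blurred area and is removed."""
    pts = np.asarray(edge_points, float)
    if len(pts) == 0:
        return pts, pts
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    counts = (dists <= radius).sum(axis=1) - 1        # exclude the point itself
    keep = counts <= max_neighbors
    return pts[keep], pts[~keep]                      # kept points, second abnormal type points
```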
For the parking space detection processing method S1000 according to each of the above embodiments, preferably, in S400, obtaining two vertex positions of a parking space entrance line of a parking space based on the parking space corner pairing information to obtain a direction of the parking space entrance line includes:
S401, calculating the distance (L3) between each paired corner point and each unpaired corner point (namely, local corner point);
S402, if the distance between a paired corner point and an unpaired corner point is smaller than a third threshold distance (for example, 32 pixels), determining that the two corner points match each other; if both corner points of a group of paired corner points have matching unpaired corner points (local corner points), executing step S403; otherwise, discarding any paired corner point that has no matching unpaired corner point, that is, such a paired corner point is not used as a vertex of the parking space entrance line; and,
S403, replacing the positions of the paired corner points with the positions of their matching unpaired corner points, which are used as the two vertices of the parking space entrance line.
The paired corner points are parking space corner points output by the neural network with matching information; the unpaired corner points are local corner points obtained by supplementing missed detections, which have no matching information but have accurate position information.
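Steps S401 to S403 can be sketched as follows for one group of paired corner points; the function name and data layout are assumptions, and the 32-pixel default follows the example above.

```python
import numpy as np

def refine_entrance_vertices(paired_corners, local_corners, thr3=32.0):
    """paired_corners: one group of two paired corner points from the network;
    local_corners: unpaired (local) corner points from the missed-detection supplement.
    Returns the two entrance line vertices, or None if the group is discarded."""
    local = np.asarray(local_corners, float)
    refined = []
    for c in np.asarray(paired_corners, float):
        if len(local) == 0:
            return None                                # no local corner to match against
        d = np.linalg.norm(local - c, axis=1)
        if d.min() >= thr3:
            return None                                # no matching local corner: discard the group
        refined.append(local[int(np.argmin(d))])       # replace the paired corner's position
    return refined                                     # two vertices of the parking space entrance line
```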
According to a preferred embodiment of the present disclosure, in S400, obtaining the parking space type of the parking space based on the direction of the parking space entrance line and the direction of the parking space side line includes:
calculating the included angle between each of the two parking space side lines of the parking space and the parking space entrance line, and obtaining the parking space type based on the included angle.
Preferably, if the included angle is within a preset angle range (for example, 80 to 100 degrees), the parking space is determined as a vertical parking space or a parallel parking space, that is, a plurality of parking spaces arranged vertically or in parallel; if the included angle is not within the preset angle range, the parking space is determined as an oblique parking space.
Preferably, in S400, the parking space type of the parking space is obtained based on the direction of the parking space entrance line and the direction of the parking space sideline, including:
the parking space type of the parking space is obtained based on the distance between the two parking space angular points of the two vertexes serving as the parking space entrance line.
Preferably, the distance between the two parking space corner points serving as the two vertices of the parking space entrance line is calculated; if the distance is within a preset distance range (for example, between 105 and 196 pixels), the parking space is a vertical parking space (long entrance line), otherwise it is a parallel parking space (short entrance line).
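Combining the two criteria above, a minimal classification sketch follows; the thresholds reproduce the examples given, while the vertex-pair data layout and label strings are assumptions for illustration.

```python
import numpy as np

def classify_space_type(entrance, sideline,
                        angle_range=(80.0, 100.0), length_range=(105.0, 196.0)):
    """entrance and sideline are each given as a pair of vertices (p1, p2)."""
    e = np.subtract(entrance[1], entrance[0]).astype(float)
    s = np.subtract(sideline[1], sideline[0]).astype(float)
    cos = np.dot(e, s) / (np.linalg.norm(e) * np.linalg.norm(s) + 1e-9)
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))   # included angle, 0..180 degrees
    if not (angle_range[0] <= angle <= angle_range[1]):
        return "oblique"                                     # slanted parking space
    if length_range[0] <= np.linalg.norm(e) <= length_range[1]:
        return "vertical"                                    # long entrance line
    return "parallel"                                        # short entrance line
```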
The present disclosure also provides a parking space detection processing device.
According to an embodiment of the present disclosure, the parking space detection processing apparatus 1000 includes:
the characteristic information extraction module 1002 (i.e., the characteristic extraction network model) obtains characteristic information of at least one parking space based on a parking space image, where the characteristic information at least includes a parking space corner position, parking space corner pairing information, a parking space corner direction, and a parking space side line position;
the angular point supplementing module 1004, wherein the angular point supplementing module 1004 acquires a parking space side line based on the position of the parking space side line, and acquires a missing detection angular point based on the position relation between the parking space side line and the parking space angular point so as to update the characteristic information;
a screening and correcting module 1006, wherein the screening and correcting module 1006 performs a first screening on the parking space side line points based on the updated parking space corner point position and parking space corner point direction, removes the first abnormal type side line points, performs a second screening on the parking space side line points based on the positions of the side line points after the first abnormal type side line points are removed, so as to remove the second abnormal type side line points, and corrects the parking space corner point direction based on the direction of each parking space side line from which the first and second abnormal type side line points have been removed, so as to update the characteristic information; and
the determining module 1008, the determining module 1008 obtains two vertex positions of the parking space entrance line of the parking space based on the parking space corner pairing information to obtain a direction of the parking space entrance line, and obtains a parking space type of the parking space based on the direction of the parking space entrance line and the direction of the parking space side line to update the characteristic information.
The parking space detection processing device 1000 can be implemented by a software architecture.
Fig. 15 shows a schematic diagram of a parking space detection processing device 1000 using a hardware implementation of the processing system.
The apparatus may include corresponding means for performing each or several of the steps of the flowcharts described above. Thus, each step or several steps in the above-described flow charts may be performed by a respective module, and the apparatus may comprise one or more of these modules. The modules may be one or more hardware modules specifically configured to perform the respective steps, or implemented by a processor configured to perform the respective steps, or stored within a computer-readable medium for implementation by a processor, or by some combination.
Referring to fig. 15, the hardware architecture may be implemented with a bus architecture. The bus architecture may include any number of interconnecting buses and bridges depending on the specific application of the hardware and the overall design constraints. The bus 1100 couples various circuits including the one or more processors 1200, the memory 1300, and/or the hardware modules together. The bus 1100 may also connect various other circuits 1400, such as peripherals, voltage regulators, power management circuits, external antennas, and the like.
The bus 1100 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only a single connection line is shown, but this does not mean that there is only one bus or only one type of bus.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present disclosure includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art. The processor performs the various methods and processes described above. For example, the method embodiments in the present disclosure may be implemented as a software program tangibly embodied in a machine-readable medium, such as a memory. In some embodiments, some or all of the software program may be loaded and/or installed via the memory and/or a communication interface. When the software program is loaded into the memory and executed by the processor, one or more steps of the method described above may be performed. Alternatively, in other embodiments, the processor may be configured to perform one of the methods described above by any other suitable means (e.g., by means of firmware).
The logic and/or steps represented in the flowcharts or otherwise described herein may be embodied in any readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
For the purposes of this description, a "readable storage medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the readable storage medium may even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in the memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, or a combination thereof. In the above embodiments, various steps or methods may be implemented in software stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The parking space detection processing device 1000 according to still another embodiment of the present disclosure includes:
the image acquisition and processing device is at least used for acquiring parking space images;
the characteristic information extraction module 1002, the characteristic information extraction module 1002 obtaining characteristic information of at least one parking space based on the parking space image, the characteristic information at least including a parking space angular point position, parking space angular point pairing information, a parking space angular point direction, and a parking space side line position;
the angular point supplementing module 1004, wherein the angular point supplementing module 1004 acquires a parking space side line based on the position of the parking space side line, and acquires a missing detection angular point based on the position relation between the parking space side line and the parking space angular point so as to update the characteristic information;
a screening and correcting module 1006, wherein the screening and correcting module 1006 performs a first screening on the parking space side line points based on the updated parking space corner point position and parking space corner point direction, removes the first abnormal type side line points, performs a second screening on the parking space side line points based on the positions of the side line points after the first abnormal type side line points are removed, so as to remove the second abnormal type side line points, and corrects the parking space corner point direction based on the direction of each parking space side line from which the first and second abnormal type side line points have been removed, so as to update the characteristic information; and
the determining module 1008, the determining module 1008 obtains two vertex positions of the parking space entrance line of the parking space based on the parking space corner pairing information to obtain a direction of the parking space entrance line, and obtains a parking space type of the parking space based on the direction of the parking space entrance line and the direction of the parking space side line to update the characteristic information.
Compared with the parking space detection processing devices of the above embodiments, the parking space detection processing device 1000 of the present embodiment further includes an image acquisition and processing device (for example, a fisheye camera).
The parking space detection processing device 1000 according to each of the above embodiments preferably further includes an output module 1010, with reference to fig. 16, and the output module 1010 outputs the feature information after the update of the parking space or outputs a part of the feature information after the update.
According to a preferred embodiment of the present disclosure, the output module 1010 outputs the updated characteristic information of the parking space in the form of an image or outputs a part of the updated characteristic information.
For example, the output module 1010 outputs the position of the parking space corner point, the position of the parking space side line, the parking space occupation state and the parking space type in the feature information.
Fig. 14 is a schematic view of a parking space detection processing result according to an embodiment of the present disclosure, which may be output through the output module 1010.
The disclosed parking space detection processing method/device is particularly suitable for parking space detection based on a top view obtained by stitching fisheye camera images. A neural network is trained with the encoding scheme of the present disclosure so that the trained network directly outputs the parking space feature information (parking space occupancy, parking space corner point information, parking space side line information, and the like). The missed-detection rate of occluded corner points and the false-detection rate of parking space side line points are reduced through mutual supplementation and correction between parking space corner points and side line points; the accuracy of the parking space direction is improved through identification of clear side line regions and weighted fitting; and the parking space type is judged from the length of the parking space entrance line and the angle between the side line and the entrance line, improving the accuracy of type judgment. Accurate detection of different scenes and different types of parking spaces is thereby achieved, forming a parking space detection processing method with high accuracy, strong robustness, and real-time detection capability.
In particular, aiming at the problem in the prior art that step-by-step information extraction makes the parking space information extraction process redundant and error-prone, the deep neural network in the parking space detection processing method/device of the present disclosure is trained with the disclosed parking space information encoding method, so that it directly outputs the parking space occupancy state, the corner point information (corner point pairing information, corner point position, and corner point direction), and the parking space side line information (side line point positions). The subsequent processing steps then directly give the position, direction, and type of the parking space and whether it is occupied, which reduces the information extraction steps and improves the efficiency of parking space detection.
Further, aiming at the problem that occluded parking space corner points lead to blurred spaces and corner points and to missed detection of parking spaces, the missed-detection corner point supplement method in the parking space detection processing method/device of the present disclosure supplements missed and occluded corner points using the relative position relationship between parking space side lines and parking space corner points, thereby improving the accuracy and recall rate of parking space detection.
Further, aiming at the problem that regression of the parking space side line direction is inaccurate when side lines in the image are blurred, incomplete, or distorted, the method for correcting the parking space side line direction in the parking space detection processing method/device of the present disclosure removes abnormal points from the parking space side line points through two rounds of screening and then corrects the parking space corner point angle with the remaining higher-confidence side line points, thereby improving the accuracy of the parking space detection angle regression.
Further, aiming at the problem of misjudgment of the parking space type, the parking space type judgment method in the parking space detection processing method/device of the present disclosure judges the parking space type from the angle between the parking space entrance line and the parking space side line and from the length of the parking space entrance line; the type can be judged from the corner point positions and directions alone, which reduces the steps of extracting parking space information and improves both the efficiency of the algorithm and the accuracy of the output parking space type.
The parking space detection processing method/device disclosed by the invention can adapt to most driving scenes, is applied to an automatic driving system, and can provide accurate parking space information for the automatic driving system.
The present disclosure also provides an electronic device, including: a memory storing execution instructions; and the processor or other hardware modules execute the execution instructions stored in the memory, so that the processor or other hardware modules execute the parking space detection processing method.
The disclosure also provides a readable storage medium, wherein the readable storage medium stores an execution instruction, and the execution instruction is used for realizing the parking space detection processing method when being executed by the processor.
In the description herein, reference to the description of the terms "one embodiment/implementation," "some embodiments/implementations," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment/implementation or example is included in at least one embodiment/implementation or example of the present application. In this specification, the schematic representations of the terms described above are not necessarily the same embodiment/mode or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments/modes or examples. Furthermore, the various embodiments/aspects or examples and features of the various embodiments/aspects or examples described in this specification can be combined and combined by one skilled in the art without conflicting therewith.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
It will be understood by those skilled in the art that the foregoing embodiments are merely for clarity of illustration of the disclosure and are not intended to limit the scope of the disclosure. Other variations or modifications may occur to those skilled in the art, based on the foregoing disclosure, and are still within the scope of the present disclosure.

Claims (10)

1. A parking space detection processing method is characterized by comprising the following steps:
s100, acquiring characteristic information of at least one parking space based on a parking space image, wherein the characteristic information at least comprises a parking space angular point position, parking space angular point pairing information, a parking space angular point direction and a parking space side line position;
s200, acquiring a parking space side line based on the position of the parking space side line, and acquiring a missing detection angular point based on the position relation between the parking space side line and the parking space angular point so as to update the characteristic information;
s300, carrying out first screening on the parking space side line points based on the updated parking space corner point positions and the parking space corner point directions, removing first abnormal type side line points, carrying out second screening on the parking space side line points based on the positions of the parking space side line points after the first abnormal type side line points are removed so as to remove second abnormal type side line points, and correcting the parking space corner point directions based on the directions of each parking space side line from which the first abnormal type side line points and the second abnormal type side line points are removed so as to update the characteristic information; and
s400, acquiring two vertex positions of a parking space entrance line of a parking space based on the parking space corner point pairing information to acquire the direction of the parking space entrance line, and acquiring the type of the parking space based on the direction of the parking space entrance line and the direction of a parking space side line to update the characteristic information;
wherein performing the first screening on the parking space side line points based on the updated parking space corner point positions and parking space corner point directions and removing the first abnormal type side line points comprises: taking the parking space corner point position as a center and the straight line along the parking space corner point direction as a baseline, rotating the baseline by a preset angle to each of its two sides to obtain two solid lines, and removing the side line points located outside the two solid lines as first abnormal type side line points;
wherein performing the second screening on the parking space side line points based on the positions of the parking space side line points after the first abnormal type side line points are removed, so as to remove the second abnormal type side line points, comprises: performing the second screening on the parking space side line points based on the position dispersion degree of the parking space side line points after the first abnormal type side line points are removed, so as to remove the second abnormal type side line points.
2. The parking space detection processing method according to claim 1, wherein the characteristic information further includes a parking space occupancy state.
3. The parking space detection processing method according to claim 1 or 2, wherein the obtaining of the characteristic information of at least one parking space based on the parking space image includes:
and extracting the characteristic information of the parking space image of the parking space by using a deep neural network so as to obtain the characteristic information of the parking space.
4. The parking space detection processing method according to claim 3, wherein the deep neural network is a trained deep neural network.
5. The parking space detection processing method according to claim 4, wherein the deep neural network is trained by the following steps:
preprocessing each parking space image in the training data to obtain a parking space label of each parking space image, wherein the parking space label comprises a parking space angular point position, parking space angular point pairing information, a parking space angular point direction, a parking space side line position, a parking space occupation state and a parking space type;
carrying out coding processing on the parking space labels of the parking space images; and
and training the deep neural network at least based on the parking stall labels of the parking stall images after the coding processing so as to obtain the trained deep neural network.
6. The parking space detection processing method according to claim 5, wherein the encoding processing of the parking space tags of the respective parking space images includes:
and encoding the parking space corner point pairing information based on heat maps of the parking space corner points of each parking space image, and encoding the parking space corner point positions and the parking space corner point directions.
7. A parking space detection processing device, characterized by comprising:
the characteristic information extraction module acquires characteristic information of at least one parking space based on a parking space image, wherein the characteristic information at least comprises a parking space angular point position, parking space angular point pairing information, a parking space angular point direction and a parking space side line position;
the angular point supplement module acquires a parking space side line based on the position of the parking space side line and acquires a missing detection angular point based on the position relation between the parking space side line and the parking space angular point so as to update the characteristic information;
the screening and correcting module is used for screening the parking space side line points for the first time based on the updated parking space corner point positions and the parking space corner point directions, removing first abnormal type side line points, screening the parking space side line points for the second time based on the positions of the parking space side line points after the first abnormal type side line points are removed so as to remove second abnormal type side line points, and correcting the parking space corner point directions based on the directions of each parking space side line from which the first abnormal type side line points and the second abnormal type side line points are removed so as to update the characteristic information; and
the judgment module acquires two vertex positions of a parking space entrance line of a parking space based on the parking space angular point pairing information so as to acquire the direction of the parking space entrance line, and acquires the parking space type of the parking space based on the direction of the parking space entrance line and the direction of a parking space side line so as to update the characteristic information;
wherein performing the first screening on the parking space side line points based on the updated parking space corner point positions and parking space corner point directions and removing the first abnormal type side line points comprises: taking the parking space corner point position as a center and the straight line along the parking space corner point direction as a baseline, rotating the baseline by a preset angle to each of its two sides to obtain two solid lines, and removing the side line points located outside the two solid lines as first abnormal type side line points;
wherein performing the second screening on the parking space side line points based on the positions of the parking space side line points after the first abnormal type side line points are removed, so as to remove the second abnormal type side line points, comprises: performing the second screening on the parking space side line points based on the position dispersion degree of the parking space side line points after the first abnormal type side line points are removed, so as to remove the second abnormal type side line points.
8. A parking space detection processing device, characterized by comprising:
the image acquisition and processing device is at least used for acquiring parking space images;
the characteristic information extraction module acquires characteristic information of at least one parking space based on a parking space image, wherein the characteristic information at least comprises a parking space angular point position, parking space angular point pairing information, a parking space angular point direction and a parking space side line position;
the angular point supplement module acquires a parking space side line based on the position of the parking space side line and acquires a missing detection angular point based on the position relation between the parking space side line and the parking space angular point so as to update the characteristic information;
the screening and correcting module is used for screening the parking space side line points for the first time based on the updated parking space corner point positions and the parking space corner point directions, removing first abnormal type side line points, screening the parking space side line points for the second time based on the positions of the parking space side line points after the first abnormal type side line points are removed so as to remove second abnormal type side line points, and correcting the parking space corner point directions based on the directions of each parking space side line from which the first abnormal type side line points and the second abnormal type side line points are removed so as to update the characteristic information; and
the judgment module acquires two vertex positions of a parking space entrance line of a parking space based on the parking space corner point pairing information so as to acquire the direction of the parking space entrance line, and acquires the parking space type of the parking space based on the direction of the parking space entrance line and the direction of a parking space side line so as to update the characteristic information; wherein performing the first screening on the parking space side line points based on the updated parking space corner point positions and parking space corner point directions and removing the first abnormal type side line points comprises: taking the parking space corner point position as a center and the straight line along the parking space corner point direction as a baseline, rotating the baseline by a preset angle to each of its two sides to obtain two solid lines, and removing the side line points located outside the two solid lines as first abnormal type side line points;
wherein performing the second screening on the parking space side line points based on the positions of the parking space side line points after the first abnormal type side line points are removed, so as to remove the second abnormal type side line points, comprises: performing the second screening on the parking space side line points based on the position dispersion degree of the parking space side line points after the first abnormal type side line points are removed, so as to remove the second abnormal type side line points.
9. An electronic device, comprising:
a memory storing execution instructions; and
a processor that executes execution instructions stored by the memory to cause the processor to perform the method of any of claims 1-6.
10. A readable storage medium having stored therein execution instructions, which when executed by a processor, are configured to implement the method of any one of claims 1 to 6.
CN202110929421.2A 2021-08-13 2021-08-13 Parking space detection processing method and device, electronic equipment and storage medium Active CN113822156B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110929421.2A CN113822156B (en) 2021-08-13 2021-08-13 Parking space detection processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113822156A CN113822156A (en) 2021-12-21
CN113822156B true CN113822156B (en) 2022-05-24

Family

ID=78922750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110929421.2A Active CN113822156B (en) 2021-08-13 2021-08-13 Parking space detection processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113822156B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114220188B (en) * 2021-12-27 2024-07-23 上海高德威智能交通系统有限公司 Parking space inspection method, device and equipment
CN114419924B (en) * 2022-03-28 2022-06-14 穗企通科技(广州)有限责任公司 AI application control management system based on wisdom city
CN114954479A (en) * 2022-06-01 2022-08-30 安徽蔚来智驾科技有限公司 Parking space entrance line determination method, computer equipment, storage medium and vehicle
CN115206130B (en) * 2022-07-12 2023-07-18 合众新能源汽车股份有限公司 Parking space detection method, system, terminal and storage medium
DE102022208405A1 (en) * 2022-08-12 2024-02-15 Continental Autonomous Mobility Germany GmbH Method for determining a parking space and a target position of a vehicle in the parking space

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063632A (en) * 2018-07-27 2018-12-21 重庆大学 A kind of parking position Feature Selection method based on binocular vision
CN109583392A (en) * 2018-12-05 2019-04-05 北京纵目安驰智能科技有限公司 A kind of method for detecting parking stalls, device and storage medium
CN109712427A (en) * 2019-01-03 2019-05-03 广州小鹏汽车科技有限公司 A kind of method for detecting parking stalls and device
WO2020019930A1 (en) * 2018-07-25 2020-01-30 广州小鹏汽车科技有限公司 Automatic parking method and device
CN112348817A (en) * 2021-01-08 2021-02-09 深圳佑驾创新科技有限公司 Parking space identification method and device, vehicle-mounted terminal and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109435942B (en) * 2018-10-31 2024-04-09 合肥工业大学 Information fusion-based parking space line and parking space recognition method and device
CN112016349B (en) * 2019-05-29 2024-06-11 北京市商汤科技开发有限公司 Parking space detection method and device and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Car Detection Based Algorithm For Automatic Parking Space Detection;Raj Patel 等;《2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA)》;20210223;第1418-1423页 *
基于环视视觉的自动泊车系统;吕雪杰;《中国优秀硕士学位论文全文数据库工程科技II辑》;20210315;第C035-173页 *

Also Published As

Publication number Publication date
CN113822156A (en) 2021-12-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant