CN114155740A - Parking space detection method, device and equipment - Google Patents

Parking space detection method, device and equipment

Info

Publication number
CN114155740A
CN114155740A (application number CN202111566301.7A)
Authority
CN
China
Prior art keywords
parking space
initial
corner
target
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111566301.7A
Other languages
Chinese (zh)
Inventor
薛宜明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Goldway Intelligent Transportation System Co Ltd
Original Assignee
Shanghai Goldway Intelligent Transportation System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Goldway Intelligent Transportation System Co Ltd filed Critical Shanghai Goldway Intelligent Transportation System Co Ltd
Priority to CN202111566301.7A priority Critical patent/CN114155740A/en
Publication of CN114155740A publication Critical patent/CN114155740A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/14: Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods

Abstract

The present application provides a parking space detection method, device, and equipment. The method includes: determining, based on a scene image sequence, corner point features of a plurality of initial corner points, where the corner point features include a position, a shape, and extension directions, and the shape is T-shaped or L-shaped; determining a lane line target direction and a parking space line target direction based on the extension directions of the T-shaped corner points or of the L-shaped corner points, and selecting target corner points from the plurality of initial corner points: for each initial corner point, if an extension direction of the initial corner point matches the lane line target direction and another extension direction matches the parking space line target direction, the initial corner point is selected as a target corner point; and determining target parking spaces corresponding to the parking space area based on the corner point features of the target corner points. With this technical solution, parking spaces can be detected automatically; in a roadside parking scenario, automatic detection remains possible even when the parking spaces are severely occluded.

Description

Parking space detection method, device and equipment
Technical Field
The present application relates to the field of intelligent transportation, and in particular to a parking space detection method, device, and equipment.
Background
With the continuous development of human society, cities will bear an ever-growing population. To achieve sustainable urban development and improve the overall competitiveness of cities, building smart cities is imperative. A smart city applies a new generation of information technology to manage production and daily life in a more refined and flexible way: sensors are embedded in or attached to facilities such as power supply, water supply, and traffic systems, and the resulting Internet of Things is linked with the Internet and integrated through computers, cloud computing, and the like, so that human society and physical systems become interconnected.
Intelligent traffic is an important component of a smart city, and its essence lies in vehicle management, such as managing vehicles in motion and vehicles that are parked. For moving vehicles, images of the vehicle can be collected by cameras and vehicle behavior analyzed from those images. For parked vehicles, such as vehicles in parking spaces, an operator can manually record when a vehicle enters and leaves a parking space and then charge for the parking. However, this approach requires the operator's participation, and vehicles cannot be charged when no operator is present.
To solve this problem, a camera can be deployed at the parking space area to collect images of the area, and the times at which a vehicle enters and leaves a parking space can be analyzed from those images so that the vehicle can be charged. An important prerequisite for this function is that all parking spaces of the parking space area are detected, since vehicle entry and exit are analyzed with respect to those spaces.
However, a vehicle entering or leaving may occlude the parking spaces, so that not all parking spaces in the area can be detected from the images collected by the camera; false detections and missed detections of parking spaces may then occur, making it impossible to analyze when a vehicle enters or leaves a parking space.
Disclosure of Invention
The present application provides a parking space detection method, the method including:
determining corner point features of a plurality of initial corner points of a parking space area based on a scene image sequence corresponding to the parking space area, where the corner point features include a position, a shape, and extension directions, and the shape is T-shaped or L-shaped;
determining a lane line target direction and a parking space line target direction corresponding to the parking space area based on the extension directions of the T-shaped corner points or of the L-shaped corner points, and selecting target corner points from the plurality of initial corner points: for each initial corner point, if an extension direction of the initial corner point matches the lane line target direction and another extension direction matches the parking space line target direction, selecting the initial corner point as a target corner point;
and determining target parking spaces corresponding to the parking space area based on the corner point features of the target corner points.
The present application provides a parking space detection device, the device including:
a determining module, configured to determine corner point features of a plurality of initial corner points of a parking space area based on a scene image sequence corresponding to the parking space area, where the corner point features include a position, a shape, and extension directions, and the shape is T-shaped or L-shaped; and to determine a lane line target direction and a parking space line target direction corresponding to the parking space area based on the extension directions of the T-shaped corner points or of the L-shaped corner points;
a selecting module, configured to select target corner points from the plurality of initial corner points: for each initial corner point, if an extension direction of the initial corner point matches the lane line target direction and another extension direction matches the parking space line target direction, selecting the initial corner point as a target corner point. The determining module is further configured to determine target parking spaces corresponding to the parking space area based on the corner point features of the target corner points.
The present application provides parking space detection equipment, including: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute the machine-executable instructions to implement the parking space detection method disclosed in the above examples of the present application.
According to the above technical solution, in the embodiments of the present application, corner point features of a plurality of initial corner points can be determined based on the scene image sequence corresponding to the parking space area, and the lane line target direction and the parking space line target direction can be determined from the extension directions of the T-shaped or L-shaped corner points. The initial corner points are then filtered using these two target directions, yielding accurate and reliable target corner points, so that accurate parking spaces are obtained when the target parking spaces are determined from the corner point features of the target corner points. Even though vehicles may occlude the parking spaces, all parking spaces in the area can still be detected, reducing false detections and missed detections, and the times at which a vehicle enters and leaves a parking space can be analyzed on the basis of these spaces. In this way, automatic parking space detection is achieved; in a roadside parking scenario it works even when the parking spaces are severely occluded, and the parking space lines are configured automatically, without manual configuration.
Drawings
For clearer illustration of the embodiments of the present application or of the prior art, the drawings needed in the description are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them.
Fig. 1 is a schematic flow chart of a parking space detection method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a parking space detection method according to an embodiment of the present application;
FIGS. 3A and 3B are schematic views of lane lines and parking space lines in one embodiment of the present application;
FIG. 3C is a schematic diagram of corner shapes and corner extension directions in one embodiment of the present application;
FIG. 4 is a schematic diagram of a corner detection model in an embodiment of the present application;
FIG. 5 is a schematic illustration of determining corner point features in one embodiment of the present application;
FIG. 6 is a schematic illustration of determining lane line and parking space line directions in one embodiment of the present application;
FIGS. 7A-7C are schematic views of a parking space in an embodiment of the present application;
fig. 8 is a schematic structural diagram of a parking space detection device according to an embodiment of the present application;
fig. 9 is a hardware configuration diagram of a parking space detection device according to an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Depending on the context, the word "if" as used may be interpreted as "upon", "when", or "in response to determining".
The embodiment of the present application provides a parking space detection method, which is shown in fig. 1 and may include:
step 101, determining corner features of a plurality of initial corners of a parking space area based on a scene image sequence corresponding to the parking space area. For example, for each initial corner point, the corner point features of the initial corner point may include, but are not limited to: the position, shape and extending direction of the corner point are T-shaped or L-shaped, the corner point with the T-shaped shape can be called as a T-shaped corner point, and the corner point with the L-shaped shape can be called as an L-shaped corner point.
For example, determining the corner point features of the initial corner points of the parking space area based on the scene image sequence may include, but is not limited to, the following. If the scene image sequence includes M frames of scene images (M is a positive integer, i.e., there is at least one frame), each frame is input to a corner detection model, which outputs corner point features of predicted corner points together with a confidence for each feature. The corner point features of the plurality of initial corner points are then determined from the predicted corner points across the M frames: for each predicted corner point, if it corresponds to at least N of the M frames (N ≤ M), the predicted corner point is determined to be an initial corner point, and the corner point feature with the maximum confidence over those frames is taken as the feature of that initial corner point.
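The multi-frame voting rule above, keeping a predicted corner point only if it appears in at least N of the M frames and retaining the feature from the frame with the highest confidence, can be sketched as follows. This is an illustrative reading of the text, not the patent's implementation; the corner key (e.g. a quantized position) and the data layout are assumptions.

```python
from collections import defaultdict

def aggregate_corners(per_frame_predictions, n_min):
    """Keep a predicted corner as an initial corner if it appears in at least
    n_min frames; for each kept corner, retain the feature from the frame
    where the detection confidence was highest.

    per_frame_predictions: list with one entry per frame, each a dict mapping
    a corner key (e.g. a quantized position) to a (feature, confidence) pair.
    """
    counts = defaultdict(int)   # corner key -> number of frames it appears in
    best = {}                   # corner key -> (feature, confidence) with max confidence
    for frame in per_frame_predictions:
        for key, (feature, conf) in frame.items():
            counts[key] += 1
            if key not in best or conf > best[key][1]:
                best[key] = (feature, conf)
    # Initial corners: those seen in at least n_min frames.
    return {key: best[key][0] for key, cnt in counts.items() if cnt >= n_min}

# Usage: corner "a" is seen in 3 frames, corner "b" in only 1; with n_min=2
# only "a" survives, with the feature from its highest-confidence frame.
frames = [
    {"a": ("feat_a1", 0.5), "b": ("feat_b", 0.8)},
    {"a": ("feat_a2", 0.9)},
    {"a": ("feat_a3", 0.7)},
]
initial = aggregate_corners(frames, n_min=2)
```

Matching predicted corner points across frames by a quantized position is one plausible choice; the patent only says a predicted corner "corresponds to" multiple frames.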
In a possible implementation, determining the corner point features of the plurality of initial corner points of the parking space area may be triggered in any of the following ways: when a parking space detection command for the parking space area is received, the corner point features are determined based on the scene image sequence, i.e., step 101 and the subsequent steps are executed; or when a vehicle is detected entering the parking space area; or when a vehicle is detected leaving the parking space area.
Step 102, determining a lane line target direction and a parking space line target direction corresponding to the parking space area based on the extension directions of the T-shaped corner points or of the L-shaped corner points.
For example, determining the lane line target direction and the parking space line target direction corresponding to the parking space area may include, but is not limited to, the following. If the number of T-shaped initial corner points is at least 1 and is at least half the number of L-shaped initial corner points, the two target directions are determined based on the extension directions of the T-shaped corner points. Otherwise, if the number of T-shaped initial corner points is 0 or less than half the number of L-shaped initial corner points, and the number of L-shaped initial corner points is at least 2, the two target directions are determined based on the extension directions of the L-shaped corner points.
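The choice between T-shaped and L-shaped corner points can be written down directly. The function below is a sketch of the stated conditions only; the `None` fallback for the case where neither condition holds is an assumption not covered by the text.

```python
def choose_direction_source(num_t, num_l):
    """Decide whether the lane line / parking space line target directions are
    derived from T-shaped or L-shaped corner points, following step 102.

    num_t: number of T-shaped initial corner points
    num_l: number of L-shaped initial corner points
    """
    # T-shaped corners are preferred when there is at least one and they are
    # not heavily outnumbered (at least half as many as L-shaped corners).
    if num_t >= 1 and num_t >= num_l / 2:
        return "T"
    # Otherwise fall back to L-shaped corners when at least two exist.
    if num_l >= 2:
        return "L"
    return None  # too few corner points to determine the target directions
```

For instance, with 3 T-shaped and 4 L-shaped corner points the T-shaped branch applies; with 1 T-shaped and 4 L-shaped, the L-shaped branch applies.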
Step 103, selecting target corner points from the plurality of initial corner points, for example based on the lane line target direction and the parking space line target direction. Illustratively, for each initial corner point, if an extension direction of the initial corner point matches the lane line target direction and another extension direction matches the parking space line target direction, the initial corner point is selected as a target corner point; otherwise (no extension direction matches the lane line target direction and/or no extension direction matches the parking space line target direction), the initial corner point is not selected as a target corner point.
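Step 103's filter, which keeps an initial corner point only when its extension directions match both the lane line target direction and the parking space line target direction, might look like the sketch below. The angle tolerance `tol_deg` and the dict layout of a corner point are illustrative assumptions; the patent does not specify how a direction "match" is measured.

```python
def angle_close(a, b, tol_deg=10.0):
    """True when two directions (degrees) agree within tol_deg, wrapping at 360."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d) <= tol_deg

def select_target_corners(initial_corners, lane_dir, slot_dir, tol_deg=10.0):
    """Keep a corner point only if at least one of its extension directions
    matches the lane line target direction AND at least one matches the
    parking space line target direction (step 103)."""
    selected = []
    for corner in initial_corners:
        dirs = corner["extension_dirs"]
        matches_lane = any(angle_close(d, lane_dir, tol_deg) for d in dirs)
        matches_slot = any(angle_close(d, slot_dir, tol_deg) for d in dirs)
        if matches_lane and matches_slot:
            selected.append(corner)
    return selected

# Usage: only the first corner point has directions matching both targets.
corners = [{"extension_dirs": [0.0, 90.0, 180.0]},   # T-shaped, aligned
           {"extension_dirs": [45.0, 135.0]}]        # L-shaped, misaligned
kept = select_target_corners(corners, lane_dir=0.0, slot_dir=90.0)
```

Requiring both matches is what discards spurious detections whose pattern does not agree with the area's overall lane and parking space line geometry.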
Step 104, determining target parking spaces corresponding to the parking space area based on the corner point features of the target corner points.
Exemplarily, determining the target parking spaces corresponding to the parking space area based on the corner point features of the target corner points includes, but is not limited to: determining K initial parking spaces corresponding to the parking space area from the corner point features of the target corner points, K being a positive integer; then, for each initial parking space: if it is determined to be a normal parking space, it is taken as a target parking space; if it is determined to be a falsely detected parking space, it is deleted; and if, based on the initial parking space, it is determined that a missed parking space exists in the parking space area, at least two target parking spaces are generated from the initial parking space. All target parking spaces obtained from the K initial parking spaces are then taken as the target parking spaces corresponding to the parking space area.
For example, for each initial parking space: if the initial parking space includes two adjacent L-shaped corner points and, based on their extension directions, the lane line directions of the two L-shaped corner points are opposite, the initial parking space is determined to be a falsely detected parking space. If the lane line length of the initial parking space is smaller than the lane line length of an adjacent parking space multiplied by a first coefficient (greater than 0 and smaller than 1, e.g., 0.5), the initial parking space is determined to be a falsely detected parking space. If the lane line length of the initial parking space is greater than the lane line length of an adjacent parking space multiplied by a second coefficient (greater than 1, e.g., 2), a missed parking space is determined to exist in the parking space area. If K is 1, i.e., only one initial parking space is determined, and its lane line length is greater than the scene image height multiplied by a third coefficient (at least 0.5 and less than 1, e.g., 0.5), a missed parking space is determined to exist in the parking space area.
If K is 1 and the lane line length of the single initial parking space is smaller than the scene image height multiplied by a fourth coefficient (greater than 0 and at most 0.5, e.g., 0.5), and the height of the upper or lower side of the initial parking space in the scene image is greater than its lane line length, a missed parking space is determined to exist in the parking space area.
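The lane-line-length heuristics above (too short relative to a neighbour suggests a false detection; too long suggests a missed space inside) can be sketched as a single check. The function below simply encodes the example coefficient values 0.5 and 2 from the text as defaults; the opposite-direction L-corner check and the single-space (K = 1) checks are omitted for brevity.

```python
def classify_slot(lane_len, adjacent_lane_len, c1=0.5, c2=2.0):
    """Classify an initial parking space against an adjacent one using the
    lane line length heuristics of the text. c1 and c2 default to the
    example values 0.5 and 2 given there.

    Returns "false_detection", "missed_neighbour", or "normal".
    """
    if lane_len < adjacent_lane_len * c1:
        return "false_detection"    # much shorter than its neighbour: spurious
    if lane_len > adjacent_lane_len * c2:
        return "missed_neighbour"   # much longer: a missed space likely lies inside
    return "normal"
```

A "missed_neighbour" result is what triggers generating at least two target parking spaces from the one initial parking space, as described in the previous paragraph.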
For each target parking space, the parking space line length corresponding to the target parking space may be determined as follows. If a vehicle is present in the target parking space and its width is known, the parking space line length is determined from the vehicle width and a configured first proportional relation, which represents the ratio between vehicle width and parking space line length. Otherwise, if there is no vehicle in the target parking space, or a vehicle is present but its width is unknown, the parking space line length is determined from the lane line length of the target parking space and a configured second proportional relation, which represents the ratio between lane line length and parking space line length.
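The two proportional relations for the parking space line length can be sketched as below. The ratio values `width_ratio` and `lane_ratio` are illustrative placeholders: the patent only says the relations are configured, not what their values are.

```python
def slot_line_length(vehicle_width=None, lane_len=None,
                     width_ratio=1.2, lane_ratio=0.4):
    """Compute a target parking space's parking space line length: prefer the
    vehicle-width proportional relation when a vehicle width is available,
    otherwise fall back to the lane-line proportional relation.
    The default ratio values are illustrative placeholders, not from the patent."""
    if vehicle_width is not None:
        return vehicle_width * width_ratio   # first proportional relation
    if lane_len is not None:
        return lane_len * lane_ratio         # second proportional relation
    raise ValueError("need either vehicle_width or lane_len")
```

Preferring the measured vehicle width makes sense because a parked vehicle directly constrains the space's width, whereas the lane line length is only an indirect cue.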
According to the above technical solution, in the embodiments of the present application, corner point features of a plurality of initial corner points can be determined based on the scene image sequence corresponding to the parking space area, and the lane line target direction and the parking space line target direction can be determined from the extension directions of the T-shaped or L-shaped corner points. The initial corner points are then filtered using these two target directions, yielding accurate and reliable target corner points, so that accurate parking spaces are obtained when the target parking spaces are determined from the corner point features of the target corner points. Even though vehicles may occlude the parking spaces, all parking spaces in the area can still be detected, reducing false detections and missed detections, and the times at which a vehicle enters and leaves a parking space can be analyzed on the basis of these spaces. In this way, automatic parking space detection is achieved; in a roadside parking scenario it works even when the parking spaces are severely occluded, and the parking space lines are configured automatically, without manual configuration.
The above technical solution of the embodiment of the present application is described below with reference to specific application scenarios.
A roadside parking management system may include front-end equipment (such as a camera) and management equipment. The front-end equipment collects scene images corresponding to the parking space area (images corresponding to the parking space area are referred to herein as scene images) and sends them to the management equipment, which automatically analyzes them to implement vehicle entry/exit detection, license plate recognition, vehicle query, and the like. To support these automatic analysis functions, parking space detection is also required, i.e., all parking spaces corresponding to the parking space area must be detected (for convenience, these are referred to herein simply as parking spaces).
To detect all parking spaces corresponding to the parking space area, in one possible implementation the spaces can be configured manually by a worker. In this mode, whenever the parking spaces change, all spaces corresponding to the area must be reconfigured; every change thus incurs high labor cost and poor user experience. In another possible implementation, all parking spaces corresponding to the area can be detected from scene images. In this mode, however, a vehicle entering or leaving may occlude the parking spaces, so that not all spaces can be accurately detected from the images, and false detections or missed detections of parking spaces may occur.
In view of the above, the embodiments of the present application provide an automatic parking space detection method based on scene images. Even when parking spaces are severely occluded in a roadside parking scenario, automatic detection remains possible; parking spaces are thus configured automatically, without manual configuration, and updated automatically.
The parking space detection method provided by the embodiments of the present application may be applied to the front-end equipment (e.g., the front-end equipment collects a scene image and then detects parking spaces based on it) or to the management equipment (e.g., the front-end equipment collects a scene image and sends it to the management equipment, which detects parking spaces based on it). Referring to fig. 2, a schematic flow chart of the parking space detection method is shown; the method may include:
step 201, obtaining a scene image sequence corresponding to a parking space area, where the scene image sequence may include M frames of scene images, M may be a positive integer, and for example, M may be a positive integer greater than 1.
Illustratively, when the parking spaces corresponding to a parking space area need to be acquired, i.e., when parking space detection is performed, M frames of scene images corresponding to the area can be collected to form a scene image sequence, and parking space detection is then performed on this sequence as described in the subsequent steps.
For example, because a parking space area in a roadside scene suffers from vehicle occlusion, a single scene image may not capture the entire area. M frames of scene images corresponding to the area are therefore obtained, so that the whole parking space area can be described across them and automatic parking space detection achieved.
In a possible embodiment, when a parking space detection command for the parking space area is received, indicating that parking space detection needs to be performed on the area, automatic parking space detection may be triggered, i.e., step 201 and the subsequent steps are executed. The parking space detection command may be input by a user, or may be generated actively when a certain trigger condition is satisfied, which is not limited herein.
In another possible embodiment, when it is detected that a vehicle enters the parking space area, the automatic parking space detection may be triggered, so that step 201 and subsequent steps are executed to implement the automatic parking space detection. In the embodiment, the automatic parking space detection is triggered by the fact that the vehicle enters the parking space area.
In another possible embodiment, when it is detected that a vehicle leaves the parking space area, the automatic parking space detection may be triggered, so that step 201 and subsequent steps are executed to implement the automatic parking space detection. In the embodiment, the automatic parking space detection is triggered by the vehicle leaving the parking space area.
In another possible embodiment, when it is detected that a vehicle enters the parking space area or a vehicle leaves the parking space area, the automatic parking space detection may be triggered, that is, step 201 and subsequent steps are executed, so as to implement the automatic parking space detection. Or, when a parking space detection command for the parking space area is received, the automatic parking space detection may be triggered, that is, step 201 and subsequent steps are executed, so as to implement the automatic parking space detection.
Step 202, determining corner point features of a plurality of initial corner points of the parking space area based on the scene image sequence corresponding to the parking space area. For each initial corner point, the corner point features may include, but are not limited to, the position, shape, and extension directions of the corner point. The shape is T-shaped or L-shaped: a corner point with a T shape is called a T-shaped corner point, and a corner point with an L shape is called an L-shaped corner point.
For example, for each initial corner point, the position corresponding to the corner point is the intersection of a lane line and a parking space line. In practice, both the lane line and the parking space line are linear markings with a certain width, so they intersect in a roughly square intersection region, and the center of this region may be taken as the position of the corner point. Referring to fig. 3A, lane lines and parking space lines of an actual scene are shown. Referring to fig. 3B, a schematic diagram of the intersection region of a lane line and a parking space line is shown: the lane line is the line at the edge of the lane, and the parking space line is a line inside the lane. The center of the intersection region is the position of the initial corner point, denoted (x, y), where x is the abscissa and y the ordinate of the corner point in the scene image.
For example, the corner points may be divided into two types, T-shaped and L-shaped, according to the shape of the corner point pattern, and the shape corresponding to each initial corner point may be T-shaped or L-shaped. Fig. 3C shows both types of corner points: the left side is a T-shaped corner point, and the right side is an L-shaped corner point.
For example, for each initial corner point, the direction of the initial corner point may be defined as all extending directions with the position of the initial corner point as a starting point. It is clear that for a T-shaped corner there are three directions of extension and for an L-shaped corner there are two directions of extension, see fig. 3C showing three directions of extension for a T-shaped corner and two directions of extension for an L-shaped corner.
In this embodiment, because the front-end device of the roadside parking management system is erected at a high position and is affected by passing vehicles, the parking spaces in a scene image acquired by the front-end device are severely occluded, so that many corner points in a single-frame scene image are invisible. However, the vehicles that cause the occlusion are driving in or driving out and do not stay long, so time sequence information can be exploited effectively: a final set of stable initial corner points is obtained from the statistical result of multi-frame scene images, that is, a plurality of initial corner points (i.e., the corner point features of each initial corner point) are determined based on M frames of scene images corresponding to the parking space region. Using the time sequence information in this way effectively reduces false detections and recovers corner points that a single-frame scene image would miss due to occlusion.
In a possible implementation manner, based on a scene image sequence (i.e., M frames of scene images) corresponding to a parking space region, the following steps may be adopted to determine corner features of a plurality of initial corners, and of course, the following manner is only an example, and is not limited thereto, as long as the corner features of the plurality of initial corners can be obtained.
Step 2021, inputting each frame of scene image to the corner detection model to obtain the corner features of the predicted corner and the confidence corresponding to the corner features. For example, the corner feature may include a position, a shape, and an extending direction, and the confidence corresponding to the corner feature may include a confidence corresponding to the position of the predicted corner, a confidence corresponding to the shape of the predicted corner, and a confidence corresponding to the extending direction of the predicted corner.
For example, a corner detection model may be trained in advance, the corner detection model may be a network model using a deep learning algorithm, or may be a network model using another algorithm, and the type of the corner detection model is not limited as long as the corner detection model can output corner features.
For example, the corner detection model includes 4 sub-networks. The 1st sub-network is configured to process a scene image to obtain a feature vector corresponding to the scene image, and the feature vector is input to the 2nd sub-network, the 3rd sub-network, and the 4th sub-network. The 2nd sub-network is used for processing the feature vector to obtain the position of the predicted corner point and the confidence corresponding to the position. The 3rd sub-network is used for processing the feature vector to obtain the shape of the predicted corner point and the confidence corresponding to the shape. The 4th sub-network is used for processing the feature vector to obtain the extending direction of the predicted corner point and the confidence corresponding to the extending direction.
For another example, the corner detection model includes 3 sub-networks, where the 1st sub-network is configured to process the scene image to obtain a feature vector corresponding to the scene image, and input the feature vector to the 2nd sub-network and the 3rd sub-network. The 2nd sub-network is used for processing the feature vector to obtain the position of the predicted corner point and the confidence corresponding to the position, as well as the shape of the predicted corner point and the confidence corresponding to the shape. The 3rd sub-network is used for processing the feature vector to obtain the extending direction of the predicted corner point and the confidence corresponding to the extending direction.
For another example, the corner detection model includes 3 sub-networks, where the 1st sub-network is configured to process the scene image to obtain a feature vector corresponding to the scene image, and input the feature vector to the 2nd sub-network and the 3rd sub-network. The 2nd sub-network is used for processing the feature vector to obtain the position of the predicted corner point and the confidence corresponding to the position. The 3rd sub-network is used for processing the feature vector to obtain the shape of the predicted corner point and the confidence corresponding to the shape, as well as the extending direction of the predicted corner point and the confidence corresponding to the extending direction.
Of course, the above are only examples, and the structure of the corner detection model is not limited as long as the corner detection model can output the position of the predicted corner and the confidence corresponding to the position, the shape of the predicted corner and the confidence corresponding to the shape, and the extending direction of the predicted corner and the confidence corresponding to the extending direction.
In a possible implementation manner, the structure of the corner detection model may be as shown in fig. 4, for example, the corner detection model may be a regression task model using a deep learning algorithm, that is, the corner detection task is regarded as a regression task, and the corner detection task is implemented using the regression task model, and the corner detection model can predict the positions, shapes, and extending directions of all visible corners in the input image.
Referring to fig. 4, the corner detection model may include a sub-network 1, a sub-network 2, and a sub-network 3. The scene image may be input to sub-network 1, which processes the scene image to obtain the feature vector corresponding to the scene image and inputs the feature vector to sub-networks 2 and 3. Each cell of the scene image (e.g., the image may be divided into m × n cells) may correspond to one feature vector, that is, sub-network 1 may obtain a plurality of feature vectors and input each feature vector to sub-network 2 and sub-network 3.
For each feature vector, sub-network 2 processes the feature vector to obtain the confidence that the feature vector corresponds to a corner point. If the confidence is not greater than a threshold, the cell corresponding to the feature vector is not a predicted corner point, and the processing of the feature vector ends. If the confidence is greater than the threshold, the cell corresponding to the feature vector is a predicted corner point, and the feature vector continues to be processed.
The sub-network 2 processes the feature vector to obtain the position (x, y) of the predicted corner point, wherein x represents the abscissa in the scene image, y represents the ordinate in the scene image, and the confidence corresponding to the position (x, y); and obtaining the shape (such as T-shape or L-shape) of the predicted corner point and the confidence corresponding to the shape.
The sub-network 3 processes the feature vector to obtain the extending directions of the predicted corner points (e.g. three extending directions of the T-shaped corner points and two extending directions of the L-shaped corner points) and the confidence degrees corresponding to the extending directions.
In summary, after the above-mentioned processing is performed on each feature vector, the positions, shapes, and extending directions of all the prediction corner points in the scene image can be output. After each frame of scene image is processed, the positions, shapes and extension directions of all the prediction corner points in each frame of scene image can be obtained.
Step 2022, determining corner features of a plurality of initial corners of the parking space region based on the corner features of the predicted corners corresponding to the M frames of scene images. For each predicted corner, if the predicted corner corresponds to at least N frames of scene images, and N is less than or equal to M, the predicted corner may be determined as an initial corner, and a corner feature with a maximum confidence corresponding to the predicted corner is determined as a corner feature of the initial corner.
For example, assume that M is 10 and N is 3: the scene image 1 corresponds to the prediction corner a1 and the prediction corner a2, the scene image 2 corresponds to the prediction corner a1 (when the position of a prediction corner in the scene image 2 is the same as or close to the position of a prediction corner in the scene image 1, the two are regarded as the same prediction corner) and the prediction corner a3, and so on, and the scene image 10 corresponds to the prediction corner a1 and the prediction corner a2. If the prediction corner a1 corresponds to 6 frames of scene images (that is, the prediction corners of each of those 6 frames include the prediction corner a1), the prediction corner a1 appears frequently enough, and the prediction corner a1 may be used as an initial corner. If the prediction corner a2 corresponds to only 1 frame of scene image, the prediction corner a2 appears rarely and may be a falsely detected prediction corner; therefore, the prediction corner a2 is not used as an initial corner, and so on.
In summary, for each prediction corner corresponding to all the scene images, the prediction corner is either taken as an initial corner or discarded, so as to obtain the plurality of initial corners corresponding to the parking space area.
For each predicted corner, if the predicted corner is used as an initial corner, the corner feature with the maximum confidence corresponding to the predicted corner may be determined as the corner feature of the initial corner. For example, taking the predicted corner a1 as an example, assume that the predicted corner a1 corresponds to 6 scene images, so the predicted corner a1 corresponds to 6 sets of corner features, each set including a position, a shape and an extension direction. The 6 sets of corner features carry the confidences corresponding to the 6 positions; the maximum confidence is selected from these, and the set of corner features corresponding to the maximum confidence is taken as the corner feature of the predicted corner a1, that is, the corner feature of the initial corner, which may include a position, a shape and an extension direction.
In summary, the corner feature of the plurality of initial corners of the parking space region can be obtained.
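The temporal aggregation of steps 2021 and 2022 can be sketched as follows. This is a minimal illustration, not the patented implementation: the class and function names, the position tolerance, and the angle representation are all assumptions, and the model inference itself is replaced by pre-computed per-frame detections.

```python
# Illustrative sketch of step 2022 (names are assumptions, not from the
# patent): predicted corners from M frames are grouped by position, corners
# seen in at least N frames become initial corners, and each initial corner
# keeps its highest-confidence feature set.
from dataclasses import dataclass

@dataclass
class CornerFeature:
    position: tuple      # (x, y) in image coordinates
    shape: str           # "T" or "L"
    directions: tuple    # extension directions (angles in degrees)
    confidence: float    # confidence of the predicted position

def same_corner(p, q, tol=5.0):
    """Two detections refer to one corner if their positions are close."""
    return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol

def aggregate_corners(per_frame_detections, n_min):
    """per_frame_detections: list (one entry per frame) of CornerFeature lists."""
    tracks = []  # each track: list of CornerFeature for one physical corner
    for frame in per_frame_detections:
        for det in frame:
            for track in tracks:
                if same_corner(track[0].position, det.position):
                    track.append(det)
                    break
            else:
                tracks.append([det])
    # keep corners seen in >= n_min frames; take the max-confidence feature
    return [max(t, key=lambda d: d.confidence) for t in tracks if len(t) >= n_min]
```

For instance, with 7 frames and N = 3, a corner detected in 6 frames survives with its highest-confidence feature, while a corner detected in only 1 frame is discarded as a likely false detection.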
In a possible embodiment, referring to fig. 5, based on a scene image sequence corresponding to a parking space region, the following steps may be adopted to determine corner features of a plurality of initial corners of the parking space region:
Step 511, acquiring the corner detection result of the current invocation. For example, the scene image is input to the corner detection model to obtain the corner features of the predicted corners and the confidences corresponding to those corner features. The detection result of the current invocation may include the corner feature of at least one predicted corner and the confidence corresponding to that corner feature.
Step 512, judging, for each predicted corner, whether the predicted corner matches the historical detection result. If not, step 513 may be performed; if so, step 514 may be performed. For example, if the position of the predicted corner is the same as or close to the position of some predicted corner in the historical detection result, the predicted corner matches the historical detection result; if the position of the predicted corner deviates significantly from the positions of all the predicted corners in the historical detection result, the predicted corner does not match the historical detection result.
Step 513, if the predicted corner does not match the historical detection result, adding the predicted corner to the historical detection result, that is, the predicted corner becomes part of the historical detection result and participates in subsequent comparisons.
Step 514, updating the corner feature of the predicted corner, and updating the stable call count of the predicted corner.
For example, it is assumed that the confidence degree corresponding to the position of the predicted corner in the history detection result is s1, and the confidence degree corresponding to the position of the predicted corner in the current calling corner detection result is s2, if the confidence degree s1 is greater than or equal to the confidence degree s2, the corner feature of the predicted corner in the history detection result may be retained, and if the confidence degree s1 is less than the confidence degree s2, the corner feature of the predicted corner in the current calling corner detection result may be updated to the history detection result, that is, the corner feature in the history detection result is replaced.
For example, if the predicted corner matches the historical detection result, the stable call count of the predicted corner may be increased by 1, indicating that the cumulative number of occurrences of the predicted corner has increased by 1.
Step 515, judging whether the total number of invocations has reached M. If not, the processing of the M frames of scene images is not yet complete, so the method returns to step 511 and acquires the corner detection result of the next invocation based on the next frame of scene image. If so, the processing of the M frames of scene images is complete, and step 516 is performed.
Step 516, outputting the stable corner information. For example, for each predicted corner in the historical detection result, if the stable call count of the predicted corner is greater than or equal to N, the predicted corner is an initial corner and the corner feature of the initial corner is output; if the stable call count of the predicted corner is less than N, the predicted corner is regarded as a falsely detected predicted corner and its corner feature is not output.
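The running-history bookkeeping of steps 511 to 516 can be sketched as follows. This is a hedged illustration: the class name, field layout and matching tolerance are assumptions, and the corner detection model is replaced by externally supplied detections.

```python
# Illustrative sketch of steps 511-516: each invocation's detections are
# matched against the history by position; matches bump a stable-call
# counter and keep the higher-confidence feature (step 514), non-matches
# open a new history entry (step 513).
class CornerHistory:
    def __init__(self, match_tol=5.0):
        # each entry: {"feature": (pos, shape, dirs, conf), "stable_calls": int}
        self.entries = []
        self.match_tol = match_tol

    def _matches(self, pos_a, pos_b):
        return (abs(pos_a[0] - pos_b[0]) <= self.match_tol and
                abs(pos_a[1] - pos_b[1]) <= self.match_tol)

    def update(self, detections):
        """detections: list of (position, shape, directions, confidence)."""
        for pos, shape, dirs, conf in detections:
            for entry in self.entries:
                if self._matches(entry["feature"][0], pos):
                    # step 514: keep the higher-confidence feature, count the call
                    if conf > entry["feature"][3]:
                        entry["feature"] = (pos, shape, dirs, conf)
                    entry["stable_calls"] += 1
                    break
            else:
                # step 513: an unmatched detection joins the history
                self.entries.append({"feature": (pos, shape, dirs, conf),
                                     "stable_calls": 1})

    def stable_corners(self, n_min):
        """Step 516: output corners whose stable call count reached n_min."""
        return [e["feature"] for e in self.entries if e["stable_calls"] >= n_min]
```

In this sketch a new entry starts with a count of 1 (its first appearance); after M invocations, `stable_corners(N)` returns the initial corners.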
And 203, determining a lane line target direction and a parking space line target direction corresponding to the parking space area based on the extension direction of the T-shaped angular point or the extension direction of the L-shaped angular point.
For example, the parking space line direction and the lane line direction in a roadside parking scene are fixed. For a T-shaped angular point, the parking space line direction and the lane line direction can be obtained from a single angular point, whereas for L-shaped angular points, at least two angular points are needed. On this basis, in this embodiment, one of the two corner types is selected according to the numbers of T-shaped and L-shaped angular points, and the lane line direction and the parking space line direction are obtained from corners of that type. Then, all the angular points can be verified in reverse using the predicted lane line direction and parking space line direction, some obvious false detections are deleted, and the preliminarily predicted parking spaces are output accordingly.
For example, referring to fig. 6, the following steps may be adopted to determine the lane line direction (denoted as the lane line target direction) and the parking space line direction (denoted as the parking space line target direction) corresponding to the parking space region:
step 601, calculating the number of T-shaped initial corner points and the number of L-shaped initial corner points.
Step 602, determining whether the number of T-shaped initial corner points is greater than or equal to 1, and the number of T-shaped initial corner points is greater than or equal to half of the number of L-shaped initial corner points. If yes, go to step 603, otherwise go to step 604.
And step 603, determining a lane line target direction and a parking space line target direction based on the extension direction of the T-shaped angular point.
For example, 3 extending directions exist in the T-shaped corner point, and the 3 extending directions are denoted as a direction 1, a direction 2, and a direction 3, and if the direction 1 and the direction 2 are two opposite directions, the direction 1 or the direction 2 may be taken as a lane line target direction, and the direction 3 may be taken as a parking space line target direction.
And step 604, judging whether the number of the L-shaped initial corner points is more than or equal to 2. If not, the parking space cannot be generated, the process is ended, and the subsequent processing is not carried out. If so, step 605 may be performed.
And step 605, determining a lane line target direction and a parking space line target direction based on the extension direction of the L-shaped angular point.
For example, an L-shaped corner point has 2 extending directions, so at least two L-shaped corner points are needed. Denote the 2 extending directions of the first L-shaped corner point as direction 11 and direction 12, and the 2 extending directions of the second L-shaped corner point as direction 21 and direction 22. On this basis, assuming that direction 11 and direction 21 are two opposite directions while direction 12 and direction 22 are the same direction, direction 11 or direction 21 may be taken as the lane line target direction, and direction 12 may be taken as the parking space line target direction.
In conclusion, the lane line target direction and the parking space line target direction can be obtained.
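The decision of fig. 6 can be illustrated with a small sketch. Directions are modeled here as angles in degrees, which is an assumption (the patent does not specify a representation), and the function names are illustrative: a single T-shaped corner yields both directions because its opposite pair lies along the lane line, while two L-shaped corners must oppose in one direction and share the other.

```python
# Illustrative sketch of steps 603 and 605 (angle representation and names
# are assumptions): derive the lane line and parking space line target
# directions from one T-shaped corner or from two L-shaped corners.
def opposite(a, b, tol=1e-6):
    """Two direction angles are opposite if they differ by 180 degrees."""
    return abs(abs(a - b) - 180.0) <= tol

def directions_from_t(dirs):
    """dirs: the 3 extension directions of one T-shaped corner.
    Returns (lane_line_dir, parking_line_dir)."""
    d1, d2, d3 = dirs
    if opposite(d1, d2):
        return d1, d3            # the opposite pair lies along the lane line
    if opposite(d1, d3):
        return d1, d2
    if opposite(d2, d3):
        return d2, d1
    raise ValueError("no opposite pair: not a valid T-shaped corner")

def directions_from_l(dirs_a, dirs_b):
    """dirs_a, dirs_b: the 2 extension directions of two L-shaped corners."""
    for da in dirs_a:
        for db in dirs_b:
            if opposite(da, db):
                # the remaining directions should coincide: the parking line
                rest_a = [d for d in dirs_a if d != da][0]
                rest_b = [d for d in dirs_b if d != db][0]
                if rest_a == rest_b:
                    return da, rest_a
    raise ValueError("corners do not share a parking space line direction")
```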
And 204, selecting a target angular point from the plurality of initial angular points based on the lane line target direction and the parking space line target direction. For each initial corner point, determining whether the initial corner point is a target corner point in the following way:
and if the extending direction corresponding to the initial angular point is matched with the target direction of the lane line and the extending direction corresponding to the initial angular point is matched with the target direction of the parking space line, selecting the initial angular point as a target angular point.
For example, for a T-shaped initial corner point, the T-shaped initial corner point corresponds to three extension directions, and if a certain extension direction is the same as a lane line target direction and another extension direction is the same as a parking space line target direction, the extension direction of the initial corner point is accurate, and the initial corner point can be selected as a target corner point.
And aiming at the L-shaped initial angular point, the L-shaped initial angular point corresponds to two extension directions, if one extension direction is the same as the target direction of the lane line, and the other extension direction is the same as the target direction of the parking space line, the extension direction of the initial angular point is accurate, and the initial angular point can be selected as the target angular point.
And if the extending direction corresponding to the initial angular point is not matched with the target direction of the lane line and/or the extending direction corresponding to the initial angular point is not matched with the target direction of the parking space line, not selecting the initial angular point as the target angular point.
For example, for a T-shaped initial corner point, the T-shaped initial corner point corresponds to three extension directions, and if each extension direction is different from a lane line target direction, or each extension direction is different from a parking space line target direction, the extension direction of the initial corner point is inaccurate, and the initial corner point is not selected as a target corner point.
And aiming at the L-shaped initial angular point, the L-shaped initial angular point corresponds to two extension directions, if each extension direction is different from the target direction of the lane line, or each extension direction is different from the target direction of the parking space line, the extension direction of the initial angular point is inaccurate, and the initial angular point is not selected as the target angular point.
Illustratively, after the above-mentioned processing is performed on each initial corner point, a target corner point can be obtained.
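The filtering of step 204 can be sketched as follows. This is a minimal illustration: the function name, the tuple layout of a corner, and the angle tolerance are assumptions.

```python
# Illustrative sketch of step 204: an initial corner is kept as a target
# corner only if one of its extension directions matches the lane line
# target direction and another matches the parking space line target
# direction; all other initial corners are discarded.
def select_target_corners(initial_corners, lane_dir, parking_dir, tol=5.0):
    """initial_corners: list of (position, shape, directions) tuples,
    with directions given as angles in degrees."""
    def close(a, b):
        # angles match if equal up to the tolerance (mod 360)
        return abs(a - b) <= tol or abs(abs(a - b) - 360.0) <= tol

    targets = []
    for pos, shape, dirs in initial_corners:
        lane_ok = any(close(d, lane_dir) for d in dirs)
        park_ok = any(close(d, parking_dir) for d in dirs)
        if lane_ok and park_ok:
            targets.append((pos, shape, dirs))
    return targets
```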
Step 205, determining K initial parking spaces corresponding to the parking space areas based on the angular point features of the target angular points, where K is a positive integer, that is, determining the initial parking spaces based on the angular point features of the target angular points.
For example, after the target angular points are obtained, that is, after the position, shape and extending direction of each target angular point are known, the initial parking spaces may be determined based on the position, shape and extending direction of each target angular point. For example, a T-shaped target corner point corresponds to 3 extending directions, so 3 lines along these extending directions are drawn from the position of the target corner point; these lines are a lane line and a parking space line. An L-shaped target corner point corresponds to 2 extending directions, so 2 lines along these extending directions are drawn from the position of the target corner point; these lines are a lane line and a parking space line. Referring to fig. 7A, 4 L-shaped target angular points are shown, and based on the positions, shapes and extending directions of the 4 L-shaped target angular points, the 2 initial parking spaces shown in fig. 7B can be obtained.
Of course, fig. 7A and 7B are only an example and are not limiting, as long as the initial parking spaces can be generated based on corner features such as the position, the shape and the extending direction of the target corner points.
And step 206, carrying out parking space verification on the K initial parking spaces to obtain target parking spaces corresponding to the parking space areas. For example, for each initial parking space, if it is determined that the initial parking space is a normal parking space, the initial parking space is directly determined as a target parking space; if the initial parking space is determined to be the false detection parking space, deleting the initial parking space; and if the missed parking space exists in the parking space area is determined based on the initial parking space, generating at least two target parking spaces based on the initial parking space. On this basis, the target parking spaces corresponding to the parking space areas can be determined based on all the target parking spaces corresponding to the K initial parking spaces, that is, all the target parking spaces corresponding to the K initial parking spaces are finally output.
Exemplarily, because of occlusion, some angular points are invisible throughout the detection process, which leads to missed parking spaces, while some stably false-detected angular points lead to falsely detected parking spaces in the final output; therefore, parking space verification can be performed on the initial parking spaces to obtain the target parking spaces. Because the front-end device is erected at a high position, the acquired image follows the perspective principle: nearer parking spaces appear larger and farther ones smaller, and the sizes of adjacent parking spaces keep a certain proportion. Based on this, the initial parking spaces can be verified, the falsely detected parking spaces deleted, and the missed parking spaces restored, so as to obtain the target parking spaces.
In one possible embodiment, the process of parking space verification may include, but is not limited to:
In case 1, for each initial parking space, if the initial parking space includes two adjacent L-shaped angular points, and it is determined based on the extending directions of the two L-shaped angular points that their lane line directions are opposite, it is determined that the initial parking space is a falsely detected parking space. For example, two L-shaped angular points that are adjacent in position may either form one parking space together or belong to two independent parking spaces. If their extending directions along the lane line point toward each other, the two L-shaped angular points may form one parking space; if their extending directions along the lane line point away from each other, the two L-shaped angular points are not associated and will each form a parking space with other angular points. Therefore, if the lane line directions of the two L-shaped angular points are opposite, the initial parking space formed by the two L-shaped angular points is a falsely detected parking space. In this case, the initial parking space may be deleted.
In case 2, for each initial parking space, if the lane line length of the initial parking space is less than the product of the lane line length of the adjacent parking space and a first coefficient value, it is determined that the initial parking space is a falsely detected parking space, where the first coefficient value may be greater than 0 and less than 1, for example, 0.5. If the lane line length of the initial parking space is greater than the product of the lane line length of the adjacent parking space and a second coefficient value, it is determined that a missed parking space exists in the parking space area, where the second coefficient value may be greater than 1, for example, 2. Obviously, in case 2, the falsely detected initial parking space may be deleted and the missed-detection initial parking space may be split according to the proportion of the parking space lengths (e.g., the lane line lengths).
For example, all the initial parking spaces are sorted from far to near, and according to prior knowledge of the perspective relation, the lane line length of successive initial parking spaces increases gradually at a fixed proportion. For each initial parking space, if its lane line length is greater than twice the lane line length of an adjacent initial parking space (that is, greater than the product of the adjacent parking space's lane line length and the second coefficient value), the initial parking space is judged to be a missed-detection parking space. In this case, the initial parking space may be split (e.g., at its middle position) to obtain two target parking spaces. For each initial parking space, if its lane line length is less than 0.5 times the lane line length of an adjacent initial parking space (that is, less than the product of the adjacent parking space's lane line length and the first coefficient value), the initial parking space is judged to be a falsely detected parking space. In this case, the initial parking space may be deleted. All the initial parking spaces are traversed recursively until they satisfy the proportional relation.
In case 3, if K is 1, that is, only one initial parking space is determined, and the lane line length of the initial parking space is greater than the product of the height of the scene image and a third coefficient value, it is determined that a missed parking space exists in the parking space area, where the third coefficient value is greater than or equal to 0.5 and less than 1, for example, 0.5.
For example, if only one initial parking space is detected, based on the prior knowledge that at least two parking spaces exist in one point location, if the initial parking space occupies more than 50% of the image size (that is, the length of the lane line of the initial parking space is greater than the product of the height of the scene image and the third coefficient value), it is determined that there is a missed parking space, and the initial parking space may be segmented (for example, segmented at the middle position of the initial parking space), so as to obtain two target parking spaces.
In case 4, if K is 1, that is, only one initial parking space is determined, the lane line length of the initial parking space is less than the product of the height of the scene image and a fourth coefficient value, and the height of the upper side or the lower side of the initial parking space in the scene image is greater than the lane line length of the initial parking space, it is determined that a missed parking space exists in the parking space area, where the fourth coefficient value is greater than 0 and less than or equal to 0.5, for example, 0.5.
For example, if only one initial parking space is detected, based on the prior knowledge that at least two parking spaces exist in one point location, the size of the space at two ends of the initial parking space along the lane line direction is determined, a new parking space is generated in the direction with a larger space, and the initial parking space and the new parking space are both used as target parking spaces, so that two target parking spaces are obtained. For example, if the direction of the larger space is the upper direction of the initial parking space, and the height of the upper side of the initial parking space is greater than the length of the lane line of the initial parking space, a new parking space is generated proportionally.
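The length checks of case 2 can be sketched as follows, using the coefficient values stated above (first coefficient 0.5, second coefficient 2); the helper names are illustrative, not from the patent.

```python
# Illustrative sketch of case 2: each initial space's lane line length is
# compared against its neighbor's to flag falsely detected spaces (too
# short: delete) and missed-detection spaces (too long: split in two).
def check_space(own_length, neighbor_length, c_false=0.5, c_missed=2.0):
    """Returns "false_detection", "missed_detection", or "normal"."""
    if own_length < neighbor_length * c_false:
        return "false_detection"        # too short: delete this space
    if own_length > neighbor_length * c_missed:
        return "missed_detection"       # too long: split into two spaces
    return "normal"

def split_space(space):
    """Split a missed-detection space at its midpoint along the lane line.
    space: (start, end) coordinates along the lane line direction."""
    start, end = space
    mid = (start + end) / 2.0
    return (start, mid), (mid, end)
```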
And step 207, generating an inner side boundary corresponding to each target parking space corresponding to the parking space region to obtain a target parking space with the inner side boundary, namely the final output target parking space.
For example, due to the severe occlusion characteristic of a roadside scene, the inner boundary of a parking space is generally invisible; as shown in fig. 7B, the target parking spaces do not yet have inner boundaries. In this case, the parking space line length corresponding to each target parking space may be determined, and after the parking space line length corresponding to the target parking space is obtained, the inner boundary corresponding to the target parking space may be generated; fig. 7C shows the target parking spaces with inner boundaries.
As noted above, once the parking space line length corresponding to a target parking space is obtained, the inner side boundary corresponding to that target parking space can be generated. In a possible implementation, the parking space line length may be determined in one of the following two modes:
Mode 1: if a vehicle exists in the target parking space and the width of the vehicle is known, the parking space line length corresponding to the target parking space is determined based on the vehicle width and a configured first proportional relationship, where the first proportional relationship represents the proportional relationship between the vehicle width and the parking space line length.
For example, if a vehicle segmentation result is available (that is, the vehicle width is known), then, based on the prior knowledge that a parking space is sized to completely accommodate its vehicle, combined with the perspective principle, the parking space and the vehicle maintain a fixed proportional relationship (the first proportional relationship) between their sizes at different positions. Therefore, while corners are detected over the time sequence, the segmentation results of vehicles at different positions are recorded, and an equation is constructed to calculate the parking space line length at each position. Since it can be distinguished whether a vehicle is inside a parking space, if the generated parking space line length cannot completely contain the vehicle's segmentation result, the length can be appropriately expanded.
Based on the principle, for each target parking space, the length of the parking space line corresponding to the target parking space can be determined based on the width of the vehicle in the target parking space and the first proportional relation, so that the width of the vehicle in the target parking space and the length of the parking space line corresponding to the target parking space meet the first proportional relation.
Mode 2: if no vehicle exists in the target parking space, or a vehicle exists in the target parking space but its width cannot be obtained (in both cases the vehicle width is unavailable), the parking space line length corresponding to the target parking space is determined based on the lane line length of the target parking space and a configured second proportional relationship, where the second proportional relationship represents the proportional relationship between the lane line length and the parking space line length.
For example, if no vehicle segmentation result is available (that is, the vehicle width is unknown), an optimal parking space line length equation (that is, an equation for the proportional relationship between the lane line length and the parking space line length) can be fitted by collecting statistics over a large number of roadside scene point locations (this statistics collection is performed before the scheme is deployed, and the collected data are stored), so that a default parking space line length is obtained for different parking spaces.
Based on the principle, aiming at each target parking space, the parking space line length corresponding to the target parking space can be determined based on the lane line length of the target parking space and the second proportional relation, so that the lane line length of the target parking space and the parking space line length corresponding to the target parking space meet the second proportional relation.
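The two modes above can be combined into one sketch. The ratio values and the expansion check here are illustrative assumptions; in practice the proportional relationships would be fitted per image position as described.

```python
# Illustrative combination of the two modes for determining the parking
# space line length. The ratio values are assumptions.

def parking_line_length(lane_len, vehicle_width=None,
                        first_ratio=1.25, second_ratio=0.5,
                        vehicle_extent=None):
    """Mode 1: from the vehicle width via the first proportional
    relationship, expanded if needed to contain the vehicle's
    segmentation extent. Mode 2: fall back to the lane line length
    via the second proportional relationship."""
    if vehicle_width is not None:
        length = vehicle_width * first_ratio
        if vehicle_extent is not None and length < vehicle_extent:
            length = vehicle_extent  # expand to fully contain the vehicle
        return length
    return lane_len * second_ratio
```

The expansion branch mirrors the note above that the generated length can be appropriately enlarged when it cannot fully contain the vehicle's segmentation result.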
As can be seen from the above technical solutions, in the embodiments of the present application, the corner features of a plurality of initial corners are determined based on a scene image sequence; a lane line target direction and a parking space line target direction are determined based on the extension direction of the T-shaped corners or the extension direction of the L-shaped corners; and the plurality of initial corners are then screened by the lane line target direction and the parking space line target direction to obtain accurate and reliable target corners. When the target parking spaces are determined based on the corner features of these target corners, an accurate picture of the parking spaces is obtained: even though vehicles may occlude the parking spaces, all parking spaces in the parking space area can be detected, and false detections and missed detections of parking spaces are reduced. Automatic parking space detection is thereby achieved; in a roadside parking scene, even if the parking spaces are severely occluded, the parking space lines can be configured automatically, without manual configuration. Corner detection can be performed directly on the original image (that is, the scene image) and the parking spaces recovered through post-processing, which makes the process simpler and more convenient. The corner information can be obtained with a deep learning model, so no parameters need to be designed manually and the model is more robust. Even if the parking space lines are severely occluded, a complete quadrilateral parking space area can be generated.
By exploiting the fact that the front-end device is fixedly mounted, temporal information is used effectively in the corner detection process: stable corners are obtained by combining multi-frame statistical results, which effectively reduces the influence of false detections on subsequent parking space generation. The generated parking spaces are checked by combining the perspective principle with the geometric relationships of parking space sizes, recovering missed parking spaces and deleting falsely detected ones. For the case where the road edge on the inner side of a parking space is invisible, the parking space line length is generated by combining the vehicle's segmentation result, so that the generated parking space effectively contains the vehicle and satisfies the product's subsequent processing logic.
Based on the same application concept as the method, an embodiment of the present application provides a parking space detection device, which is shown in fig. 8 and is a schematic structural diagram of the parking space detection device, and the parking space detection device may include:
the determining module 81 is configured to determine, based on a scene image sequence corresponding to a parking space region, corner features of a plurality of initial corners of the parking space region, where the corner features include a position, a shape, and an extending direction, and the shape is a T shape or an L shape; determining a lane line target direction and a parking space line target direction corresponding to the parking space area based on the extension direction of the T-shaped angular point or the extension direction of the L-shaped angular point;
a selecting module 82, configured to select a target corner point from the multiple initial corner points; for each initial corner point, if the extending direction corresponding to the initial corner point matches the lane line target direction and matches the parking space line target direction, the initial corner point is selected as a target corner point; otherwise, the initial corner point is not selected as a target corner point; the determining module 81 is further configured to determine a target parking space corresponding to the parking space area based on the corner feature of the target corner point.
For example, the determining module 81 is specifically configured to, based on a scene image sequence corresponding to a parking space region, determine corner features of a plurality of initial corners of the parking space region: if the scene image sequence comprises M frames of scene images, inputting the scene images to an angular point detection model aiming at each frame of scene images to obtain angular point characteristics of predicted angular points and confidence degrees corresponding to the angular point characteristics; determining the corner features of a plurality of initial corners of a parking space area based on the corner features of the prediction corners corresponding to the M frames of scene images; and for each prediction corner, if the prediction corner corresponds to at least N frames of scene images, determining the prediction corner as an initial corner, and determining the corner feature with the maximum confidence degree corresponding to the prediction corner as the corner feature of the initial corner.
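The multi-frame aggregation performed by the determining module might look like the following sketch. Associating detections across frames is abstracted into a `corner_id` key, which is an assumption; in practice predicted corners would be matched by position.

```python
from collections import defaultdict

# Illustrative multi-frame aggregation of predicted corners; names and
# the corner_id abstraction are assumptions.

def aggregate_corners(per_frame, n_min):
    """per_frame: one dict per frame, mapping corner_id to a
    (feature, confidence) pair. A predicted corner becomes an initial
    corner only if it appears in at least n_min frames; its feature is
    the one with the highest confidence across frames."""
    observations = defaultdict(list)
    for frame in per_frame:
        for cid, (feature, confidence) in frame.items():
            observations[cid].append((confidence, feature))
    initial = {}
    for cid, obs in observations.items():
        if len(obs) >= n_min:                 # at least N of the M frames
            _, best_feature = max(obs, key=lambda o: o[0])
            initial[cid] = best_feature
    return initial
```

With M = 3 frames and N = 2, a corner seen in only one frame is discarded, and a corner seen in all three keeps the feature from its most confident detection.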
For example, the determining module 81 is specifically configured to, based on the extending direction of the T-shaped angular point or the extending direction of the L-shaped angular point, determine a lane line target direction and a parking space line target direction corresponding to the parking space area: if the number of the T-shaped initial angular points is greater than or equal to 1 and the number of the T-shaped initial angular points is greater than or equal to half of the number of the L-shaped initial angular points, determining a lane line target direction and a parking space line target direction corresponding to the parking space area based on the extension direction of the T-shaped angular points; or if the number of the T-shaped initial angular points is less than 1 or the number of the T-shaped initial angular points is less than half of the number of the L-shaped initial angular points, and the number of the L-shaped initial angular points is greater than or equal to 2, determining a lane line target direction and a parking space line target direction corresponding to the parking space area based on the extension direction of the L-shaped angular points.
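The counting rule for choosing between T-shaped and L-shaped corners as the direction source can be expressed compactly; the function name and the None fallback are assumptions.

```python
# Compact form of the counting rule for choosing the direction source.

def direction_source(num_t, num_l):
    """Return which corner shape supplies the lane line and parking
    space line target directions, given the counts of T-shaped and
    L-shaped initial corners."""
    if num_t >= 1 and num_t >= num_l / 2:
        return "T"
    if num_l >= 2:  # here num_t < 1 or num_t < num_l / 2
        return "L"
    return None     # too few corners to determine the directions
```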
For example, the determining module 81 is specifically configured to, when determining the target parking space corresponding to the parking space area based on the corner feature of the target corner: determining K initial parking spaces corresponding to the parking space areas based on the angular point characteristics of the target angular points, wherein K is a positive integer; for each initial parking space, then: if the initial parking space is determined to be a normal parking space, determining the initial parking space as a target parking space; if the initial parking space is determined to be the false detection parking space, deleting the initial parking space; if it is determined that the missed parking space exists in the parking space area based on the initial parking space, generating at least two target parking spaces based on the initial parking space; and determining the target parking spaces corresponding to the parking space areas based on all the target parking spaces corresponding to the K initial parking spaces.
Illustratively, the determining module 81 is further configured to: if the initial parking space comprises two adjacent L-shaped angular points and the lane line direction of the two L-shaped angular points is determined to be opposite based on the extension direction of the two L-shaped angular points, determining that the initial parking space is a false detection parking space; if the lane line length of the initial parking space is smaller than the product of the lane line length of the adjacent parking space and a first coefficient value, determining that the initial parking space is a false detection parking space, wherein the first coefficient value is larger than 0 and smaller than 1; if the length of the lane line of the initial parking space is larger than the product of the length of the lane line of the adjacent parking space and a second coefficient value, determining that the missed parking space exists in the parking space area, wherein the second coefficient value is larger than 1; if K is 1 and the length of the lane line of the initial parking space is greater than the product of the height of the scene image and a third coefficient value, determining that a missed parking space exists in the parking space area, wherein the third coefficient value is greater than or equal to 0.5; and if K is 1, the length of the lane line of the initial parking space is less than the product of the height of the scene image and the fourth coefficient value, and the height of the upper side or the lower side of the initial parking space in the scene image is greater than the length of the lane line of the initial parking space, determining that the missed parking space exists in the parking space area, wherein the fourth coefficient value is less than or equal to 0.5.
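The coefficient-based checks above can be sketched as follows. The concrete coefficient values satisfy the stated constraints (the first in (0, 1), the second greater than 1, the third at least 0.5, the fourth at most 0.5) but are otherwise assumptions, as are the names; the check on two adjacent L-shaped corners with opposite lane line directions is omitted here.

```python
# Illustrative sketch of the coefficient-based false/missed-space checks.

def classify_space(space, neighbor_lane_len, k, image_height,
                   c1=0.5, c2=1.5, c3=0.5, c4=0.5):
    """space: dict with lane_len and top/bottom pixel coordinates.
    neighbor_lane_len: lane line length of the adjacent space, or None.
    k: number of initial parking spaces in the area."""
    lane = space["lane_len"]
    if neighbor_lane_len is not None:
        if lane < neighbor_lane_len * c1:
            return "false_detection"   # much shorter than its neighbor
        if lane > neighbor_lane_len * c2:
            return "missed_space"      # much longer than its neighbor
    if k == 1:
        if lane > image_height * c3:
            return "missed_space"      # single space spanning most of the image
        side_gap = max(space["top"], image_height - space["bottom"])
        if lane < image_height * c4 and side_gap > lane:
            return "missed_space"      # single short space with room beside it
    return "normal"
```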
Illustratively, the determining module 81 is further configured to: aiming at each target parking space, the length of a parking space line corresponding to the target parking space is determined in the following mode: if the target parking space is provided with the vehicle and the width of the vehicle is obtained, determining the length of a parking space line corresponding to the target parking space based on the width of the vehicle and the configured first proportional relation; the first proportional relation represents a proportional relation between the width of the vehicle and the length of the parking space line; if the vehicle does not exist in the target parking space, or the vehicle exists in the target parking space but the width of the vehicle is not known, determining the length of the parking space line corresponding to the target parking space based on the length of the lane line of the target parking space and a configured second proportional relation, wherein the second proportional relation represents the proportional relation between the length of the lane line and the length of the parking space line.
For example, the determining module 81 is specifically configured to, based on a scene image sequence corresponding to a parking space region, determine corner features of a plurality of initial corners of the parking space region: when a parking space detection command for the parking space area is received, determining angular point characteristics of a plurality of initial angular points of the parking space area based on a scene image sequence corresponding to the parking space area; or when a vehicle enters the parking space area, determining corner point characteristics of a plurality of initial corner points of the parking space area based on a scene image sequence corresponding to the parking space area; or when the parking space area is detected to have the vehicle leave, determining the corner feature of the initial corners of the parking space area based on the scene image sequence corresponding to the parking space area.
Based on the same application concept as the method, the embodiment of the present application provides a parking space detection device, as shown in fig. 9, where the parking space detection device includes: a processor 91 and a machine-readable storage medium 92, the machine-readable storage medium 92 storing machine-executable instructions executable by the processor 91; the processor 91 is configured to execute machine executable instructions to implement the parking space detection method disclosed in the above example of the present application.
Based on the same application concept as the method, an embodiment of the present application further provides a machine-readable storage medium, where a plurality of computer instructions are stored on the machine-readable storage medium, and when the computer instructions are executed by a processor, the parking space detection method disclosed in the above example of the present application can be implemented.
The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A parking space detection method is characterized by comprising the following steps:
determining corner point features of a plurality of initial corner points of a parking space area based on a scene image sequence corresponding to the parking space area, wherein the corner point features comprise positions, shapes and extension directions, and the shapes are T-shaped or L-shaped;
determining a lane line target direction and a parking space line target direction corresponding to the parking space area based on the extension direction of the T-shaped angular point or the extension direction of the L-shaped angular point, and selecting a target angular point from the plurality of initial angular points; for each initial angular point, if the extending direction corresponding to the initial angular point is matched with the target direction of the lane line and the extending direction corresponding to the initial angular point is matched with the target direction of the parking line, selecting the initial angular point as a target angular point;
and determining a target parking space corresponding to the parking space area based on the corner characteristic of the target corner.
2. The method according to claim 1, wherein the determining corner features of a plurality of initial corners of a parking space region based on a scene image sequence corresponding to the parking space region comprises:
if the scene image sequence comprises M frames of scene images, inputting the scene images to a corner detection model aiming at each frame of scene images to obtain corner features of predicted corners and confidence degrees corresponding to the corner features;
determining the corner features of a plurality of initial corners of a parking space area based on the corner features of the prediction corners corresponding to the M frames of scene images; and for each prediction corner, if the prediction corner corresponds to at least N frames of scene images, and N is less than or equal to M, determining the prediction corner as an initial corner, and determining the corner feature with the maximum confidence degree corresponding to the prediction corner as the corner feature of the initial corner.
3. The method of claim 1,
the determining of the lane line target direction and the parking space line target direction corresponding to the parking space area based on the extending direction of the T-shaped angular point or the extending direction of the L-shaped angular point comprises the following steps:
if the number of the T-shaped initial angular points is greater than or equal to 1 and the number of the T-shaped initial angular points is greater than or equal to half of the number of the L-shaped initial angular points, determining a lane line target direction and a parking space line target direction corresponding to the parking space area based on the extension direction of the T-shaped angular points; or,
and if the number of the T-shaped initial angular points is less than 1 or the number of the T-shaped initial angular points is less than half of the number of the L-shaped initial angular points, and the number of the L-shaped initial angular points is more than or equal to 2, determining a lane line target direction and a parking space line target direction corresponding to the parking space area based on the extension direction of the L-shaped angular points.
4. The method according to claim 1, wherein the determining the target parking space corresponding to the parking space area based on the corner feature of the target corner point comprises:
determining K initial parking spaces corresponding to the parking space areas based on the angular point characteristics of the target angular points, wherein K is a positive integer; for each initial parking space, then:
if the initial parking space is determined to be a normal parking space, determining the initial parking space as a target parking space; if the initial parking space is determined to be the false detection parking space, deleting the initial parking space; if it is determined that the missed parking space exists in the parking space area based on the initial parking space, generating at least two target parking spaces based on the initial parking space;
and determining the target parking spaces corresponding to the parking space areas based on all the target parking spaces corresponding to the K initial parking spaces.
5. The method of claim 4, further comprising:
if the initial parking space comprises two adjacent L-shaped angular points and the lane line direction of the two L-shaped angular points is determined to be opposite based on the extension direction of the two L-shaped angular points, determining that the initial parking space is a false detection parking space;
if the lane line length of the initial parking space is smaller than the product of the lane line length of the adjacent parking space and a first coefficient value, determining that the initial parking space is a false detection parking space, wherein the first coefficient value is larger than 0 and smaller than 1;
if the length of the lane line of the initial parking space is larger than the product of the length of the lane line of the adjacent parking space and a second coefficient value, determining that the missed parking space exists in the parking space area, wherein the second coefficient value is larger than 1;
if K is 1 and the length of the lane line of the initial parking space is greater than the product of the height of the scene image and a third coefficient value, determining that a missed parking space exists in the parking space area, wherein the third coefficient value is greater than or equal to 0.5;
and if K is 1, the length of the lane line of the initial parking space is less than the product of the height of the scene image and the fourth coefficient value, and the height of the upper side or the lower side of the initial parking space in the scene image is greater than the length of the lane line of the initial parking space, determining that the missed parking space exists in the parking space area, wherein the fourth coefficient value is less than or equal to 0.5.
6. The method of claim 4, further comprising:
aiming at each target parking space, the length of a parking space line corresponding to the target parking space is determined in the following mode:
if the target parking space is provided with the vehicle and the width of the vehicle is obtained, determining the length of a parking space line corresponding to the target parking space based on the width of the vehicle and the configured first proportional relation; the first proportional relation represents the proportional relation between the width of the vehicle and the length of the parking space line;
if the vehicle does not exist in the target parking space, or the vehicle exists in the target parking space but the width of the vehicle is not known, determining the length of the parking space line corresponding to the target parking space based on the length of the lane line of the target parking space and a configured second proportional relation, wherein the second proportional relation represents the proportional relation between the length of the lane line and the length of the parking space line.
7. The method according to claim 1, wherein the determining corner features of a plurality of initial corners of a parking space region based on a scene image sequence corresponding to the parking space region comprises:
when a parking space detection command for the parking space area is received, determining angular point characteristics of a plurality of initial angular points of the parking space area based on a scene image sequence corresponding to the parking space area; or,
when the fact that vehicles enter the parking space area is detected, determining corner point characteristics of a plurality of initial corner points of the parking space area based on a scene image sequence corresponding to the parking space area; or,
when the fact that the vehicle leaves the parking space area is detected, determining corner point features of a plurality of initial corner points of the parking space area based on a scene image sequence corresponding to the parking space area.
8. The utility model provides a parking stall detection device which characterized in that, the device includes:
the system comprises a determining module, a judging module and a judging module, wherein the determining module is used for determining corner point characteristics of a plurality of initial corner points of a parking space area based on a scene image sequence corresponding to the parking space area, the corner point characteristics comprise positions, shapes and extending directions, and the shapes are T-shaped or L-shaped; determining a lane line target direction and a parking space line target direction corresponding to the parking space area based on the extension direction of the T-shaped angular point or the extension direction of the L-shaped angular point;
a selecting module, configured to select a target corner from the multiple initial corners; for each initial angular point, if the extending direction corresponding to the initial angular point is matched with the target direction of the lane line and the extending direction corresponding to the initial angular point is matched with the target direction of the parking line, selecting the initial angular point as a target angular point;
the determining module is further configured to determine a target parking space corresponding to the parking space area based on the corner feature of the target corner.
9. The apparatus according to claim 8, wherein the determining module is configured to determine, based on the sequence of scene images corresponding to the parking space region, the corner feature of the initial corners of the parking space region, specifically: if the scene image sequence comprises M frames of scene images, inputting the scene images to a corner detection model aiming at each frame of scene images to obtain corner features of predicted corners and confidence degrees corresponding to the corner features; determining the corner features of a plurality of initial corners of a parking space area based on the corner features of the prediction corners corresponding to the M frames of scene images; for each prediction corner, if the prediction corner corresponds to at least N frames of scene images, and N is less than or equal to M, determining the prediction corner as an initial corner, and determining the corner feature with the maximum confidence degree corresponding to the prediction corner as the corner feature of the initial corner;
the determining module is specifically configured to determine a lane line target direction and a parking space line target direction corresponding to the parking space region based on an extending direction of the T-shaped angular point or an extending direction of the L-shaped angular point: if the number of the T-shaped initial angular points is greater than or equal to 1 and the number of the T-shaped initial angular points is greater than or equal to half of the number of the L-shaped initial angular points, determining a lane line target direction and a parking space line target direction corresponding to the parking space area based on the extension direction of the T-shaped angular points; or if the number of the T-shaped initial angular points is less than 1 or the number of the T-shaped initial angular points is less than half of the number of the L-shaped initial angular points, and the number of the L-shaped initial angular points is greater than or equal to 2, determining a lane line target direction and a parking space line target direction corresponding to the parking space area based on the extension direction of the L-shaped angular points;
the determining module is specifically configured to, when determining the target parking space corresponding to the parking space region based on the corner features of the target corners: determine K initial parking spaces corresponding to the parking space region based on the corner features of the target corners, wherein K is a positive integer; for each initial parking space: if the initial parking space is determined to be a normal parking space, determine the initial parking space as a target parking space; if the initial parking space is determined to be a falsely detected parking space, delete the initial parking space; if it is determined, based on the initial parking space, that a missed parking space exists in the parking space region, generate at least two target parking spaces based on the initial parking space; and determine the target parking spaces corresponding to the parking space region based on all target parking spaces corresponding to the K initial parking spaces;
wherein the determining module is further configured to: if the initial parking space comprises two adjacent L-shaped corner points and the lane line directions of the two L-shaped corner points are determined to be opposite based on their extension directions, determine that the initial parking space is a falsely detected parking space; if the lane line length of the initial parking space is less than the product of the lane line length of an adjacent parking space and a first coefficient value, determine that the initial parking space is a falsely detected parking space, wherein the first coefficient value is greater than 0 and less than 1; if the lane line length of the initial parking space is greater than the product of the lane line length of an adjacent parking space and a second coefficient value, determine that a missed parking space exists in the parking space region, wherein the second coefficient value is greater than 1; if K is 1 and the lane line length of the initial parking space is greater than the product of the height of the scene image and a third coefficient value, determine that a missed parking space exists in the parking space region, wherein the third coefficient value is greater than or equal to 0.5; if K is 1, the lane line length of the initial parking space is less than the product of the height of the scene image and a fourth coefficient value, and the height of the upper side or the lower side of the initial parking space in the scene image is greater than the lane line length of the initial parking space, determine that a missed parking space exists in the parking space region, wherein the fourth coefficient value is less than or equal to 0.5;
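The length-based plausibility checks in this claim can be sketched as one classifier per initial parking space. The coefficient defaults below (c1 = 0.5, c2 = 1.5, c3 = 0.5, c4 = 0.5) are hypothetical, since the patent only constrains their ranges, and the adjacent-L-shaped-corner direction check is omitted:

```python
def classify_initial_space(lane_len, neighbor_lane_len, image_h, k,
                           side_height=None,
                           c1=0.5, c2=1.5, c3=0.5, c4=0.5):
    """Classify one initial parking space by the claim's length rules.

    lane_len: lane line length of this initial space.
    neighbor_lane_len: lane line length of an adjacent space, or None.
    image_h: scene image height; k: number of initial spaces found.
    side_height: height of the space's upper/lower side in the image.
    Returns 'false_detection', 'missed_space', or 'normal'.
    """
    if neighbor_lane_len is not None:
        if lane_len < neighbor_lane_len * c1:
            return "false_detection"   # suspiciously short lane line
        if lane_len > neighbor_lane_len * c2:
            return "missed_space"      # likely spans two real spaces
    if k == 1:
        if lane_len > image_h * c3:
            return "missed_space"
        if (lane_len < image_h * c4 and side_height is not None
                and side_height > lane_len):
            return "missed_space"
    return "normal"
```

A space half as long as its neighbour is dropped as a false detection; a single detected space covering most of the image height is split into at least two target spaces downstream.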
wherein the determining module is further configured to determine, for each target parking space, the parking space line length corresponding to the target parking space as follows: if a vehicle is parked in the target parking space and the width of the vehicle is known, determine the parking space line length corresponding to the target parking space based on the width of the vehicle and a configured first proportional relation, wherein the first proportional relation represents the proportional relation between the vehicle width and the parking space line length; if no vehicle is parked in the target parking space, or a vehicle is parked but its width is unknown, determine the parking space line length corresponding to the target parking space based on the lane line length of the target parking space and a configured second proportional relation, wherein the second proportional relation represents the proportional relation between the lane line length and the parking space line length;
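The two-branch fallback above is a simple computation; the ratio values here (2.5 space-line lengths per vehicle width, 0.5 per lane-line length) stand in for the claim's configured proportional relations and would be calibrated per deployment:

```python
def space_line_length(lane_len, vehicle_width=None,
                      width_ratio=2.5, lane_ratio=0.5):
    """Estimate the parking-space-line length for one target space."""
    if vehicle_width is not None:
        # A vehicle is parked and its width is known: apply the first
        # (vehicle width -> space line length) proportional relation.
        return vehicle_width * width_ratio
    # No vehicle, or vehicle width unknown: fall back to the second
    # (lane line length -> space line length) proportional relation.
    return lane_len * lane_ratio
```

Using a parked vehicle's width when available makes the estimate robust to partially occluded lane lines, which is presumably why the claim prefers the first relation.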
the determining module is specifically configured to, when determining the corner features of the plurality of initial corners of the parking space region based on the scene image sequence corresponding to the parking space region: when a parking space detection command for the parking space region is received, determine the corner features of a plurality of initial corners of the parking space region based on the scene image sequence corresponding to the parking space region; or, when it is detected that a vehicle enters the parking space region, determine the corner features of a plurality of initial corners of the parking space region based on the scene image sequence corresponding to the parking space region; or, when it is detected that a vehicle leaves the parking space region, determine the corner features of a plurality of initial corners of the parking space region based on the scene image sequence corresponding to the parking space region.
10. A parking space detection device, characterized by comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; wherein the processor is configured to execute the machine-executable instructions to perform the method steps of any one of claims 1-7.
CN202111566301.7A 2021-12-20 2021-12-20 Parking space detection method, device and equipment Pending CN114155740A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111566301.7A CN114155740A (en) 2021-12-20 2021-12-20 Parking space detection method, device and equipment

Publications (1)

Publication Number Publication Date
CN114155740A true CN114155740A (en) 2022-03-08

Family

ID=80452001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111566301.7A Pending CN114155740A (en) 2021-12-20 2021-12-20 Parking space detection method, device and equipment

Country Status (1)

Country Link
CN (1) CN114155740A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130266187A1 (en) * 2012-04-06 2013-10-10 Xerox Corporation Video-based method for parking angle violation detection
CN108875911A (en) * 2018-05-25 2018-11-23 同济大学 One kind is parked position detecting method
CN113468991A (en) * 2021-06-21 2021-10-01 沈阳工业大学 Parking space detection method based on panoramic video
CN113762272A (en) * 2021-09-10 2021-12-07 北京精英路通科技有限公司 Road information determination method and device and electronic equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115206130A (en) * 2022-07-12 2022-10-18 合众新能源汽车有限公司 Parking space detection method, system, terminal and storage medium
CN115206130B (en) * 2022-07-12 2023-07-18 合众新能源汽车股份有限公司 Parking space detection method, system, terminal and storage medium
CN116625707A (en) * 2023-05-18 2023-08-22 襄阳达安汽车检测中心有限公司 APA test method, storage medium, electronic equipment and system

Similar Documents

Publication Publication Date Title
KR102155182B1 (en) Video recording method, server, system and storage medium
US8995714B2 (en) Information creation device for estimating object position and information creation method and program for estimating object position
CN108985162A (en) Object real-time tracking method, apparatus, computer equipment and storage medium
CN103093212B (en) The method and apparatus of facial image is intercepted based on Face detection and tracking
CN114155740A (en) Parking space detection method, device and equipment
CN110659658B (en) Target detection method and device
CN110991311A (en) Target detection method based on dense connection deep network
CN109902619B (en) Image closed loop detection method and system
CN111932596B (en) Method, device and equipment for detecting camera occlusion area and storage medium
CN113792586A (en) Vehicle accident detection method and device and electronic equipment
CN109034100B (en) Face pattern detection method, device, equipment and storage medium
CN110647818A (en) Identification method and device for shielding target object
CN113191318A (en) Target detection method and device, electronic equipment and storage medium
CN109102026A (en) A kind of vehicle image detection method, apparatus and system
CN111695627A (en) Road condition detection method and device, electronic equipment and readable storage medium
CN110008802B (en) Method and device for selecting target face from multiple faces and comparing face recognition
CN113435370B (en) Method and device for acquiring vehicle queuing length based on image feature fusion
CN113450575B (en) Management method and device for roadside parking
CN111724607B (en) Steering lamp use detection method and device, computer equipment and storage medium
CN112150538A (en) Method and device for determining vehicle pose in three-dimensional map construction process
CN113256683A (en) Target tracking method and related equipment
KR102260556B1 (en) Deep learning-based parking slot detection method and apparatus integrating global and local information
CN111291716B (en) Sperm cell identification method, sperm cell identification device, computer equipment and storage medium
CN110866484B (en) Driver face detection method, computer device and computer readable storage medium
WO2019228654A1 (en) Method for training a prediction system and system for sequence prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination