CN115222767B - Tracking method and system based on space parking space - Google Patents

Tracking method and system based on space parking space

Info

Publication number
CN115222767B
CN115222767B
Authority
CN
China
Prior art keywords
parking space
space
parking
candidate
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210379055.2A
Other languages
Chinese (zh)
Other versions
CN115222767A (en)
Inventor
钟力阳
何俏君
李梓龙
付颖
张志德
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Automobile Group Co Ltd
Original Assignee
Guangzhou Automobile Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Automobile Group Co Ltd
Priority to CN202210379055.2A
Publication of CN115222767A
Application granted
Publication of CN115222767B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30264Parking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a tracking method based on space parking spaces, which comprises the following steps: acquiring the information of each space parking space detected at the current moment and projecting it into a grid map of the vehicle; acquiring point cloud information around each space parking space; acquiring the parking space features corresponding to each space parking space based on its point cloud information; generating at least one candidate parking space in the vehicle grid map according to the parking space features corresponding to each space parking space; and performing feature matching between the parking space features of each candidate parking space and the parking space features of each detected parking space in the parking space tracking result obtained at the previous moment, and obtaining the parking space tracking result at the current moment according to the matching result. The invention also discloses a corresponding system. By implementing the method and the system, parking space tracking can be performed on space parking spaces with low computational cost and fast, accurate calculation, improving the user's driving experience.

Description

Tracking method and system based on space parking space
Technical Field
The invention relates to the field of intelligent driving perception, in particular to a tracking method and system based on space parking spaces.
Background
For automatic driving functions in parking-lot scenarios such as automatic parking, the vehicle's ability to perceive the parking spaces around it is very important and directly determines the success rate and accuracy of parking.
Parking spaces can broadly be divided into two types: line parking spaces and space parking spaces. A line parking space is delimited by parking space lines, while a space parking space has no parking space lines, or its lines are severely blurred or damaged. In many domestic parking lots, because of insufficient planning during early construction and high maintenance costs later on, a large proportion of parking spaces are space parking spaces. Traditional parking space tracking algorithms based on camera images depend heavily on parking space line features and therefore cannot be applied effectively to space parking spaces. Compared with a camera, tracking parking spaces with a millimeter wave radar avoids the influence of severely blurred or damaged parking space lines and safeguards the quality of the tracking result; however, common tracking algorithms operate on visual features of the parking space, so the corresponding feature extraction and model updating require high computing power and generalize poorly.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a space-based parking space tracking method and system that can track parking spaces based on space parking spaces, with low computational cost and fast, accurate calculation, and that can improve the driving experience.
In order to solve the above technical problems, as one aspect of the present invention, a space-based parking space tracking method is provided, which includes the following steps:
step S10, acquiring information of each space parking space detected at the current moment in a vehicle driving path, and projecting the information into a grid map of the vehicle;
step S11, according to the acquired space parking space information, acquiring point cloud information around each space parking space;
step S12, based on point cloud information of each space parking space, acquiring parking space characteristics corresponding to each space parking space, wherein the parking space characteristics at least comprise first type characteristics and second type characteristics;
step S13, generating at least one candidate parking space in the vehicle grid map according to the parking space characteristics corresponding to each space parking space;
and S14, carrying out feature matching processing on the parking space features corresponding to each candidate parking space and the parking space features of each detected parking space in the parking space tracking result obtained at the previous moment, and obtaining the parking space tracking result at the current moment according to the matching result.
Wherein, the step S10 further includes:
acquiring information of each space parking space detected at the current moment in a vehicle driving path;
projecting the space parking space information to a grid map of the vehicle, and forming corresponding rectangular frames under a grid map coordinate system of the vehicle, wherein each rectangular frame comprises coordinates of four vertexes;
and calculating the distance from each space parking space to the vehicle according to the coordinates of its rectangular frame, and sorting the space parking spaces from near to far by distance.
Wherein, the step S11 further includes:
acquiring all point cloud information in a vehicle grid map range by utilizing a millimeter wave radar;
and, according to the obtained coordinates of the rectangular frame corresponding to each space parking space, extracting and storing the point cloud data of the corresponding parking space area in the grid map.
Wherein, the step S12 further includes:
calculating first type features corresponding to each space parking space according to the point cloud data corresponding to each space parking space, wherein the first type features comprise the aspect ratio and the inclination ratio of the space parking space;
inputting point cloud data corresponding to each space parking space into a pre-trained feature extraction network to obtain a plurality of second class features corresponding to each space parking space; the feature extraction network comprises 3 convolution layers, 3 pooling layers and 1 full connection layer, wherein the output layer is a softmax layer, and each second type of output feature is a numerical value between 0 and 1.
Before the point cloud data corresponding to each space parking space is input to a pre-trained feature extraction network, the method further comprises the following steps:
down-sampling the point cloud data of each space parking space according to a preset rule;
and cropping and sorting the downsampled data obtained in the previous step to finally obtain a two-dimensional point cloud array of size 128 x 128, which is used as the input of the feature extraction network.
Wherein, the step S13 further includes:
inputting the first type features and second type features corresponding to all space parking spaces into a particle filter, generating Gaussian-distributed random numbers with a Monte Carlo method according to the state and weight of the particle swarm at the previous moment to form the position and state of a new particle swarm, and performing KNN clustering to obtain the position and state of each candidate parking space, wherein the position comprises the four vertex coordinates of the candidate parking space in the vehicle grid map and the state represents the confidence of the candidate parking space;
and acquiring the first type of features and the second type of features corresponding to each candidate parking space.
Wherein, the step S14 further includes:
carrying out pairwise matching between the parking space features corresponding to each candidate parking space and the parking space features of each detected parking space in the parking space tracking result obtained at the previous moment, calculating the sum of variances between the corresponding parking space features, and judging the match successful if the sum is larger than a predetermined matching threshold;
for the same detected parking space, if two or more successfully paired candidate parking spaces exist, selecting the one with the highest matching result as the tracking result of the detected parking space; if only one successfully paired candidate parking space exists, directly selecting it as the tracking result of the detected parking space; if no successfully paired candidate parking space exists, the detected parking space is considered unavailable at the current moment and no parking operation can be performed on it;
and obtaining a parking space tracking result at the current moment and displaying the parking space tracking result in the vehicle.
In another aspect of the present invention, there is also provided a space-based parking space tracking system, including:
the space parking space information acquisition unit is used for acquiring the information of each space parking space detected at the current moment in the vehicle driving path and projecting the information into a grid map of the vehicle;
the point cloud information acquisition unit is used for acquiring point cloud information around each space parking space according to the acquired space parking space information;
the parking space feature acquisition unit is used for acquiring parking space features corresponding to each space parking space based on point cloud information of each space parking space, and the parking space features at least comprise first type features and second type features;
the candidate parking space acquisition unit is used for generating at least one candidate parking space in the vehicle grid map according to the parking space characteristics corresponding to each space parking space;
the parking space tracking result obtaining unit is used for carrying out feature matching processing on the parking space features corresponding to each candidate parking space and the parking space features of each detected parking space in the parking space tracking result obtained at the previous moment, and obtaining the parking space tracking result at the current moment according to the matching result.
Wherein, space parking stall information acquisition unit further includes:
the first acquisition unit is used for acquiring the information of each space parking space detected at the current moment in the vehicle driving path;
the projection processing unit is used for projecting the space parking space information to a grid map of the vehicle, and forming corresponding rectangular frames under a grid map coordinate system of the vehicle, wherein each rectangular frame comprises coordinates of four vertexes;
and the sorting unit is used for calculating the distance from each space parking space to the vehicle according to the coordinates of its rectangular frame and sorting the space parking spaces from near to far by distance.
Wherein the point cloud information acquisition unit further includes:
the second acquisition unit is used for acquiring all point cloud information in the range of the vehicle grid map by utilizing the millimeter wave radar;
the data mining unit is used for extracting and storing the point cloud data of the corresponding parking space areas in the grid map according to the acquired coordinates of the rectangular frame corresponding to each space parking space.
Wherein, the parking stall characteristic acquisition unit further includes:
the first type of characteristic acquisition unit is used for calculating first type of characteristics corresponding to each space parking space according to the point cloud data corresponding to each space parking space, wherein the first type of characteristics comprise the aspect ratio and the inclination ratio of the space parking space;
the preprocessing unit is used for downsampling the point cloud data of each space parking space according to a preset rule, cutting and sequencing the downsampled data, and finally obtaining a two-dimensional point cloud array with the size of 128 x 128;
the second class feature acquisition unit is used for inputting the result of the point cloud data corresponding to each space parking space after being processed by the preprocessing unit into a pre-trained feature extraction network to obtain a plurality of second class features corresponding to each space parking space; the feature extraction network comprises 3 convolution layers, 3 pooling layers and 1 full connection layer, wherein the output layer is a softmax layer, and each second type of output feature is a numerical value between 0 and 1.
Wherein, the candidate parking stall acquisition unit further includes:
the filtering processing unit is used for inputting the first type of characteristics and the second type of characteristics corresponding to all parking space spaces into a particle filter, and generating Gaussian distribution random numbers by using a Monte Carlo method according to the state and the weight of the particle swarm at the previous moment to form the position and the state of a new particle swarm;
the clustering processing unit is used for carrying out KNN clustering processing on the result of the filtering processing unit to obtain the position and the state of the candidate parking space, wherein the position comprises four vertex coordinates of the candidate parking space under a vehicle grid map, and the state represents the confidence level of the candidate parking space;
the feature extracting unit is used for extracting the first type of features and the second type of features corresponding to each candidate parking space obtained by the parking space feature obtaining unit.
Wherein, the parking stall tracking result acquisition unit further includes:
the matching comparison unit is used for carrying out pairwise matching processing on the parking space characteristics corresponding to each candidate parking space and the parking space characteristics of each detected parking space in the parking space tracking result obtained at the previous moment, calculating the sum of variances among the corresponding parking space characteristics, and judging that the matching is successful if the sum is larger than a preset matching threshold;
the tracking result acquisition unit is used for, among the results of the matching comparison unit, selecting the candidate parking space with the highest matching result as the tracking result of a detected parking space if two or more successfully paired candidate parking spaces exist for it; if only one successfully paired candidate parking space exists, directly selecting it as the tracking result of the detected parking space; if no successfully paired candidate parking space exists, the detected parking space is considered unavailable at the current moment and no parking operation can be performed on it;
and the display unit is used for obtaining the parking space tracking result at the current moment and displaying the parking space tracking result in the vehicle.
The embodiment of the invention has the following beneficial effects:
the invention discloses a tracking method and system based on space parking spaces. The position information of each parking space detected at the current moment along the driving path is obtained and projected into a 2D grid map of the vehicle; point cloud information within the target frame of each parking space is then collected by a millimeter wave radar and features are extracted; a number of candidate parking space samples are generated in the vehicle grid map based on predetermined rules and a particle filtering algorithm, and are matched against each parking space detected at the previous moment to obtain the tracking result of the corresponding parking space. For advanced automatic driving functions such as remote-control parking and parking assistance, the invention can accurately track the space parking spaces around the vehicle, adapts to parking space tracking in complex scenes with no parking space lines or severely damaged lines, and is suitable for most domestic parking scenarios;
according to the embodiments of the invention, the millimeter wave radar provides the point cloud information of the parking spaces. Extracting the first type features by manual design and the second type features with an optimized neural network model reduces the computing resources required by the whole tracking algorithm and effectively lowers its running cost. Generating candidate parking space samples with a particle filter based on the Monte Carlo method and KNN clustering improves the accuracy of the tracking result while reducing the amount of matching computation, so the requirement of real-time parking space tracking is better met and the driver's experience is improved. By tracking parking spaces during the space parking space search process, the invention can effectively track space parking spaces in such special scenes.
Drawings
In order to illustrate the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a main flow of an embodiment of a space-based parking space tracking method according to the present invention;
FIG. 2 is a schematic view of a vehicle grid coordinate system in accordance with the present invention;
FIG. 3 is a schematic structural diagram of an embodiment of a space-based parking space tracking system according to the present invention;
fig. 4 is a schematic structural diagram of the space parking space information acquiring unit in fig. 3;
fig. 5 is a schematic structural diagram of the point cloud information obtaining unit in fig. 3;
FIG. 6 is a schematic structural diagram of the parking space feature acquiring unit in FIG. 3;
fig. 7 is a schematic structural diagram of the candidate parking space acquiring unit in fig. 3;
fig. 8 is a schematic structural diagram of the parking space tracking result obtaining unit in fig. 3.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings, for the purpose of making the objects, technical solutions and advantages of the present invention more apparent.
Fig. 1 is a schematic diagram of the main flow of an embodiment of the space-based parking space tracking method according to the present invention. As shown in fig. 1, in this embodiment, the method includes the following steps:
step S10, acquiring information of each space parking space detected at the current moment in a vehicle driving path, and projecting the information into a grid map of the vehicle;
in a specific example, the step S10 further includes:
acquiring the information of each space parking space detected at the current moment along the vehicle driving path; the space parking space information may come from a parking space detection result produced by a camera or other sensor through a target detection algorithm, from a tracking result computed from the parking space features at the previous moment, or from parking spaces selected by the driver in a human-vehicle interaction system;
projecting the space parking space information to a grid map of the vehicle, and forming corresponding rectangular frames under a grid map coordinate system of the vehicle, wherein each rectangular frame comprises coordinates of four vertexes;
and calculating the distance from each space parking space to the vehicle according to the coordinates of its rectangular frame, and sorting the space parking spaces from near to far by distance.
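As an illustration of this projection-and-sorting step, a minimal Python sketch is given below. It assumes the grid map is simply a 2D frame centered on the rear-axle origin of fig. 2, that each detection arrives as four corner points with a known vehicle pose, and that the slot centroid serves as the distance reference; the function and field names are hypothetical and not taken from the patent.

```python
# Illustrative sketch only: project detected space parking slots into the vehicle
# grid frame and sort them by distance to the vehicle. The transform and the use
# of the slot centroid as the distance reference point are assumptions.
import math
from dataclasses import dataclass

@dataclass
class SlotBox:
    corners: list  # four (x, y) vertices in the vehicle grid frame, e.g. P1..P4

def to_vehicle_frame(corners_world, vehicle_pose):
    """Rigidly transform world-frame corners into the vehicle grid frame.
    vehicle_pose = (x, y, yaw) of the rear-axle center in the world frame."""
    vx, vy, yaw = vehicle_pose
    cos_y, sin_y = math.cos(-yaw), math.sin(-yaw)
    out = []
    for (x, y) in corners_world:
        dx, dy = x - vx, y - vy
        out.append((dx * cos_y - dy * sin_y, dx * sin_y + dy * cos_y))
    return out

def sort_slots_by_distance(detections, vehicle_pose):
    """Project each detection into the grid frame and sort near-to-far."""
    boxes = [SlotBox(to_vehicle_frame(c, vehicle_pose)) for c in detections]

    def distance(box):
        cx = sum(p[0] for p in box.corners) / 4.0
        cy = sum(p[1] for p in box.corners) / 4.0
        return math.hypot(cx, cy)  # distance from the rear-axle origin

    return sorted(boxes, key=distance)
```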
Step S11, according to the acquired space parking space information, acquiring point cloud information around each space parking space;
in a specific example, the step S11 further includes:
acquiring all point cloud information in a vehicle grid map range by utilizing a millimeter wave radar;
and, according to the obtained coordinates of the rectangular frame corresponding to each space parking space, extracting and storing the point cloud data of the corresponding parking space area in the grid map.
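A minimal sketch of this extraction step, assuming the rectangular frame is handled as a convex quadrilateral and the radar returns are 2D points in the same grid frame, might look as follows; the containment test and function names are assumptions, since the patent does not specify how the points are selected.

```python
# Illustrative sketch only: keep the radar points that fall inside a slot's
# rectangular frame in the grid map. A convex-quadrilateral containment test is
# used here as an assumption.
def inside_quad(point, corners):
    """True if point lies inside the convex quadrilateral given by 4 ordered corners."""
    px, py = point
    signs = []
    for i in range(4):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % 4]
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        signs.append(cross >= 0)
    return all(signs) or not any(signs)

def extract_slot_points(all_points, corners):
    """all_points: iterable of (x, y) radar returns in the vehicle grid frame."""
    return [p for p in all_points if inside_quad(p, corners)]
```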
Step S12, based on point cloud information of each space parking space, acquiring parking space characteristics corresponding to each space parking space, wherein the parking space characteristics at least comprise first type characteristics and second type characteristics; it will be appreciated that in embodiments of the invention, the first type of feature may be an explicit feature; the second type of feature may be an implicit feature.
In a specific example, the step S12 further includes:
calculating the first type features (i.e. explicit features) of each space parking space from its point cloud data. The first type features are manually designed; in this embodiment there are two of them, the aspect ratio and the inclination ratio of the space parking space. Specifically, as shown in fig. 2, in the vehicle grid coordinate system the origin is the center of the vehicle rear axle, the X-axis points in the direction of motion of the vehicle, and the Y-axis is perpendicular to the direction of motion. The aspect ratio is calculated from the length and width of the space parking space; the inclination ratio describes how tilted the space parking space is, specifically the angle between the line connecting parking space vertices P1 and P2 and the line connecting vertices P2 and P3;
inputting the point cloud data corresponding to each space parking space into a pre-trained feature extraction network to obtain a plurality of second type features (i.e. implicit features) corresponding to each space parking space; the feature extraction network comprises 3 convolution layers, 3 pooling layers and 1 fully connected layer, the output layer is a softmax layer, and each output second type feature is a value between 0 and 1. It can be understood that the second type features are deep features learned from a large number of training samples; in this embodiment, 5 second type features are output;
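For the two first type (explicit) features described above, a minimal sketch under stated assumptions is shown below: the aspect ratio is taken as the ratio of two adjacent side lengths, and the inclination is computed as the angle between the line P1-P2 and the line P2-P3; how that angle is normalized into a "ratio" is not stated, so the raw angle is returned here.

```python
# Illustrative sketch only: the two manually designed first-type features.
# Returning the raw angle in radians as the "inclination ratio" is an assumption.
import math

def first_type_features(p1, p2, p3):
    length = math.dist(p1, p2)       # one side of the slot rectangle (P1-P2)
    width = math.dist(p2, p3)        # adjacent side (P2-P3)
    aspect_ratio = length / width if width > 0 else float("inf")

    v12 = (p2[0] - p1[0], p2[1] - p1[1])
    v23 = (p3[0] - p2[0], p3[1] - p2[1])
    dot = v12[0] * v23[0] + v12[1] * v23[1]
    norm = math.hypot(*v12) * math.hypot(*v23)
    inclination = math.acos(max(-1.0, min(1.0, dot / norm))) if norm > 0 else 0.0

    return aspect_ratio, inclination
```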
before the point cloud data corresponding to each space parking space is input to a pre-trained feature extraction network, the method further comprises the following steps:
down-sampling the point cloud data of each space parking space according to a preset rule;
and cropping and sorting the downsampled data obtained in the previous step to finally obtain a two-dimensional point cloud array of size 128 x 128, which is used as the input of the feature extraction network.
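The text only states that the point cloud is downsampled, cropped and sorted into a 128 x 128 two-dimensional array; one plausible reading, sketched below purely as an assumption, is to rasterize each slot's points into a fixed-size occupancy grid that the feature extraction network can take as input.

```python
# Illustrative sketch only: rasterize a slot's points into a 128x128 grid.
# The occupancy encoding and the bounding-box crop are assumptions; the patent
# does not fix how the 128x128 array is built.
import numpy as np

def rasterize_slot_points(points, corners, size=128):
    """points: (N, 2) radar returns inside the slot; corners: 4x2 slot vertices."""
    points = np.asarray(points, dtype=float).reshape(-1, 2)
    corners = np.asarray(corners, dtype=float)
    mins, maxs = corners.min(axis=0), corners.max(axis=0)
    span = np.maximum(maxs - mins, 1e-6)

    grid = np.zeros((size, size), dtype=np.float32)
    if len(points) == 0:
        return grid
    # crop to the slot's bounding box and scale into [0, size)
    idx = np.clip(((points - mins) / span * (size - 1)).astype(int), 0, size - 1)
    grid[idx[:, 1], idx[:, 0]] = 1.0   # mark occupied cells
    return grid
```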
More specifically, in the present embodiment, the following table lists the network structure and parameters of the feature extraction network used:
Table 1: Parameters of the feature extraction network
Network layer | Input dimension | Convolution kernel size | Step size | Output dimension
First convolution layer (CONV1) | 128*128 | 7*7 | 1 | 122*122
First pooling layer (POOL1) | 122*122 | 2*2 | 2 | 64*64
Second convolution layer (CONV2) | 64*64 | 3*3 | 1 | 62*62
Second pooling layer (POOL2) | 62*62 | 2*2 | 2 | 32*32
Third convolution layer (CONV3) | 32*32 | 1*1 | 1 | 32*32
Third pooling layer (POOL3) | 32*32 | 2*2 | 2 | 16*16
Fully connected layer (FC1) | 16*16 | 1*1 | 1 | 256*1
Output layer (Softmax) | 256*1 | \ | \ | 5*1
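For a concrete picture of this layer sequence, a minimal PyTorch sketch is given below (3 convolution layers, 3 pooling layers, one fully connected layer, softmax over 5 outputs in [0, 1]). The channel counts, activation functions, absence of padding and use of LazyLinear are assumptions, so the intermediate sizes of Table 1 are not reproduced exactly.

```python
# Illustrative sketch only: a CNN with the layer sequence of Table 1.
# Channel counts, ReLU activations and LazyLinear are assumptions.
import torch
import torch.nn as nn

class SlotFeatureNet(nn.Module):
    def __init__(self, num_features: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7, stride=1), nn.ReLU(),   # CONV1: 7x7, stride 1
            nn.MaxPool2d(kernel_size=2, stride=2),                 # POOL1: 2x2, stride 2
            nn.Conv2d(8, 16, kernel_size=3, stride=1), nn.ReLU(),  # CONV2: 3x3, stride 1
            nn.MaxPool2d(kernel_size=2, stride=2),                 # POOL2
            nn.Conv2d(16, 32, kernel_size=1, stride=1), nn.ReLU(), # CONV3: 1x1, stride 1
            nn.MaxPool2d(kernel_size=2, stride=2),                 # POOL3
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(256),                                    # FC1 -> 256x1
            nn.ReLU(),
            nn.Linear(256, num_features),
            nn.Softmax(dim=1),                                     # each output in [0, 1]
        )

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        # grid: (batch, 1, 128, 128) rasterized point cloud of one slot
        return self.head(self.backbone(grid))

net = SlotFeatureNet()
second_type_features = net(torch.zeros(1, 1, 128, 128))  # shape (1, 5)
```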
Step S13, generating at least one candidate parking space in the vehicle grid map through particle filtering according to the parking space features corresponding to each space parking space;
in a specific example, the step S13 further includes:
inputting the first type features (2 of them) and second type features (5 of them) corresponding to all space parking spaces into a particle filter, generating Gaussian-distributed random numbers with a Monte Carlo method according to the state and weight of the particle swarm at the previous moment to form the position and state of a new particle swarm, and performing KNN clustering to obtain the position and state of each candidate parking space, wherein the position comprises the four vertex coordinates of the candidate parking space in the vehicle grid map and the state represents the confidence of the candidate parking space;
and acquiring the first type of features and the second type of features corresponding to each candidate parking space.
It will be appreciated that in embodiments of the present invention the particle filtering algorithm is a sequential Bayesian inference method that infers the hidden state of the target recursively. Generating candidate parking space samples first requires initializing the particle filter, including the number of particles and the weight of each particle. Considering the characteristics of the parking scene and of the millimeter wave radar sensor, the number of particles at each moment is fixed at 100 and the weights are initialized with Xavier Gaussian initialization. The particle weights are then updated: Gaussian-distributed random numbers are generated with a Monte Carlo method according to the state and weight of the particle swarm at the previous moment, producing the position and state of a new particle swarm. Finally, candidate parking space samples are generated from the particle swarm: KNN clustering is performed on the newly generated particles to obtain the positions and states of several clusters, i.e. the positions and states of the candidate parking spaces, where a position comprises the 4 vertex coordinates of the candidate parking space in the vehicle grid map (see fig. 2) and the state represents the confidence of the candidate parking space.
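A minimal sketch of this candidate-generation step is given below, assuming a particle state of just the slot-center coordinates, a fixed Gaussian noise scale, and scikit-learn k-means as a stand-in for the KNN clustering named above; none of these choices is prescribed here.

```python
# Illustrative sketch only: one Monte Carlo update of a 100-particle filter over
# slot hypotheses, followed by clustering the particles into candidate slots.
# The noise scale, the 2D particle state and the use of k-means are assumptions.
import numpy as np
from sklearn.cluster import KMeans

NUM_PARTICLES = 100

def propagate_particles(prev_states, prev_weights, noise_std=0.2, rng=None):
    """Resample by weight, then perturb with Gaussian noise (Monte Carlo step)."""
    rng = rng or np.random.default_rng()
    prev_weights = np.asarray(prev_weights, dtype=float)
    probs = prev_weights / prev_weights.sum()
    picks = rng.choice(len(prev_states), size=NUM_PARTICLES, p=probs)
    return np.asarray(prev_states)[picks] + rng.normal(0.0, noise_std, (NUM_PARTICLES, 2))

def cluster_candidates(states, n_candidates=3):
    """Group particles into candidate slots; cluster size acts as a confidence proxy."""
    km = KMeans(n_clusters=n_candidates, n_init=10).fit(states)
    centers = km.cluster_centers_
    confidence = np.bincount(km.labels_, minlength=n_candidates) / len(states)
    return centers, confidence
```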
And S14, carrying out feature matching processing on the parking space features corresponding to each candidate parking space and the parking space features of each detected parking space in the parking space tracking result obtained at the previous moment, and obtaining the parking space tracking result at the current moment according to the matching result.
In a specific example, the step S14 further includes:
matching the parking space features corresponding to each candidate parking space pairwise against the parking space features of each detected parking space in the parking space tracking result obtained at the previous moment, calculating the sum of variances between the corresponding parking space features, and judging the match successful if the sum is larger than a predetermined matching threshold, the matching threshold being obtained by prior calibration;
for the same detected parking space, if two or more successfully paired candidate parking spaces exist, selecting the one with the highest matching result as the tracking result of the detected parking space; if only one successfully paired candidate parking space exists, directly selecting it as the tracking result of the detected parking space; if no successfully paired candidate parking space exists, the detected parking space is considered unavailable at the current moment and no parking operation can be performed on it;
and obtaining a parking space tracking result at the current moment and displaying the parking space tracking result in the vehicle.
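Taking the matching rule literally, a minimal sketch is shown below: the score is the sum of per-feature variances between a candidate and a previously tracked parking space, a score above the calibrated threshold counts as a successful pairing, and the highest-scoring candidate is kept; treating each feature pair's variance as the variance of its two values is an assumption.

```python
# Illustrative sketch only: pairwise matching between candidate slots and the
# slots tracked at the previous moment, using the sum of per-feature variances
# as the score and the "greater than threshold => matched" rule stated above.
import numpy as np

def match_score(feat_a, feat_b):
    """Sum of variances between corresponding features of two slots."""
    feat_a, feat_b = np.asarray(feat_a, float), np.asarray(feat_b, float)
    return float(sum(np.var([a, b]) for a, b in zip(feat_a, feat_b)))

def track_slots(candidates, tracked, threshold):
    """candidates/tracked: dicts of id -> 7-dim feature vector (2 explicit + 5 implicit)."""
    results = {}
    for t_id, t_feat in tracked.items():
        scored = [(match_score(c_feat, t_feat), c_id)
                  for c_id, c_feat in candidates.items()]
        matched = [s for s in scored if s[0] > threshold]
        if matched:
            results[t_id] = max(matched, key=lambda s: s[0])[1]  # highest-scoring candidate
        else:
            results[t_id] = None  # slot considered unavailable at this moment
    return results
```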
Fig. 3 is a schematic structural diagram of an embodiment of the space-based parking space tracking system according to the present invention. As shown in fig. 3, and in further detail in fig. 4 to fig. 8, in this embodiment the space-based parking space tracking system 1 at least includes:
the space parking space information acquisition unit 10 is used for acquiring each piece of space parking space information detected at the current moment in the vehicle driving path and projecting the information into a grid map of the vehicle;
a point cloud information acquiring unit 11, configured to acquire point cloud information around each space parking space according to the acquired space parking space information;
the parking space feature obtaining unit 12 is configured to obtain a parking space feature corresponding to each space parking space based on point cloud information of each space parking space, where the parking space feature at least includes a first type feature and a second type feature;
the candidate parking space obtaining unit 13 is used for generating at least one candidate parking space in the vehicle grid map according to the parking space characteristics corresponding to each space parking space;
the parking space tracking result obtaining unit 14 is configured to perform feature matching processing on the parking space features corresponding to each candidate parking space and the parking space features of each detected parking space in the parking space tracking result obtained at the previous moment, and obtain a parking space tracking result at the current moment according to the matching result.
As shown in fig. 4, in a specific example, the space parking space information obtaining unit 10 further includes:
a first obtaining unit 100, configured to obtain information of each space parking space detected at a current moment in a vehicle driving path;
a projection processing unit 101, configured to project each space parking space information to a grid map of the vehicle, and form corresponding rectangular frames under a grid map coordinate system of the vehicle, where each rectangular frame includes coordinates of four vertices;
the sorting unit 102 is configured to calculate a distance from the space parking space to the vehicle according to coordinates of each rectangular frame, and sort each space parking space from near to far according to a length of the distance.
As shown in fig. 5, in a specific example, the point cloud information obtaining unit 11 further includes:
a second acquisition unit 110 for acquiring all point cloud information within a vehicle grid map range using a millimeter wave radar;
the data mining unit 111 is configured to mine and store point cloud data of a parking space area corresponding to the grid map according to the obtained coordinates of the rectangular frame corresponding to each space parking space.
As shown in fig. 6, in a specific example, the parking space feature obtaining unit 12 further includes:
the first type feature obtaining unit 120 is configured to calculate first type features corresponding to each space parking space according to point cloud data corresponding to each space parking space, where the first type features include aspect ratio and slope ratio of the space parking space;
the preprocessing unit 121 is configured to downsample the point cloud data of each space parking space according to a predetermined rule, and clip and sort the downsampled data, so as to finally obtain a two-dimensional point cloud array with a size of 128×128;
the second class feature obtaining unit 122 is configured to input a result of the point cloud data corresponding to each space parking space after being processed by the preprocessing unit to a feature extraction network trained in advance, so as to obtain a plurality of second class features corresponding to each space parking space; the feature extraction network comprises 3 convolution layers, 3 pooling layers and 1 full connection layer, wherein the output layer is a softmax layer, and each second type of output feature is a numerical value between 0 and 1.
As shown in fig. 7, in a specific example, the candidate parking space obtaining unit 13 further includes:
the filtering processing unit 130 is configured to input each first type of feature and each second type of feature corresponding to all parking space spaces into a particle filter, and generate a gaussian distribution random number by using a monte carlo method according to the state and the weight of the particle swarm at the previous time to form a new position and state of the particle swarm;
the clustering unit 131 is configured to perform KNN clustering on the result of the filtering unit to obtain a position and a state of the candidate parking space, where the position includes four vertex coordinates of the candidate parking space under the vehicle grid map, and the state indicates a confidence level of the candidate parking space;
the feature extracting unit 132 is configured to extract the first type of feature and the second type of feature corresponding to each candidate parking space obtained by the parking space feature obtaining unit.
As shown in fig. 8, in a specific example, the parking space tracking result obtaining unit 14 further includes:
the matching comparison unit 140 is configured to perform a pairwise matching process on the parking space features corresponding to each candidate parking space and the parking space features of each detected parking space in the parking space tracking result obtained at the previous time, calculate a sum of variances between the corresponding parking space features, and determine that the matching is successful if the sum is greater than a predetermined matching threshold;
a tracking result obtaining unit 141, configured to, among the results of the matching comparison unit, select the candidate parking space with the highest matching result as the tracking result of a detected parking space if two or more successfully paired candidate parking spaces exist for it; if only one successfully paired candidate parking space exists, directly select it as the tracking result of the detected parking space; and if no successfully paired candidate parking space exists, consider the detected parking space unavailable at the current moment, so that no parking operation can be performed on it;
and the display unit 142 is configured to obtain a parking space tracking result at the current moment and display the tracking result in the vehicle.
For more details, refer to the foregoing description of fig. 1 to fig. 2; they are not repeated here.
The embodiment of the invention has the following beneficial effects:
the invention discloses a tracking method and system based on space parking spaces. The position information of each parking space detected at the current moment along the driving path is obtained and projected into a 2D grid map of the vehicle; point cloud information within the target frame of each parking space is then collected by a millimeter wave radar and features are extracted; a number of candidate parking space samples are generated in the vehicle grid map based on predetermined rules and a particle filtering algorithm, and are matched against each parking space detected at the previous moment to obtain the tracking result of the corresponding parking space. For advanced automatic driving functions such as remote-control parking and parking assistance, the invention can accurately track the space parking spaces around the vehicle, adapts to parking space tracking in complex scenes with no parking space lines or severely damaged lines, and is suitable for most domestic parking scenarios;
according to the embodiments of the invention, the millimeter wave radar provides the point cloud information of the parking spaces. Extracting the first type features by manual design and the second type features with an optimized neural network model reduces the computing resources required by the whole tracking algorithm and effectively lowers its running cost. Generating candidate parking space samples with a particle filter based on the Monte Carlo method and KNN clustering improves the accuracy of the tracking result while reducing the amount of matching computation, so the requirement of real-time parking space tracking is better met and the driver's experience is improved. By tracking parking spaces during the space parking space search process, the invention can effectively track space parking spaces in such special scenes.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above disclosure is only a preferred embodiment of the present invention and certainly does not limit the scope of the invention; equivalent changes made according to the claims of the present invention therefore still fall within the scope of the present invention.

Claims (11)

1. The tracking method based on the space parking spaces is characterized by comprising the following steps:
step S10, acquiring information of each space parking space detected at the current moment in a vehicle driving path, and projecting the information into a grid map of the vehicle;
step S11, according to the acquired space parking space information, point cloud information around each space parking space is acquired;
step S12, based on point cloud information of each space parking space, acquiring parking space characteristics corresponding to each space parking space, wherein the parking space characteristics at least comprise first type characteristics and second type characteristics;
step S13, generating at least one candidate parking place in the vehicle grid map according to the parking place characteristics corresponding to each space parking place;
step S14, carrying out feature matching processing on the parking space features corresponding to each candidate parking space and the parking space features of each detected parking space in the parking space tracking results obtained at the previous moment, and obtaining the parking space tracking results at the current moment according to the matching results;
wherein, the step S12 further includes:
calculating first type features corresponding to each space parking space according to point cloud data corresponding to each space parking space, wherein the first type features comprise aspect ratio and gradient ratio of the space parking space; the length-width ratio is obtained by calculating the length and the width of the space parking space; the inclination ratio refers to the inclination degree of the space parking spaces;
inputting point cloud data corresponding to each space parking space into a pre-trained feature extraction network to obtain a plurality of second class features corresponding to each space parking space, wherein the second class features are depth features learned through a large number of training samples;
the step S13 further includes:
inputting each first type of feature and each second type of feature corresponding to all parking space spaces into a particle filter, generating Gaussian distribution random numbers by using a Monte Carlo method according to the state and the weight of the previous moment particle swarm, forming the position and the state of a new particle swarm, and carrying out KNN clustering treatment to obtain the position and the state of a candidate parking space, wherein the position comprises four vertex coordinates of the candidate parking space under a vehicle grid map, and the state represents the confidence level of the candidate parking space;
and acquiring the first type of features and the second type of features corresponding to each candidate parking space.
2. The space-based parking space tracking method as set forth in claim 1, wherein the step S10 further includes:
acquiring information of each space parking space detected at the current moment in a vehicle driving path;
projecting the space parking space information to a grid map of the vehicle, and forming corresponding rectangular frames under a grid map coordinate system of the vehicle, wherein each rectangular frame comprises coordinates of four vertexes;
and calculating the distance from the space parking space to the vehicle according to the coordinates of each rectangular frame, and sequencing each space parking space from near to far according to the length of the distance.
3. The space-based parking space tracking method according to claim 2, wherein the step S11 further comprises:
acquiring all point cloud information in a vehicle grid map range by utilizing a millimeter wave radar;
and according to the obtained coordinates of the rectangular frame corresponding to each space parking space, the point cloud data of the corresponding parking space area in the grid map are mined out and stored.
4. The method for tracking a parking space according to claim 3, wherein in the step S12,
the feature extraction network comprises 3 convolution layers, 3 pooling layers and 1 full connection layer, wherein the output layer is a softmax layer, and each second type of output feature is a numerical value between 0 and 1.
5. The space-based parking space tracking method of claim 4, further comprising, prior to inputting the point cloud data corresponding to each space parking space into a pre-trained feature extraction network:
down-sampling the point cloud data of each space parking space according to a preset rule;
and cutting and sequencing the downsampled data obtained in the previous step to finally obtain a two-dimensional point cloud array with the size of 128 x 128, wherein the two-dimensional point cloud array is used as the input of the feature extraction network.
6. The space-based parking space tracking method according to claim 5, wherein the step S14 further comprises:
carrying out pairwise matching processing on the parking space characteristics corresponding to each candidate parking space and the parking space characteristics of each detected parking space in the parking space tracking result obtained at the previous moment, calculating the sum of variances among the corresponding parking space characteristics, and judging that the matching is successful if the sum is larger than a preset matching threshold;
for the same detected parking space, if two or more candidate parking spaces successfully paired exist, selecting the highest matching calculation result as a tracking result of the detected parking space; if only one candidate parking space successfully paired exists, directly selecting the candidate parking space as a tracking result of the detected parking space; if the candidate parking spaces which are successfully paired are not available, the detected parking spaces are considered to be unavailable at the current moment, so that parking operation cannot be carried out;
and obtaining a parking space tracking result at the current moment and displaying the parking space tracking result in the vehicle.
7. A space-based parking space tracking system, comprising:
the space parking space information acquisition unit is used for acquiring the information of each space parking space detected at the current moment in the vehicle driving path and projecting the information into a grid map of the vehicle;
the point cloud information acquisition unit is used for acquiring point cloud information around each space parking space according to the acquired space parking space information;
the parking space feature acquisition unit is used for acquiring parking space features corresponding to each space parking space based on point cloud information of each space parking space, and the parking space features at least comprise first type features and second type features;
the candidate parking space acquisition unit is used for generating at least one candidate parking space in the vehicle grid map according to the parking space characteristics corresponding to each space parking space;
the parking space tracking result acquisition unit is used for carrying out feature matching processing on the parking space features corresponding to each candidate parking space and the parking space features of each detected parking space in the parking space tracking result acquired at the previous moment, and acquiring a parking space tracking result at the current moment according to the matching result;
the parking space feature acquisition unit further includes:
the first type of characteristic acquisition unit is used for calculating first type of characteristics corresponding to each space parking space according to the point cloud data corresponding to each space parking space, wherein the first type of characteristics comprise the aspect ratio and the inclination ratio of the space parking space; the length-width ratio is obtained by calculating the length and the width of the space parking space; the inclination ratio refers to the inclination degree of the space parking spaces;
the second type feature acquisition unit is used for inputting the result of the point cloud data corresponding to each space parking space after being processed by the preprocessing unit into a pre-trained feature extraction network to obtain a plurality of second type features corresponding to each space parking space, wherein the second type features are depth features learned through a large number of training samples;
the candidate parking space acquisition unit further includes:
the filtering processing unit is used for inputting the first type of characteristics and the second type of characteristics corresponding to all parking space spaces into a particle filter, and generating Gaussian distribution random numbers by using a Monte Carlo method according to the state and the weight of the particle swarm at the previous moment to form the position and the state of a new particle swarm;
the clustering processing unit is used for carrying out KNN clustering processing on the result of the filtering processing unit to obtain the position and the state of the candidate parking space, wherein the position comprises four vertex coordinates of the candidate parking space under a vehicle grid map, and the state represents the confidence level of the candidate parking space;
the feature extracting unit is used for extracting the first type of features and the second type of features corresponding to each candidate parking space obtained by the parking space feature obtaining unit.
8. The space-based parking space tracking system of claim 7, wherein the space-parking space information acquisition unit further comprises:
the first acquisition unit is used for acquiring the information of each space parking space detected at the current moment in the vehicle driving path;
the projection processing unit is used for projecting the space parking space information to a grid map of the vehicle, and forming corresponding rectangular frames under a grid map coordinate system of the vehicle, wherein each rectangular frame comprises coordinates of four vertexes;
and the sequencing unit is used for calculating the distance from the space parking space to the vehicle according to the coordinates of each rectangular frame, and sequencing each space parking space from near to far according to the length of the distance.
9. The space-based parking spot tracking system of claim 8, wherein the point cloud information acquisition unit further comprises:
the second acquisition unit is used for acquiring all point cloud information in the range of the vehicle grid map by utilizing the millimeter wave radar;
the data mining unit is used for mining and storing point cloud data of the corresponding parking space areas in the grid map according to the acquired coordinates of the rectangular frames corresponding to the parking spaces in each space.
10. The space-based parking space tracking system of claim 9, wherein the parking space feature acquisition unit further comprises:
the preprocessing unit is used for downsampling the point cloud data of each space parking space according to a preset rule, cutting and sequencing the downsampled data, and finally obtaining a two-dimensional point cloud array with the size of 128 x 128;
and in the second type of feature acquisition unit, the feature extraction network comprises 3 convolution layers, 3 pooling layers and 1 full connection layer, the output layer is a softmax layer, and each second type of feature output is a numerical value between 0 and 1.
11. The space-based parking space tracking system of claim 10, wherein the parking space tracking result obtaining unit further comprises:
the matching comparison unit is used for matching, in pairs, the parking space features corresponding to each candidate parking space against the parking space features of each detected parking space in the parking space tracking result obtained at the previous moment, calculating the sum of variances between the corresponding parking space features, and judging that the matching is successful if the sum is larger than a preset matching threshold;
the tracking result acquisition unit is used for: if, in the results of the matching comparison unit, two or more successfully matched candidate parking spaces exist for the same detected parking space, selecting the candidate parking space with the highest matching score as the tracking result of that detected parking space; if only one successfully matched candidate parking space exists, directly selecting that candidate parking space as the tracking result of the detected parking space; and if no successfully matched candidate parking space exists, considering that the detected parking space no longer exists at the current moment and therefore cannot be used for a parking operation;
and the display unit is used for obtaining the parking space tracking result at the current moment and displaying it in the vehicle.
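The matching logic of claim 11 can be summarised in the following sketch, which is illustrative only: the score is computed, as the claim is worded, as the sum of per-component variances between a candidate feature vector and a tracked feature vector, a match is declared when the score exceeds the threshold, and the highest-scoring candidate is kept per tracked parking space. Function names and the dictionary-based data layout are assumptions.

    import numpy as np

    def track_spaces(candidates, tracked, match_threshold):
        """Match current candidate spaces against the previously tracked spaces.

        candidates: dict {candidate_id: feature vector}
        tracked:    dict {tracked_id: feature vector}
        Returns {tracked_id: best candidate_id, or None if no match}.
        """
        results = {}
        for t_id, t_feat in tracked.items():
            best_id, best_score = None, None
            for c_id, c_feat in candidates.items():
                pairs = np.stack([np.asarray(t_feat, dtype=float),
                                  np.asarray(c_feat, dtype=float)])
                score = float(np.sum(np.var(pairs, axis=0)))   # sum of per-feature variances
                if score > match_threshold and (best_score is None or score > best_score):
                    best_id, best_score = c_id, score
            results[t_id] = best_id
        return results

A tracked space whose entry comes back as None has no successfully matched candidate and, as in the claim, is treated as unavailable for parking at the current moment.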
CN202210379055.2A 2022-04-12 2022-04-12 Tracking method and system based on space parking space Active CN115222767B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210379055.2A CN115222767B (en) 2022-04-12 2022-04-12 Tracking method and system based on space parking space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210379055.2A CN115222767B (en) 2022-04-12 2022-04-12 Tracking method and system based on space parking space

Publications (2)

Publication Number Publication Date
CN115222767A CN115222767A (en) 2022-10-21
CN115222767B true CN115222767B (en) 2024-01-23

Family

ID=83605916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210379055.2A Active CN115222767B (en) 2022-04-12 2022-04-12 Tracking method and system based on space parking space

Country Status (1)

Country Link
CN (1) CN115222767B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152258B (en) * 2023-11-01 2024-01-30 中国电建集团山东电力管道工程有限公司 Product positioning method and system for intelligent workshop of pipeline production

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10270642B2 (en) * 2012-12-05 2019-04-23 Origin Wireless, Inc. Method, apparatus, and system for object tracking and navigation

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE0004096D0 (en) * 2000-11-08 2000-11-08 Nira Automotive Ab Positioning system
CN109031346A (en) * 2018-07-09 2018-12-18 江苏大学 A kind of periphery parking position aided detection method based on 3D laser radar
CN109829386A (en) * 2019-01-04 2019-05-31 清华大学 Intelligent vehicle based on Multi-source Information Fusion can traffic areas detection method
WO2021004077A1 (en) * 2019-07-09 2021-01-14 华为技术有限公司 Method and apparatus for detecting blind areas of vehicle
CN110888125A (en) * 2019-12-05 2020-03-17 奥特酷智能科技(南京)有限公司 Automatic driving vehicle positioning method based on millimeter wave radar
CN111272165A (en) * 2020-02-27 2020-06-12 清华大学 Intelligent vehicle positioning method based on characteristic point calibration
CN111814773A (en) * 2020-09-07 2020-10-23 广州汽车集团股份有限公司 Lineation parking space identification method and system
CN112612862A (en) * 2020-12-24 2021-04-06 哈尔滨工业大学芜湖机器人产业技术研究院 Grid map positioning method based on point cloud registration
CN113076824A (en) * 2021-03-19 2021-07-06 上海欧菲智能车联科技有限公司 Parking space acquisition method and device, vehicle-mounted terminal and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Time-of-Flight Camera Based Indoor Parking Localization Leveraging Manhattan World Regulation";Hengwang Zhao et.al.;《2020 IEEE Intelligent Vehicles Symposium (IV)》;全文 *
"基于激光Slam的仓储搬运AGV定位技术研究";郝岩;《中国优秀硕士学位论文全文数据库 经济与管理科学辑》;第2018年卷(第12期);第J157-492页 *
"自主泊车场景的无人车辆定位与导航建图算法研究";唐娜;《中国优秀硕士学位论文全文数据库 工程科技Ⅱ辑》;第2022年卷(第01期);第C035-398页 *
基于激光SLAM和深度学习的语义地图构建;何松;孙静;郭乐江;陈梁;;计算机技术与发展(第09期);第94-100页 *

Also Published As

Publication number Publication date
CN115222767A (en) 2022-10-21

Similar Documents

Publication Publication Date Title
CN111666921B (en) Vehicle control method, apparatus, computer device, and computer-readable storage medium
CN110879994A (en) Three-dimensional visual inspection detection method, system and device based on shape attention mechanism
CN110222626B (en) Unmanned scene point cloud target labeling method based on deep learning algorithm
CN108629231B (en) Obstacle detection method, apparatus, device and storage medium
CN111429514A (en) Laser radar 3D real-time target detection method fusing multi-frame time sequence point clouds
CN109977997B (en) Image target detection and segmentation method based on convolutional neural network rapid robustness
CN111738995B (en) RGBD image-based target detection method and device and computer equipment
CN109919145B (en) Mine card detection method and system based on 3D point cloud deep learning
CN111311663B (en) Real-time large-scene three-dimensional semantic modeling method
CN113705655B (en) Three-dimensional point cloud full-automatic classification method and deep neural network model
CN115222767B (en) Tracking method and system based on space parking space
CN114359876B (en) Vehicle target identification method and storage medium
Roberts Attentive visual tracking and trajectory estimation for dynamic scene segmentation
CN112712589A (en) Plant 3D modeling method and system based on laser radar and deep learning
CN112330815A (en) Three-dimensional point cloud data processing method, device and equipment based on obstacle fusion
Wen et al. Research on 3D point cloud de-distortion algorithm and its application on Euclidean clustering
Chen et al. Improved fast r-cnn with fusion of optical and 3d data for robust palm tree detection in high resolution uav images
Fehr et al. Reshaping our model of the world over time
CN115620263B (en) Intelligent vehicle obstacle detection method based on image fusion of camera and laser radar
CN116921932A (en) Welding track recognition method, device, equipment and storage medium
Jung et al. Fast point clouds upsampling with uncertainty quantification for autonomous vehicles
CN114913519B (en) 3D target detection method and device, electronic equipment and storage medium
CN116052099A (en) Small target detection method for unstructured road
CN115713633A (en) Visual SLAM method, system and storage medium based on deep learning in dynamic scene
CN115937825A (en) Robust lane line generation method and device under BEV (beam-based attitude vector) of on-line pitch angle estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant