CN116778458B - Parking space detection model construction method, parking space detection method, equipment and storage medium - Google Patents


Info

Publication number
CN116778458B
CN116778458B (application CN202311063234.6A)
Authority
CN
China
Prior art keywords
parking space
sample
predicted
preset
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311063234.6A
Other languages
Chinese (zh)
Other versions
CN116778458A (en)
Inventor
彭欣
刘晓锋
张如高
虞正华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Moshi Intelligent Technology Co ltd
Original Assignee
Suzhou Moshi Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Moshi Intelligent Technology Co ltd filed Critical Suzhou Moshi Intelligent Technology Co ltd
Priority to CN202311063234.6A priority Critical patent/CN116778458B/en
Publication of CN116778458A publication Critical patent/CN116778458A/en
Application granted granted Critical
Publication of CN116778458B publication Critical patent/CN116778458B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention discloses a parking space detection model construction method, a parking space detection method, equipment and a storage medium, comprising the following steps: acquiring a sample vehicle point cloud and corresponding sample parking spaces; performing coordinate transformation based on the sample vehicle point cloud and the sample parking spaces to obtain a sample point cloud image of a preset image size and sample parking space information for each sample parking space; dividing the sample point cloud image into a plurality of sample grid images; inputting each sample grid image into a preset parking space detection model and regressing a preset number of predicted parking spaces for each sample grid image, to obtain predicted parking space information for the predicted parking spaces in each sample grid image; and matching the sample parking spaces and predicted parking spaces located in the same sample grid image, and updating the parameters of the preset parking space detection model based on the matching result, the sample parking space information and the predicted parking space information, to obtain a target parking space detection model with which parking space detection is performed. The method and the device can alleviate the long time consumption of parking space detection.

Description

Parking space detection model construction method, parking space detection method, equipment and storage medium
Technical Field
The invention relates to the technical field of automatic driving, in particular to a parking space detection model construction method, a parking space detection method, computer equipment and a storage medium.
Background
With the development of science and technology, automobile intelligence is a trend, and the automatic parking system, as an intelligent driving-assistance system, is one of the essential functions of an intelligent vehicle. Parking space recognition is an important perception link in an automatic parking system and has a crucial influence on parking accuracy. In the related art, a vision camera and an ultrasonic sensor are generally used for parking space recognition. The ultrasonic sensor is not affected by ambient light and complements the vision camera well. However, ultrasonic detection results are generally affected by vehicle speed, and false detections, over-detections and missed detections occur. Ultrasound-based parking space detection therefore generally needs to compute an ultrasonic point cloud from the ultrasonic data, cluster the point cloud to obtain obstacles, and then judge among various complicated scenes to detect parking spaces in which the vehicle can park. However, clustering the ultrasonic point cloud and judging the complicated scenes take a long time, so parking space detection is time-consuming.
Disclosure of Invention
In view of the above, the invention provides a parking space detection model construction method, a parking space detection method, equipment and a storage medium, so as to solve the problem of the long time consumption of parking space detection.
In a first aspect, the present invention provides a parking space detection model construction method, where the parking space detection model construction method includes:
acquiring a sample vehicle point cloud and corresponding sample parking spaces;
performing coordinate transformation based on the sample vehicle point cloud and each sample parking space to obtain a sample point cloud image of a preset image size and sample parking space information for each sample parking space;
dividing the sample point cloud image into a plurality of sample grid images according to a preset dividing size;
inputting each sample grid image into a preset parking space detection model, and regressing a preset number of predicted parking spaces for each sample grid image to obtain predicted parking space information for the predicted parking spaces in each sample grid image, wherein the preset number of parking spaces corresponds to the preset dividing size;
and performing parking space matching between the sample parking spaces and the predicted parking spaces located in the same sample grid image, and updating the parameters of the preset parking space detection model based on the parking space matching result, the sample parking space information and the predicted parking space information to obtain a target parking space detection model, so that parking space detection is performed based on the target parking space detection model.
In this manner, the sample vehicle point cloud is converted into a sample point cloud image of a preset image size and sample parking space information of the corresponding sample parking spaces on the sample point cloud image is obtained; the sample point cloud image is then divided into a plurality of sample grid images according to a preset dividing size, and a preset number of predicted parking spaces is regressed for each sample grid image by the preset parking space detection model to obtain predicted parking space information for the predicted parking spaces in each sample grid image. The sizes of the input data and output data of the preset parking space detection model are thereby constrained, so that the preset parking space detection model can be trained effectively to obtain the target parking space detection model. In the subsequent parking space detection process, the acquired target vehicle point cloud only needs to be converted into a target point cloud image of the preset image size, and parking space detection can then be performed based on the target parking space detection model and the target point cloud image; point cloud clustering and complicated scene judgment are unnecessary, so the time consumed by parking space detection is reduced.
In an optional implementation manner, performing parking space matching between the sample parking spaces and the predicted parking spaces in the same sample grid image, and updating the parameters of the preset parking space detection model based on the parking space matching result, the sample parking space information and the predicted parking space information to obtain the target parking space detection model, includes:
performing parking space matching between the sample parking spaces and the predicted parking spaces located in the same sample grid image to obtain a parking space matching result;
processing the sample parking space information and the predicted parking space information based on the parking space matching result to obtain model loss;
and updating parameters of a preset parking space detection model based on the model loss to obtain a target parking space detection model.
In this manner, parking space matching is first performed between the sample parking spaces and the predicted parking spaces located in the same sample grid image to obtain a parking space matching result; the sample parking space information and the predicted parking space information are then processed based on the parking space matching result to obtain a model loss, and the parameters of the preset parking space detection model are updated based on the model loss. The detection precision of the preset parking space detection model can thus be improved, improving the accuracy of subsequent parking space detection.
In an alternative embodiment, the sample parking space information includes sample position information of the sample parking space, a sample confidence and sample probabilities corresponding to a plurality of parking space types; the predicted parking space information includes predicted position information of the predicted parking space, a predicted confidence and predicted probabilities corresponding to the plurality of parking space types; and processing the sample parking space information of the sample parking space and the predicted parking space information of the predicted parking space based on the parking space matching result to obtain the model loss includes:
processing the sample position information and the predicted position information based on the parking space matching result to obtain a coordinate loss;
processing the sample confidence and the predicted confidence based on the parking space matching result to obtain a confidence loss;
processing the sample probabilities and the predicted probabilities based on the parking space matching result to obtain a classification loss;
determining the model loss based on the coordinate loss, the confidence loss and the classification loss.
In the method, the coordinate loss, the confidence loss and the classification loss are calculated respectively based on the parking space matching result, the sample position information, the predicted position information, the sample confidence, the predicted confidence, the sample probability and the predicted probability, and the model loss is determined based on the coordinate loss, the confidence loss and the classification loss, so that the accuracy of the model loss can be ensured, and the accuracy of the subsequent parameter updating of the preset parking space detection model can be improved.
In an alternative embodiment, the sample position information includes a sample center coordinate, a sample width and a sample height; the predicted position information includes a predicted center coordinate, a predicted width and a predicted height; and the coordinate loss is calculated according to the following formula:
$$L_{\mathrm{coord}}=\lambda_{\mathrm{coord}}\sum_{i=1}^{S}\sum_{j=1}^{B}\mathbb{1}_{ij}^{\mathrm{obj}}\left[(x_{ij}-\hat{x}_{ij})^{2}+(y_{ij}-\hat{y}_{ij})^{2}+\left(\sqrt{w_{ij}}-\sqrt{\hat{w}_{ij}}\right)^{2}+\left(\sqrt{h_{ij}}-\sqrt{\hat{h}_{ij}}\right)^{2}\right]$$

where $L_{\mathrm{coord}}$ is the coordinate loss, $\lambda_{\mathrm{coord}}$ is the coordinate coefficient, $S$ is the number of sample grid images, $B$ is the preset number of parking spaces, $\mathbb{1}_{ij}^{\mathrm{obj}}$ is the first parking space flag judging whether the $j$-th predicted parking space in the $i$-th sample grid image has a matched sample parking space, $x_{ij}$ and $y_{ij}$ are the predicted center coordinates of the $j$-th predicted parking space in the $i$-th sample grid image in the x-axis and y-axis directions, $\hat{x}_{ij}$ and $\hat{y}_{ij}$ are the sample center coordinates of the matched sample parking space in the x-axis and y-axis directions, $w_{ij}$ and $h_{ij}$ are the predicted width and predicted height of the $j$-th predicted parking space in the $i$-th sample grid image, and $\hat{w}_{ij}$ and $\hat{h}_{ij}$ are the sample width and sample height of the matched sample parking space.
In an alternative embodiment, the confidence loss is calculated according to the following formula:
$$L_{\mathrm{conf}}=\sum_{i=1}^{S}\sum_{j=1}^{B}\mathbb{1}_{ij}^{\mathrm{obj}}\left(c_{ij}-\hat{c}_{ij}\right)^{2}+\lambda_{\mathrm{noobj}}\sum_{i=1}^{S}\sum_{j=1}^{B}\mathbb{1}_{ij}^{\mathrm{noobj}}\left(c_{ij}-\hat{c}_{ij}\right)^{2}$$

where $L_{\mathrm{conf}}$ is the confidence loss, $S$ is the number of sample grid images, $B$ is the preset number of parking spaces, $\mathbb{1}_{ij}^{\mathrm{obj}}$ is the first parking space flag judging whether the $j$-th predicted parking space in the $i$-th sample grid image has a matched sample parking space, $\lambda_{\mathrm{noobj}}$ is the confidence coefficient, $\mathbb{1}_{ij}^{\mathrm{noobj}}$ is the second parking space flag judging whether the $j$-th predicted parking space in the $i$-th sample grid image has no matched sample parking space, $c_{ij}$ is the predicted confidence of the $j$-th predicted parking space in the $i$-th sample grid image, and $\hat{c}_{ij}$ is the sample confidence of the matched sample parking space.
In an alternative embodiment, the classification loss is calculated according to the following formula:
$$L_{\mathrm{cls}}=\sum_{i=1}^{S}\sum_{j=1}^{B}\mathbb{1}_{ij}^{\mathrm{obj}}\sum_{k\in\mathrm{classes}}\left(p_{ij}(k)-\hat{p}_{ij}(k)\right)^{2}$$

where $L_{\mathrm{cls}}$ is the classification loss, $S$ is the number of sample grid images, $B$ is the preset number of parking spaces, $\mathbb{1}_{ij}^{\mathrm{obj}}$ is the first parking space flag judging whether the $j$-th predicted parking space in the $i$-th sample grid image has a matched sample parking space, $p_{ij}(k)$ is the predicted probability that the $j$-th predicted parking space in the $i$-th sample grid image is of parking space type $k$, $\hat{p}_{ij}(k)$ is the sample probability that the matched sample parking space is of parking space type $k$, and $\mathrm{classes}$ is the set of parking space types.
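Taken together, the three losses above are standard YOLO-style sum-of-squares terms. The following NumPy sketch shows how a model loss could be assembled from them; the function name, array layout, and the coefficient values 5.0 and 0.5 are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def model_loss(pred, target, obj_mask, lam_coord=5.0, lam_noobj=0.5):
    """Sum of coordinate, confidence and classification losses over
    S sample grid images x B predicted parking spaces per image.

    pred, target: float arrays of shape (S, B, 4 + 1 + K) laid out as
    [x, y, w, h, confidence, K parking-space-type probabilities].
    obj_mask: (S, B) booleans; True is the "first flag" (slot j in grid i
    has a matched sample parking space); its negation is the "second flag".
    """
    obj = obj_mask.astype(float)
    noobj = 1.0 - obj

    # Coordinate loss: squared error on centers, on sqrt(w) and sqrt(h).
    xy_err = ((pred[..., 0:2] - target[..., 0:2]) ** 2).sum(axis=-1)
    wh_err = ((np.sqrt(pred[..., 2:4]) - np.sqrt(target[..., 2:4])) ** 2).sum(axis=-1)
    l_coord = lam_coord * (obj * (xy_err + wh_err)).sum()

    # Confidence loss: matched slots at full weight, unmatched down-weighted.
    c_err = (pred[..., 4] - target[..., 4]) ** 2
    l_conf = (obj * c_err).sum() + lam_noobj * (noobj * c_err).sum()

    # Classification loss: per-slot squared error over the K type probabilities.
    p_err = ((pred[..., 5:] - target[..., 5:]) ** 2).sum(axis=-1)
    l_cls = (obj * p_err).sum()

    return l_coord + l_conf + l_cls
```

When the predictions match their targets exactly, every term vanishes and the loss is zero; any mismatch makes it strictly positive.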
In an optional implementation manner, performing the coordinate transformation based on the sample vehicle point cloud and each sample parking space to obtain a sample point cloud image of a preset image size and sample parking space information for each sample parking space includes:
acquiring a point cloud range of the sample vehicle point cloud;
and converting the sample vehicle point cloud and each sample parking space into a preset image coordinate system based on the point cloud range, the preset image size and the correspondence between the point cloud coordinate system of the sample vehicle point cloud and the preset image coordinate system, to obtain a sample point cloud image of the preset image size and sample parking space information for each sample parking space.
In a second aspect, the present invention provides a parking space detection method, which is characterized in that the parking space detection method includes:
acquiring a target vehicle point cloud within a preset range of a target vehicle;
performing coordinate transformation based on the target vehicle point cloud to obtain a target point cloud image of a preset image size;
dividing the target point cloud image according to a preset dividing size to obtain a plurality of target grid images;
inputting each target grid image into a target parking space detection model, and regressing a preset number of pending parking spaces for each target grid image to obtain pending parking space information for the pending parking spaces in each target grid image, wherein the preset number of parking spaces corresponds to the preset dividing size, and the target parking space detection model is obtained by the parking space detection model construction method of the first aspect or any implementation manner of the first aspect;
and determining a target parking space and target parking space information from the pending parking spaces and the corresponding pending parking space information.
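This excerpt does not spell out how the target parking space is chosen from the pending ones; the sketch below assumes a simple confidence-threshold criterion for illustration only (the threshold value, dictionary layout and function name are not from the patent).

```python
def select_target_spaces(pending, conf_threshold=0.5):
    """Keep pending parking spaces whose predicted confidence exceeds
    an assumed threshold, best-first, as one plausible way to pick the
    target parking space(s) and their information."""
    kept = [space for space in pending if space["confidence"] > conf_threshold]
    # Highest confidence first, so the first entry is the target space.
    return sorted(kept, key=lambda s: s["confidence"], reverse=True)
```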
In a third aspect, the present invention provides a computer device, comprising a memory and a processor in communication with each other, wherein the memory stores computer instructions, and the processor executes the computer instructions to perform the parking space detection model construction method and/or the parking space detection method of any of the above embodiments.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which computer instructions are stored, the computer instructions being configured to cause a computer to execute the parking space detection model construction method and/or the parking space detection method according to any one of the above embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a parking space detection model construction method according to an embodiment of the invention;
FIG. 2 is a schematic illustration of a sample vehicle site cloud according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of a sample point cloud image according to an embodiment of the present invention;
FIG. 4 is a flow chart of another method for constructing a parking space detection model according to an embodiment of the invention;
FIG. 5 is a schematic flow chart of a parking space detection method according to an embodiment of the invention;
FIG. 6 is a block diagram of a parking space detection model construction device according to an embodiment of the present invention;
FIG. 7 is a block diagram of a parking space detecting device according to an embodiment of the present invention;
fig. 8 is a schematic structural view of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the related art, the detection of parking spaces based on ultrasound generally requires calculating an ultrasound point cloud according to ultrasound data, then clustering according to the ultrasound point cloud to obtain an obstacle, and further determining through various complicated scenes to detect parking spaces capable of parking. However, the method has long time consumption for clustering the ultrasonic point cloud and judging the complicated scene, so that the time consumption for parking space detection is long.
In view of this, according to an embodiment of the present invention, there is provided an embodiment of a parking space detection model construction method, it should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different from that herein.
This embodiment provides a parking space detection model construction method, which can be used in a vehicle, such as the control system or an electronic control unit of the vehicle, and can also be used in a cloud communicatively connected with the vehicle. Fig. 1 is a flowchart of a parking space detection model construction method according to an embodiment of the present invention; as shown in fig. 1, the flow includes the following steps:
step S101, sample vehicle site clouds and corresponding sample vehicle spaces are obtained.
Specifically, since there is essentially no public data set for ultrasonic parking space detection, generating the sample vehicle point cloud and the corresponding sample parking spaces by manual annotation would require substantial manpower and material resources. In step S101, the conventional method may therefore be adopted: the ultrasonic point cloud is calculated from the ultrasonic data, clustering is performed on the ultrasonic point cloud to obtain obstacles, and parking space detection is implemented by judging among a plurality of complicated scenes, so as to obtain the sample vehicle point cloud and the corresponding sample parking spaces and produce a training set. Of course, the sample vehicle point cloud and the corresponding sample parking spaces can also be obtained by other related methods that detect sample parking spaces from the ultrasonic point cloud; the manner of obtaining the sample vehicle point cloud and the corresponding sample parking spaces is not limited here.
Step S102, coordinate transformation is performed based on the sample vehicle point cloud and each sample parking space to obtain a sample point cloud image of a preset image size and sample parking space information for each sample parking space.
It is worth noting that, because the ultrasonically detected parking space point cloud grows as the vehicle advances, the number of point cloud points per frame is not fixed. To fix the input and facilitate learning by the preset parking space detection model, after the sample vehicle point cloud and the corresponding sample parking spaces are obtained, the sample vehicle point cloud needs to be converted into a sample point cloud image of fixed size, so that the input data size of the preset parking space detection model is unified.
Specifically, the sample parking space information includes sample position information of the sample parking space, a sample confidence and sample probabilities corresponding to a plurality of parking space types.
It should be noted that performing the coordinate transformation based on the sample parking space in step S102 determines, in the converted sample point cloud image, the sample position information of the sample parking space that was originally measured based on the sample vehicle point cloud. The sample confidence of the sample parking space and the sample probabilities corresponding to the plurality of parking space types may be taken from the corresponding confidence and probabilities generated when the sample vehicle point cloud was subjected to parking space detection by the other methods used to obtain each sample parking space.
Specifically, the step S102 includes:
acquiring a point cloud range of a sample vehicle point cloud;
based on the point cloud range, the preset image size and the corresponding relation between the point cloud coordinate system corresponding to the sample vehicle point cloud and the preset image coordinate system, converting the sample vehicle point cloud and each sample parking space into the preset image coordinate system to obtain sample point cloud images with the preset image size and sample parking space information of each sample parking space.
Illustratively, as shown in fig. 2 and 3, assume a gray-scale image of the preset image size is constructed, for example a gray image with a height and width of 640. The initial value of the gray image is set to 0, and the upper right corner of the gray image is taken as the origin of the preset image coordinate system corresponding to the gray image. Taking the target vehicle as the center, the sample vehicle point cloud within a preset range around the target vehicle is converted into the preset image coordinate system; for example, the sample vehicle point cloud within a region-of-interest radius ROI of 20 meters in front of, behind, and to the left and right of the target vehicle is converted into the preset image coordinate system to obtain a sample point cloud image of the preset image size. The specific transformation is as follows. Assume a sample vehicle point cloud coordinate is (X1, Y1) and its pixel coordinate on the corresponding sample point cloud image is (X2, Y2); then X2 = (X1 - ROI)/(ROI*2/width) and Y2 = (Y1 - ROI)/(ROI*2/height).
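The transformation above can be sketched as follows. This is an illustrative implementation of the quoted formula only; note that with the origin at the upper right corner, in-range points map to coordinates in [-width, 0] x [-height, 0], so the rasterizer below indexes pixels by absolute value (that indexing choice is an assumption, not stated in the text).

```python
import numpy as np

def points_to_image(points, roi=20.0, width=640, height=640):
    """Apply the quoted transform X2 = (X1 - ROI)/(ROI*2/width),
    Y2 = (Y1 - ROI)/(ROI*2/height) to vehicle-frame points (meters)."""
    pts = np.asarray(points, dtype=float)
    x2 = (pts[:, 0] - roi) / (roi * 2 / width)
    y2 = (pts[:, 1] - roi) / (roi * 2 / height)
    return np.stack([x2, y2], axis=1)

def rasterize(points, roi=20.0, width=640, height=640):
    """Build the fixed-size gray image (initial value 0) and mark the
    pixels hit by in-range sample points."""
    img = np.zeros((height, width), dtype=np.uint8)
    for x2, y2 in points_to_image(points, roi, width, height):
        col, row = int(abs(x2)), int(abs(y2))  # index from the corner origin
        if col < width and row < height:
            img[row, col] = 255
    return img
```

With ROI = 20 m and a 640-pixel side, the vehicle at (0, 0) lands at pixel magnitude 320, i.e. the image center, which matches the stated scale of 2*ROI meters across the full image width.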
Step S103, dividing the sample point cloud image into a plurality of sample grid images according to a preset dividing size.
It should be noted that if the sample point cloud image were input directly into the preset parking space detection model, the number of output ultrasonic parking space corner points would differ according to the scene in which the target vehicle is located, and the number of parking spaces contained in the sample point cloud image would be difficult to determine, making it difficult to constrain the output data of the preset parking space detection model; for a preset parking space detection model built on a neural network, the output must be fixed in the same format. Therefore, the sample point cloud image needs to be divided into a plurality of sample grid images to limit the maximum number of parking spaces in each sample grid image, thereby constraining the size and format of the output data of the preset parking space detection model when a sample grid image is taken as input.
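Using the 640x640 image and the 64x64 preset dividing size from this embodiment's example, the division can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def split_into_grids(image, grid=64):
    """Tile an (H, W) sample point cloud image into non-overlapping
    (grid, grid) sample grid images, in row-major order."""
    h, w = image.shape
    assert h % grid == 0 and w % grid == 0, "image size must be divisible"
    tiles = image.reshape(h // grid, grid, w // grid, grid).swapaxes(1, 2)
    return tiles.reshape(-1, grid, grid)
```

A 640x640 image yields 100 grid images; with at most a preset number of parking spaces regressed per grid image, the model's output size is fixed regardless of the scene.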
Step S104, each sample grid image is input into the preset parking space detection model, and a preset number of predicted parking spaces is regressed for each sample grid image to obtain predicted parking space information for the predicted parking spaces in each sample grid image, where the preset number of parking spaces corresponds to the preset dividing size.
Optionally, the preset dividing size is 64×64, and the number of preset parking spaces is 2.
Specifically, the preset number of parking spaces is the maximum number of parking spaces contained in a sample grid image.
It should be noted that the preset parking space detection model is a neural network model constructed in advance.
Specifically, the predicted parking space information includes predicted position information of the predicted parking space, predicted confidence, and predicted probabilities corresponding to a plurality of parking space types.
It should be noted that this embodiment uses the target detection concept of the deep-learning YOLO framework to divide the sample point cloud image into sample grid images of the preset dividing size; each sample grid image regresses the preset number of predicted parking spaces, and the predicted parking spaces can be seen as the parking space detection boxes in fig. 3.
Illustratively, take the case where the predicted parking space information of a predicted parking space includes predicted position information, a predicted confidence and predicted probabilities corresponding to a plurality of parking space types. Assume the preset number of parking spaces is 2 and the number of parking space types is 4, for example: two-sided horizontal parking space p1, one-sided horizontal parking space p2, two-sided vertical parking space p3 and one-sided vertical parking space p4. Then prediction probabilities corresponding to the 4 parking space types are obtained for each predicted parking space, i.e. 8 type probabilities per sample grid image. If 4 quantities need to be predicted for each predicted parking space, namely the center coordinates (x, y), the width w and the height h (i.e. the predicted position information), then 8 predicted values are obtained by regressing 2 parking spaces per sample grid image. If each predicted parking space has a corresponding predicted confidence c, used to characterize the accuracy of the predicted parking space relative to the sample parking space in the sample grid image, then a single sample grid image corresponds to 2 predicted confidences.
Thus, the output of the preset parking space detection model can be characterized as [x1, y1, w1, h1, c1, p11, p12, p13, p14, x2, y2, w2, h2, c2, p21, p22, p23, p24], where x1 and y1 are the center coordinates of the first predicted parking space in the sample grid image, w1 is its predicted width, h1 is its predicted height, c1 is its predicted confidence, and p11, p12, p13 and p14 are the predicted probabilities that the first predicted parking space corresponds to the two-sided horizontal parking space p1, the one-sided horizontal parking space p2, the two-sided vertical parking space p3 and the one-sided vertical parking space p4, respectively; x2 and y2 are the center coordinates of the second predicted parking space in the sample grid image, w2 is its predicted width, h2 is its predicted height, c2 is its predicted confidence, and p21, p22, p23 and p24 are the predicted probabilities that the second predicted parking space corresponds to the two-sided horizontal parking space p1, the one-sided horizontal parking space p2, the two-sided vertical parking space p3 and the one-sided vertical parking space p4, respectively.
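Decoding this 18-value vector for one sample grid image can be sketched as follows (the function name and dictionary layout are illustrative assumptions):

```python
def parse_grid_output(vec, num_slots=2, num_types=4):
    """Split one grid image's flat output [x, y, w, h, c, p1..pK] * B
    into one record per predicted parking space."""
    stride = 4 + 1 + num_types           # position (4) + confidence + type probs
    assert len(vec) == num_slots * stride
    slots = []
    for j in range(num_slots):
        chunk = vec[j * stride:(j + 1) * stride]
        slots.append({
            "center": (chunk[0], chunk[1]),
            "width": chunk[2],
            "height": chunk[3],
            "confidence": chunk[4],
            "type_probs": list(chunk[5:]),  # p1..p4, independent per slot
        })
    return slots
```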
It should be noted that, unlike the target detection concept of the deep-learning YOLO framework, which treats the class probabilities of the two bounding boxes in the same grid cell as shared, in this embodiment each predicted parking space has independent prediction probabilities for the different parking space types, since the same space may admit a plurality of parking space types; for example, a space might serve as either a vertical parking space or a horizontal parking space.
Step S105, carrying out parking space matching on the sample parking spaces and the predicted parking spaces which are located in the same sample grid image, and carrying out parameter updating on a preset parking space detection model based on a parking space matching result, sample parking space information and predicted parking space information to obtain a target parking space detection model so as to carry out parking space detection based on the target parking space detection model.
According to the parking space detection model construction method provided by this embodiment, the sample vehicle point cloud is converted into a sample point cloud image with a preset image size, and sample parking space information of each corresponding sample parking space on the sample point cloud image is obtained; the sample point cloud image is then divided into a plurality of sample grid images according to a preset dividing size, and a preset number of predicted parking spaces is regressed for each sample grid image through the preset parking space detection model, so as to obtain the predicted parking space information of the predicted parking spaces in each sample grid image. In this way, the sizes of the input data and output data of the preset parking space detection model are constrained, so that the preset parking space detection model can be effectively trained to obtain the target parking space detection model. Therefore, in the subsequent parking space detection process, the acquired target vehicle point cloud only needs to be converted into a target point cloud image with the preset image size, and parking space detection can be performed based on the target parking space detection model and the target point cloud image; no point cloud clustering or complicated scene judgment is required, which reduces the time consumption of parking space detection.
Fig. 4 is a flowchart of another parking space detection model construction method according to an embodiment of the present invention. As shown in fig. 4, the above step S105 includes:
step S1051, carrying out parking space matching on sample parking spaces and predicted parking spaces which are positioned in the same sample grid image, and obtaining a parking space matching result;
step S1052, processing the sample parking space information and the predicted parking space information based on the parking space matching result to obtain model loss;
and step S1053, updating parameters of the preset parking space detection model based on model loss to obtain a target parking space detection model.
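As an illustrative sketch of the parameter update in step S1053, the following framework-free Python example updates a toy one-parameter "model" by gradient descent on a stand-in model loss. The quadratic loss, learning rate and finite-difference gradient are assumptions for demonstration only, not the patent's actual network or optimizer.

```python
def model_loss(theta, sample=0.8):
    # stand-in for the weighted sum of coordinate, confidence and
    # classification losses: a simple quadratic in one parameter
    return (theta - sample) ** 2

def sgd_step(theta, lr=0.1, eps=1e-6):
    # numerical gradient of the loss, then one gradient-descent update
    grad = (model_loss(theta + eps) - model_loss(theta - eps)) / (2 * eps)
    return theta - lr * grad

theta = 0.0  # initial "model parameter"
for _ in range(100):
    theta = sgd_step(theta)
print(round(theta, 3))  # 0.8, i.e. the parameter converged toward the target
```

In practice the gradient would come from backpropagation through the detection network rather than finite differences; the point here is only the loop structure of "compute model loss, then update parameters".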
According to the parking space detection model construction method provided by the embodiment, firstly, a sample parking space and a predicted parking space which are positioned in the same sample grid image are subjected to parking space matching, and a parking space matching result is obtained; and processing the sample parking space information and the predicted parking space information based on the parking space matching result to obtain model loss, so that parameter updating is performed on a preset parking space detection model based on the model loss, and therefore, the detection precision of the preset parking space detection model can be improved, and the accuracy of subsequent parking space detection is improved.
Specifically, the sample parking space information includes sample position information of the sample parking space, sample confidence and sample probabilities corresponding to a plurality of parking space types; the predicted parking space information includes predicted position information of the predicted parking space, prediction confidence and prediction probabilities corresponding to the plurality of parking space types; then, the above step S1052 includes:
Step a1, processing the sample position information and the predicted position information based on the parking space matching result to obtain the coordinate loss.
Specifically, the sample position information includes a sample center coordinate, a sample length, and a sample width; the predicted position information includes predicted center coordinates, predicted length, and predicted width. The coordinate loss in the above step a1 is calculated according to the following formula:
wherein,for coordinate loss, ++>For the co-ordinate coefficient>For the number of images of the sample grid image, B is a preset number, +.>For the first vehicle position mark for judging whether the jth predicted vehicle position in the ith sample grid image has the matched sample vehicle position, the (I) is added>For the predicted center coordinates of the jth predicted parking spot in the ith sample grid image in the x-axis direction,/>For the sample center coordinates of the sample parking space in the x-axis direction, which are matched with the jth predicted parking space in the ith sample grid image, +.>For the predicted center coordinates of the jth predicted parking spot in the ith sample grid image in the y-axis direction,/>For the sample center coordinates of the sample parking space in the y-axis direction, which are matched with the jth predicted parking space in the ith sample grid image, +.>For the predicted width of the jth predicted parking space in the ith sample grid image,/for >For and in the ith sample grid imageSample width of sample parking places matched with j predicted parking places, +.>For the predicted height of the jth predicted parking space in the ith sample grid image, +.>Sample height for sample carport matching the jth predicted carport in the ith sample grid image.
Step a2, processing the sample confidence and the prediction confidence based on the parking space matching result to obtain the confidence loss.
Specifically, the confidence loss in the above step a2 is calculated according to the following formula:
wherein,for confidence loss, ++>For the number of images of the sample grid image, B is a preset number, +.>For the first vehicle position mark used for judging whether the matched sample vehicle position exists in the j-th predicted vehicle position in the i-th sample grid image,for confidence coefficient, ++>For a second vehicle position mark for judging whether the matched sample vehicle position does not exist in the jth predicted vehicle position in the ith sample grid image, +.>For the j-th predicted parking space in the i-th sample grid imageConfidence of prediction of->Sample confidence for the sample bin that matches the jth predicted bin in the ith sample grid image.
For the first vehicle position flag $\mathbb{1}_{ij}^{obj}$: if the $j$-th predicted parking space in the $i$-th sample grid image has a matched sample parking space, $\mathbb{1}_{ij}^{obj}$ is 1; if it does not, $\mathbb{1}_{ij}^{obj}$ is 0. For the second vehicle position flag, $\mathbb{1}_{ij}^{noobj}=1-\mathbb{1}_{ij}^{obj}$; that is, if the $j$-th predicted parking space in the $i$-th sample grid image has a matched sample parking space, $\mathbb{1}_{ij}^{noobj}$ is 0; if it does not, $\mathbb{1}_{ij}^{noobj}$ is 1.
Step a3, processing the sample probability and the prediction probability based on the parking space matching result to obtain the classification loss.
Specifically, the classification loss in the above step a3 is calculated according to the following formula:
wherein,for classifying loss->For the number of images of the sample grid image, B is a preset number, +.>For the first vehicle position mark used for judging whether the matched sample vehicle position exists in the j-th predicted vehicle position in the i-th sample grid image,the j-th predicted parking space in the ith sample grid image is the parking space type +.>Prediction probability of +.>Sample parking space matched with the jth predicted parking space in the ith sample grid image is a parking space type +. >Is the sample probability of the plurality of parking space types.
Step a4, determining the model loss according to the coordinate loss, the confidence loss and the classification loss.
For example, the coordinate loss, the confidence loss, and the classification loss may be weighted and summed according to a preset ratio to determine the model loss.
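The following Python sketch illustrates steps a1 to a4 with toy numbers, assuming YOLO-style squared-error terms (with width and height compared directly rather than via square roots, for simplicity); the coefficient values and matched pair below are illustrative assumptions, not the patent's actual parameters.

```python
LAMBDA_COORD = 5.0  # assumed coordinate coefficient (YOLO-style value)

def coord_loss(pred, gt):
    # step a1: squared error on center, width and height of a matched pair
    (px, py, pw, ph), (gx, gy, gw, gh) = pred, gt
    return LAMBDA_COORD * ((px - gx) ** 2 + (py - gy) ** 2
                           + (pw - gw) ** 2 + (ph - gh) ** 2)

def conf_loss(pred_c, gt_c, matched, lambda_noobj=0.5):
    # step a2: matched predictions use weight 1, unmatched a smaller weight
    return (1.0 if matched else lambda_noobj) * (pred_c - gt_c) ** 2

def cls_loss(pred_p, gt_p):
    # step a3: squared error over the per-space type probabilities
    return sum((p - g) ** 2 for p, g in zip(pred_p, gt_p))

def model_loss(lc, lf, ls, weights=(1.0, 1.0, 1.0)):
    # step a4: weighted sum according to a preset ratio
    return weights[0] * lc + weights[1] * lf + weights[2] * ls

pred_box, gt_box = (0.5, 0.5, 0.2, 0.4), (0.4, 0.5, 0.2, 0.5)
lc = coord_loss(pred_box, gt_box)                  # 5 * (0.01 + 0.01) = 0.1
lf = conf_loss(0.9, 1.0, matched=True)             # 0.01
ls = cls_loss((0.7, 0.1, 0.1, 0.1), (1, 0, 0, 0))  # 0.12
print(round(model_loss(lc, lf, ls), 3))  # 0.23
```

Changing the `weights` tuple corresponds to the "preset ratio" mentioned above.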
According to the parking space detection model construction method, the coordinate loss, the confidence loss and the classification loss are calculated respectively based on the parking space matching result, the sample position information, the prediction position information, the sample confidence, the prediction confidence, the sample probability and the prediction probability, and the model loss is determined based on the coordinate loss, the confidence loss and the classification loss, so that the accuracy of the model loss can be ensured, and the accuracy of the follow-up parameter updating of the preset parking space detection model is improved.
In this embodiment, a parking space detection method is further provided, which may be used for a vehicle, such as a control system or an electronic control unit of the vehicle, and may also be used for a cloud server communicatively connected to the vehicle, and fig. 5 is a flowchart of a parking space detection method according to an embodiment of the present invention, and as shown in fig. 5, the flowchart includes the following steps:
step S201, a target vehicle site cloud within a preset range of a target vehicle is acquired.
Step S202, coordinate transformation is performed based on the target vehicle point cloud to obtain a target point cloud image with a preset image size.
Specifically, the step S202 includes:
converting the target vehicle point cloud into a preset image coordinate system based on the preset range, the preset image size, and the correspondence between the point cloud coordinate system of the target vehicle point cloud and the preset image coordinate system, so as to obtain a target point cloud image with the preset image size.
Further, the step S202 performs coordinate transformation on the target vehicle point cloud according to the following formula to obtain a target point cloud image with a preset image size:
X2 = (X1 + ROI)/(ROI*2/width), Y2 = (Y1 + ROI)/(ROI*2/height)
wherein X2 is the pixel coordinate of the target vehicle point cloud in the x-axis direction under the preset image coordinate system, X1 is the point cloud coordinate of the target vehicle point cloud in the x-axis direction under its point cloud coordinate system, Y2 is the pixel coordinate of the target vehicle point cloud in the y-axis direction under the preset image coordinate system, Y1 is the point cloud coordinate of the target vehicle point cloud in the y-axis direction under its point cloud coordinate system, ROI is the radius corresponding to the preset range, width is the width in the preset image size, and height is the height in the preset image size.
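As a sketch of this transform, the following Python function assumes the formula maps point cloud coordinates in the range [-ROI, ROI] onto pixel coordinates in [0, width] and [0, height]; the ROI value and image size below are illustrative.

```python
def point_to_pixel(x1, y1, roi, width, height):
    """Map a point cloud coordinate (x1, y1) in [-roi, roi] to pixel coordinates."""
    x2 = (x1 + roi) / (roi * 2 / width)
    y2 = (y1 + roi) / (roi * 2 / height)
    return x2, y2

# a point 10 m to the right of the vehicle with ROI = 10 m lands on the right edge
px, py = point_to_pixel(10.0, 0.0, roi=10.0, width=416, height=416)
print(round(px, 3), round(py, 3))  # 416.0 208.0
```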
Step S203, dividing the target point cloud image according to a preset dividing size to obtain a plurality of target grid images.
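Step S203 can be sketched as follows; the pure-Python tiling below assumes the point cloud image is a 2-D array whose side lengths are divisible by the preset dividing size, and the toy 4x4 "image" is illustrative.

```python
def divide_into_grids(image, grid):
    """Split an H x W image (list of row lists) into (H//grid)*(W//grid) tiles."""
    h, w = len(image), len(image[0])
    tiles = []
    for r in range(0, h, grid):
        for c in range(0, w, grid):
            tiles.append([row[c:c + grid] for row in image[r:r + grid]])
    return tiles

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # toy 4 x 4 "image"
tiles = divide_into_grids(img, 2)
print(len(tiles), tiles[0])  # 4 [[0, 1], [4, 5]]
```

Each tile then plays the role of one target grid image fed to the detection model in step S204.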
Step S204, inputting each target grid image into a target parking space detection model, and regressing a preset number of pending parking spaces for each target grid image to obtain the pending parking space information of the pending parking spaces in each target grid image, wherein the preset parking space number corresponds to the preset dividing size, and the target parking space detection model is obtained based on the parking space detection model construction method of any one of the above embodiments.
Step S205, determining a target parking space and target parking space information from each undetermined parking space and corresponding undetermined parking space information.
Specifically, the pending parking space information includes pending position information of the pending parking space, a pending confidence level, and pending probabilities corresponding to a plurality of parking space types.
Specifically, the step S205 includes:
determining a target parking space from all the pending parking spaces according to the pending confidence degrees of the pending parking spaces;
determining target position information of the target parking space according to the undetermined position information corresponding to the target parking space;
determining the target parking space type of the target parking space according to the undetermined probabilities that the target parking space corresponds to the plurality of parking space types;
and taking the target position information and the target parking space type as target parking space information of the target parking space.
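The selection logic of step S205 can be sketched as follows; the confidence threshold, dictionary field names and toy pending parking spaces are illustrative assumptions rather than the patent's concrete data format.

```python
PARKING_TYPES = ["bilateral_horizontal", "unilateral_horizontal",
                 "bilateral_vertical", "unilateral_vertical"]

def select_target(pending, conf_threshold=0.5):
    """Pick the pending space with the highest confidence above the threshold."""
    candidates = [p for p in pending if p["confidence"] >= conf_threshold]
    if not candidates:
        return None  # no target parking space found
    best = max(candidates, key=lambda p: p["confidence"])
    # target parking space type = most probable of the pending probabilities
    type_idx = max(range(len(PARKING_TYPES)), key=lambda i: best["probs"][i])
    return {"position": best["position"], "type": PARKING_TYPES[type_idx]}

pending = [
    {"position": (1.0, 2.0, 2.5, 5.0), "confidence": 0.3, "probs": [0.9, 0.0, 0.1, 0.0]},
    {"position": (4.0, 2.0, 2.5, 5.0), "confidence": 0.8, "probs": [0.1, 0.1, 0.7, 0.1]},
]
print(select_target(pending))  # {'position': (4.0, 2.0, 2.5, 5.0), 'type': 'bilateral_vertical'}
```

The returned position and type together form the target parking space information described above.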
According to the parking space detection method provided by this embodiment, the acquired target vehicle point cloud is first converted into a target point cloud image with a preset image size, so that the size of the input data of the target parking space detection model is constrained. The target point cloud image is then divided into a plurality of target grid images, the target grid images are input into the target parking space detection model, and a preset number of pending parking spaces is regressed for each target grid image, so that the size of the output data of the target parking space detection model is constrained. The pending parking spaces and pending parking space information in the target grid images are thus obtained based on the target parking space detection model, and the target parking space and target parking space information are determined from them, so that no point cloud clustering or complicated scene judgment is needed, which reduces the time consumption of parking space detection.
It can be understood that, by constraining the input data and output data of the preset parking space detection model and the target parking space detection model, the present invention proposes a general model input/output structure, and further proposes an end-to-end ultrasonic parking space detection method based on deep learning, so that an available target parking space can be obtained rapidly and accurately from the ultrasonically detected parking space point cloud.
In this embodiment, a parking space detection model building device is further provided, and the device is used for implementing the foregoing embodiments and preferred embodiments, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The embodiment provides a parking space detection model construction device, as shown in fig. 6, including:
the sample data acquisition module 301 is configured to acquire a sample vehicle point cloud and corresponding sample parking spaces;
the sample coordinate transformation module 302 is configured to perform coordinate transformation based on the sample vehicle point cloud and each sample parking space, so as to obtain a sample point cloud image with a preset image size and sample parking space information of each sample parking space;
a sample image dividing module 303, configured to divide a sample point cloud image into a plurality of sample grid images according to a preset dividing size;
the sample parking space prediction module 304 is configured to input each sample grid image into a preset parking space detection model, and regress a preset number of predicted parking spaces for each sample grid image, so as to obtain predicted parking space information of the predicted parking spaces in each sample grid image, wherein the preset parking space number corresponds to the preset dividing size;
The model parameter updating module 305 is configured to perform a parking space matching on a sample parking space and a predicted parking space in the same sample grid image, and perform parameter updating on a preset parking space detection model based on a parking space matching result, sample parking space information and predicted parking space information, so as to obtain a target parking space detection model, so as to perform parking space detection based on the target parking space detection model.
In some alternative embodiments, the model parameter update module 305 includes:
the parking space matching unit is used for carrying out parking space matching on the sample parking spaces and the predicted parking spaces which are positioned in the same sample grid image to obtain a parking space matching result;
the loss calculation unit is used for processing the sample parking space information and the predicted parking space information based on the parking space matching result to obtain model loss;
and the parameter updating unit is used for updating parameters of the preset parking space detection model based on model loss to obtain a target parking space detection model.
In some alternative embodiments, the sample parking space information includes sample position information of the sample parking space, sample confidence and sample probabilities corresponding to a plurality of parking space types; the predicted parking space information includes predicted position information of the predicted parking space, prediction confidence and prediction probabilities corresponding to the plurality of parking space types; then, the loss calculation unit includes:
The coordinate loss calculation subunit is used for processing the sample position information and the predicted position information based on the parking space matching result to obtain coordinate loss;
the confidence coefficient loss calculation subunit is used for processing the sample confidence coefficient and the prediction confidence coefficient based on the parking space matching result to obtain a confidence coefficient loss;
the classification loss calculation subunit is used for processing the sample probability and the prediction probability based on the parking space matching result to obtain classification loss;
and the model loss calculation subunit is used for determining the model loss according to the coordinate loss, the confidence loss and the classification loss.
Specifically, the sample position information includes a sample center coordinate, a sample length, and a sample width; the predicted position information comprises a predicted center coordinate, a predicted length and a predicted width; the coordinate loss calculation subunit is specifically configured to:
the coordinate loss is calculated according to the following formula:
wherein,for coordinate loss, ++>For the co-ordinate coefficient>For the number of images of the sample grid image, B is a preset number, +.>For the first vehicle position mark for judging whether the jth predicted vehicle position in the ith sample grid image has the matched sample vehicle position, the (I) is added>The predicted center coordinates of the jth predicted parking space in the ith sample grid image in the x-axis direction, For the sample center coordinates of the sample parking space in the x-axis direction matched with the jth predicted parking space in the ith sample grid image,for the predicted center coordinates of the jth predicted parking spot in the ith sample grid image in the y-axis direction,/>For the sample center coordinates of the sample parking space in the y-axis direction, which are matched with the jth predicted parking space in the ith sample grid image, +.>For the predicted width of the jth predicted parking space in the ith sample grid image,/for>For the ith sample grid image and the jth sample grid imageSample width of sample parking space for predicting parking space matching, +.>For the predicted height of the jth predicted parking space in the ith sample grid image, +.>Sample height for sample carport matching the jth predicted carport in the ith sample grid image.
Specifically, the confidence loss calculation subunit is specifically configured to:
confidence loss was calculated according to the following formula:
wherein,for confidence loss, ++>For the number of images of the sample grid image, B is a preset number, +.>For the first vehicle position mark used for judging whether the matched sample vehicle position exists in the j-th predicted vehicle position in the i-th sample grid image,for confidence coefficient, ++>For a second vehicle position mark for judging whether the matched sample vehicle position does not exist in the jth predicted vehicle position in the ith sample grid image, +. >For the prediction confidence of the jth predicted parking space in the ith sample grid image, +.>Sample confidence for the sample bin that matches the jth predicted bin in the ith sample grid image.
Specifically, the classification loss calculation subunit is specifically configured to:
the classification loss is calculated according to the following formula:
wherein,for classifying loss->For the number of images of the sample grid image, B is a preset number, +.>For the first vehicle position mark used for judging whether the matched sample vehicle position exists in the j-th predicted vehicle position in the i-th sample grid image,the j-th predicted parking space in the ith sample grid image is the parking space type +.>Prediction probability of +.>Sample parking space matched with the jth predicted parking space in the ith sample grid image is a parking space type +.>Class is a plurality of parking space types.
As an alternative embodiment, the sample coordinate transformation module 302 includes:
the point cloud range acquisition unit is used for acquiring the point cloud range of the sample vehicle point cloud;
the sample coordinate transformation unit is configured to transform the sample vehicle point cloud and each sample parking space into a preset image coordinate system based on the point cloud range, the preset image size, and the correspondence between the point cloud coordinate system of the sample vehicle point cloud and the preset image coordinate system, so as to obtain a sample point cloud image with the preset image size and sample parking space information of each sample parking space.
In this embodiment, a parking space detection device is further provided, and the device is used to implement the foregoing embodiments and preferred embodiments, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
This embodiment provides a parking space detection device, as shown in fig. 7, including:
the target point cloud acquisition module 401 is configured to acquire a target vehicle point cloud within a preset range of a target vehicle;
the target coordinate transformation module 402 is configured to perform coordinate transformation based on the target vehicle point cloud to obtain a target point cloud image with a preset image size;
the target image dividing module 403 is configured to divide the target point cloud image according to a preset dividing size to obtain a plurality of target grid images;
the pending parking space prediction module 404 is configured to input each target grid image into a target parking space detection model, and regress a preset number of pending parking spaces for each target grid image, so as to obtain pending parking space information of the pending parking spaces in each target grid image, wherein the preset parking space number corresponds to the preset dividing size, and the target parking space detection model is obtained based on the parking space detection model construction device of any one of the above embodiments;
The target parking space determining module 405 is configured to determine a target parking space and target parking space information from each pending parking space and corresponding pending parking space information.
Further functional descriptions of the above respective modules and units are the same as those of the above corresponding embodiments, and are not repeated here.
The parking space detection device in this embodiment is presented in the form of functional units, where a unit may be an ASIC (Application Specific Integrated Circuit), a processor and memory executing one or more pieces of software or firmware, and/or other devices that can provide the above functions.
The embodiment of the invention also provides computer equipment, which is provided with the parking space detection model construction device shown in the figure 6 and/or the parking space detection device shown in the figure 7.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a computer device according to an alternative embodiment of the present invention. As shown in fig. 8, the computer device includes: one or more processors 10, a memory 20, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the computer device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In some alternative embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple computer devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 10 is illustrated in fig. 8.
The processor 10 may be a central processor, a network processor, or a combination thereof. The processor 10 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
Wherein the memory 20 stores instructions executable by the at least one processor 10 to cause the at least one processor 10 to perform a method for implementing the embodiments described above.
The memory 20 may include a storage program area that may store an operating system, at least one application program required for functions, and a storage data area; the storage data area may store data created according to the use of the computer device, etc. In addition, the memory 20 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, memory 20 may optionally include memory located remotely from processor 10, which may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Memory 20 may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as flash memory, hard disk, or solid state disk; the memory 20 may also comprise a combination of the above types of memories.
The computer device further comprises an input device 30 and an output device 40. The processor 10, the memory 20, the input device 30, and the output device 40 may be connected by a bus or other means; connection by a bus is taken as an example in fig. 8.
The input device 30 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the computer device, such as a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointer stick, one or more mouse buttons, a trackball, a joystick, and the like. The output device 40 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. Such display devices include, but are not limited to, liquid crystal displays, light emitting diode displays, and plasma displays. In some alternative implementations, the display device may be a touch screen.
The embodiments of the present invention also provide a computer-readable storage medium. The method according to the above embodiments of the present invention may be implemented in hardware or firmware, or as computer code that can be recorded on a storage medium, or as computer code originally stored on a remote storage medium or a non-transitory machine-readable storage medium and downloaded through a network to be stored on a local storage medium, so that the method described herein can be processed by software stored on such a storage medium using a general-purpose computer, a special-purpose processor, or programmable or special-purpose hardware. The storage medium can be a magnetic disk, an optical disk, a read-only memory, a random access memory, a flash memory, a hard disk, a solid state disk or the like; further, the storage medium may also comprise a combination of memories of the kinds described above. It will be appreciated that a computer, processor, microprocessor controller or programmable hardware includes a storage element that can store or receive software or computer code that, when accessed and executed by the computer, processor or hardware, implements the methods illustrated by the above embodiments.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.

Claims (10)

1. The construction method of the parking space detection model is characterized by comprising the following steps of:
acquiring a sample vehicle point cloud and corresponding sample parking spaces;
performing coordinate transformation based on the sample vehicle point cloud and each sample parking space to obtain a sample point cloud image with a preset image size and sample parking space information of each sample parking space;
dividing the sample point cloud image into a plurality of sample grid images according to a preset dividing size;
inputting each sample grid image into a preset parking space detection model, and regressing a preset number of predicted parking spaces for each sample grid image to obtain predicted parking space information of the predicted parking spaces in each sample grid image, wherein the preset parking space number corresponds to the preset dividing size;
and carrying out parking space matching on the sample parking spaces and the predicted parking spaces which are positioned in the same sample grid image, and carrying out parameter updating on a preset parking space detection model based on a parking space matching result, sample parking space information and the predicted parking space information to obtain a target parking space detection model so as to carry out parking space detection based on the target parking space detection model.
2. The parking space detection model construction method according to claim 1, wherein the performing parking space matching on the sample parking spaces and the predicted parking spaces located in the same sample grid image, and performing parameter updating on the preset parking space detection model based on the parking space matching result, the sample parking space information, and the predicted parking space information to obtain the target parking space detection model comprises:
performing parking space matching on the sample parking spaces and the predicted parking spaces located in the same sample grid image to obtain the parking space matching result;
processing the sample parking space information and the predicted parking space information based on the parking space matching result to obtain a model loss;
and updating parameters of the preset parking space detection model based on the model loss to obtain the target parking space detection model.
3. The parking space detection model construction method according to claim 2, wherein the sample parking space information includes sample position information of the sample parking space, a sample confidence, and sample probabilities corresponding to a plurality of parking space types; the predicted parking space information includes predicted position information of the predicted parking space, a predicted confidence, and predicted probabilities corresponding to the plurality of parking space types; and the processing the sample parking space information and the predicted parking space information based on the parking space matching result to obtain the model loss comprises:
processing the sample position information and the predicted position information based on the parking space matching result to obtain a coordinate loss;
processing the sample confidence and the predicted confidence based on the parking space matching result to obtain a confidence loss;
processing the sample probabilities and the predicted probabilities based on the parking space matching result to obtain a classification loss;
and determining the model loss based on the coordinate loss, the confidence loss, and the classification loss.
4. The parking space detection model construction method according to claim 3, wherein the sample position information includes a sample center coordinate, a sample width, and a sample height; the predicted position information includes a predicted center coordinate, a predicted width, and a predicted height; and the coordinate loss is calculated according to the following formula:

L_{coord} = \lambda_{coord} \sum_{i=1}^{S} \sum_{j=1}^{B} \mathbb{1}_{ij}^{obj} \left[ (\hat{x}_{ij} - x_{ij})^2 + (\hat{y}_{ij} - y_{ij})^2 + (\hat{w}_{ij} - w_{ij})^2 + (\hat{h}_{ij} - h_{ij})^2 \right]

wherein L_{coord} is the coordinate loss; \lambda_{coord} is the coordinate coefficient; S is the number of sample grid images; B is the preset parking space number; \mathbb{1}_{ij}^{obj} is the first parking space flag indicating whether the j-th predicted parking space in the i-th sample grid image has a matched sample parking space; \hat{x}_{ij} and \hat{y}_{ij} are the predicted center coordinates, in the x-axis and y-axis directions, of the j-th predicted parking space in the i-th sample grid image; x_{ij} and y_{ij} are the sample center coordinates, in the x-axis and y-axis directions, of the sample parking space matched with the j-th predicted parking space in the i-th sample grid image; \hat{w}_{ij} and \hat{h}_{ij} are the predicted width and the predicted height of the j-th predicted parking space in the i-th sample grid image; and w_{ij} and h_{ij} are the sample width and the sample height of the sample parking space matched with the j-th predicted parking space in the i-th sample grid image.
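The coordinate loss of claim 4 is a masked sum of squared errors over the matched grid/slot pairs. The sketch below assumes NumPy arrays and a coordinate coefficient of 5.0 purely for illustration; the function name and the coefficient value are not from the patent.

```python
import numpy as np

def coordinate_loss(obj_mask, pred_boxes, gt_boxes, lambda_coord=5.0):
    """pred_boxes, gt_boxes: (S, B, 4) arrays of (x, y, w, h);
    obj_mask: (S, B) flag, 1 where the j-th predicted parking space in
    the i-th sample grid image has a matched sample parking space."""
    sq_err = np.sum((pred_boxes - gt_boxes) ** 2, axis=-1)  # (S, B)
    return float(lambda_coord * np.sum(obj_mask * sq_err))

# One grid image, one slot, matched; only x differs by 1 -> loss = 5.0 * 1
pred = np.array([[[1.0, 1.0, 2.0, 2.0]]])
gt = np.array([[[0.0, 1.0, 2.0, 2.0]]])
mask = np.array([[1.0]])
print(coordinate_loss(mask, pred, gt))  # 5.0
```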
5. The parking space detection model construction method according to claim 3, wherein the confidence loss is calculated according to the following formula:

L_{conf} = \sum_{i=1}^{S} \sum_{j=1}^{B} \mathbb{1}_{ij}^{obj} (\hat{C}_{ij} - C_{ij})^2 + \lambda_{noobj} \sum_{i=1}^{S} \sum_{j=1}^{B} \mathbb{1}_{ij}^{noobj} (\hat{C}_{ij} - C_{ij})^2

wherein L_{conf} is the confidence loss; S is the number of sample grid images; B is the preset parking space number; \mathbb{1}_{ij}^{obj} is the first parking space flag indicating whether the j-th predicted parking space in the i-th sample grid image has a matched sample parking space; \lambda_{noobj} is the confidence coefficient; \mathbb{1}_{ij}^{noobj} is the second parking space flag indicating whether the j-th predicted parking space in the i-th sample grid image has no matched sample parking space; \hat{C}_{ij} is the predicted confidence of the j-th predicted parking space in the i-th sample grid image; and C_{ij} is the sample confidence of the sample parking space matched with the j-th predicted parking space in the i-th sample grid image.
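The confidence loss of claim 5 splits into a matched term and an unmatched term weighted by the confidence coefficient. A minimal NumPy sketch follows; the λ_noobj value of 0.5 and the target of 0 for unmatched slots are assumptions for illustration.

```python
import numpy as np

def confidence_loss(obj_mask, pred_conf, gt_conf, lambda_noobj=0.5):
    """pred_conf, gt_conf: (S, B) confidences; obj_mask: (S, B) flag,
    1 where a matched sample parking space exists. The unmatched
    (no-object) term is down-weighted by the confidence coefficient."""
    noobj_mask = 1.0 - obj_mask
    sq = (pred_conf - gt_conf) ** 2
    return float(np.sum(obj_mask * sq) + lambda_noobj * np.sum(noobj_mask * sq))

# Slot 0 matched (target 1.0), slot 1 unmatched (target 0.0):
mask = np.array([[1.0, 0.0]])
pred = np.array([[0.8, 0.3]])
gt = np.array([[1.0, 0.0]])
print(round(confidence_loss(mask, pred, gt), 6))  # 0.085
```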
6. The parking space detection model construction method according to claim 3, wherein the classification loss is calculated according to the following formula:

L_{cls} = \sum_{i=1}^{S} \sum_{j=1}^{B} \mathbb{1}_{ij}^{obj} \sum_{c \in classes} (\hat{p}_{ij}(c) - p_{ij}(c))^2

wherein L_{cls} is the classification loss; S is the number of sample grid images; B is the preset parking space number; \mathbb{1}_{ij}^{obj} is the first parking space flag indicating whether the j-th predicted parking space in the i-th sample grid image has a matched sample parking space; classes is the set of the plurality of parking space types; \hat{p}_{ij}(c) is the predicted probability that the j-th predicted parking space in the i-th sample grid image is of parking space type c; and p_{ij}(c) is the sample probability that the sample parking space matched with the j-th predicted parking space in the i-th sample grid image is of parking space type c.
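The classification loss of claim 6 sums squared probability errors over the parking space types, restricted to matched slots. A sketch under the same assumed array layout as above:

```python
import numpy as np

def classification_loss(obj_mask, pred_probs, gt_probs):
    """pred_probs, gt_probs: (S, B, K) probabilities over K parking
    space types; only slots with a matched sample parking space
    (obj_mask == 1) contribute."""
    sq = np.sum((pred_probs - gt_probs) ** 2, axis=-1)  # sum over types
    return float(np.sum(obj_mask * sq))

# Two types; the matched slot predicts (0.6, 0.4) against one-hot (1, 0):
mask = np.array([[1.0]])
pred = np.array([[[0.6, 0.4]]])
gt = np.array([[[1.0, 0.0]]])
print(round(classification_loss(mask, pred, gt), 6))  # 0.32
```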
7. The parking space detection model construction method according to claim 1, wherein the performing coordinate transformation based on the sample vehicle point cloud and each sample parking space to obtain the sample point cloud image with the preset image size and the sample parking space information of each sample parking space comprises:
acquiring a point cloud range of the sample vehicle point cloud;
and converting the sample vehicle point cloud and each sample parking space into a preset image coordinate system based on the point cloud range, the preset image size, and a correspondence between the point cloud coordinate system of the sample vehicle point cloud and the preset image coordinate system, so as to obtain the sample point cloud image with the preset image size and the sample parking space information of each sample parking space.
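The conversion in claim 7 maps coordinates within the point cloud range onto the preset image size. A per-axis linear mapping is one plausible sketch; the function name and the linearity are assumptions, not claimed specifics.

```python
import numpy as np

def point_cloud_to_image_coords(points_xy, cloud_min, cloud_max, image_size):
    """Linearly map point cloud (x, y) coordinates, bounded by the
    point cloud range [cloud_min, cloud_max] per axis, onto pixel
    indices of a square image of the preset image size."""
    scale = (image_size - 1) / (cloud_max - cloud_min)
    return np.rint((points_xy - cloud_min) * scale).astype(int)

# A point cloud spanning [0, 10] m mapped onto a 128-pixel image:
points = np.array([[0.0, 0.0], [10.0, 10.0]])
print(point_cloud_to_image_coords(points, 0.0, 10.0, 128).tolist())
# [[0, 0], [127, 127]]
```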
8. A parking space detection method, characterized by comprising the following steps:
acquiring a target vehicle point cloud within a preset range of a target vehicle;
performing coordinate transformation based on the target vehicle point cloud to obtain a target point cloud image with a preset image size;
dividing the target point cloud image according to a preset dividing size to obtain a plurality of target grid images;
inputting each target grid image into a target parking space detection model, and regressing a preset parking space number of undetermined parking spaces for each target grid image to obtain undetermined parking space information of the undetermined parking spaces in each target grid image, wherein the preset parking space number corresponds to the preset dividing size, and the target parking space detection model is obtained by the parking space detection model construction method according to any one of claims 1 to 7;
and determining a target parking space and target parking space information from the undetermined parking spaces and the corresponding undetermined parking space information.
9. A computer device, characterized by comprising:
a memory and a processor communicatively connected to each other, wherein the memory stores computer instructions, and the processor executes the computer instructions so as to perform the parking space detection model construction method according to any one of claims 1 to 7 and/or the parking space detection method according to claim 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer instructions for causing a computer to execute the parking space detection model construction method according to any one of claims 1 to 7 and/or the parking space detection method according to claim 8.
CN202311063234.6A 2023-08-23 2023-08-23 Parking space detection model construction method, parking space detection method, equipment and storage medium Active CN116778458B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311063234.6A CN116778458B (en) 2023-08-23 2023-08-23 Parking space detection model construction method, parking space detection method, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116778458A CN116778458A (en) 2023-09-19
CN116778458B true CN116778458B (en) 2023-12-08

Family

ID=88006703

Country Status (1)

Country Link
CN (1) CN116778458B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117012053A (en) * 2023-09-28 2023-11-07 东风悦享科技有限公司 Post-optimization method, system and storage medium for parking space detection point

Citations (3)

Publication number Priority date Publication date Assignee Title
US10491885B1 (en) * 2018-06-13 2019-11-26 Luminar Technologies, Inc. Post-processing by lidar system guided by camera information
CN111949943A (en) * 2020-07-24 2020-11-17 北京航空航天大学 Vehicle fusion positioning method for V2X and laser point cloud registration for advanced automatic driving
CN115457492A (en) * 2022-09-30 2022-12-09 苏州万集车联网技术有限公司 Target detection method and device, computer equipment and storage medium


Similar Documents

Publication Publication Date Title
CN113298169B (en) Rotating target detection method and device based on convolutional neural network
KR20210135389A (en) Apparatus for recognizing an obstacle, a vehicle system having the same and method thereof
JP7274515B2 (en) Sensor solution determination method, device, equipment and storage medium
CN116778458B (en) Parking space detection model construction method, parking space detection method, equipment and storage medium
JP2021119507A (en) Traffic lane determination method, traffic lane positioning accuracy evaluation method, traffic lane determination apparatus, traffic lane positioning accuracy evaluation apparatus, electronic device, computer readable storage medium, and program
CN116645649B (en) Vehicle pose and size estimation method, device and storage medium
US20210149408A1 (en) Generating Depth From Camera Images and Known Depth Data Using Neural Networks
CN114067564B (en) Traffic condition comprehensive monitoring method based on YOLO
CN111860072A (en) Parking control method and device, computer equipment and computer readable storage medium
Gluhaković et al. Vehicle detection in the autonomous vehicle environment for potential collision warning
CN110705338A (en) Vehicle detection method and device and monitoring equipment
CN111402326A (en) Obstacle detection method and device, unmanned vehicle and storage medium
CN111523334A (en) Method and device for setting virtual forbidden zone, terminal equipment, label and storage medium
CN111950345A (en) Camera identification method and device, electronic equipment and storage medium
CN113963061A (en) Road edge distribution information acquisition method and device, electronic equipment and storage medium
CN112749701A (en) Method for generating license plate contamination classification model and license plate contamination classification method
CN108805121B (en) License plate detection and positioning method, device, equipment and computer readable medium
CN113780480B (en) Method for constructing multi-target detection and category identification model based on YOLOv5
CN108960160A (en) The method and apparatus of structural state amount are predicted based on unstructured prediction model
CN115062240A (en) Parking lot sorting method and device, electronic equipment and storage medium
CN114677662A (en) Method, device, equipment and storage medium for predicting vehicle front obstacle state
CN111753768A (en) Method, apparatus, electronic device and storage medium for representing shape of obstacle
CN111336984A (en) Obstacle ranging method, device, equipment and medium
CN111860084A (en) Image feature matching and positioning method and device and positioning system
CN112435293B (en) Method and device for determining structural parameter representation of lane line

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant