CN117593720A - Parking space data mining method, device, equipment and storage medium - Google Patents

Parking space data mining method, device, equipment and storage medium

Info

Publication number
CN117593720A
CN117593720A (Application CN202311302035.6A)
Authority
CN
China
Prior art keywords
image
parking space
deep learning
learning model
parking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311302035.6A
Other languages
Chinese (zh)
Inventor
赵振宇
Current Assignee
DeepRoute AI Ltd
Original Assignee
DeepRoute AI Ltd
Priority date
Filing date
Publication date
Application filed by DeepRoute AI Ltd
Priority: CN202311302035.6A
Publication: CN117593720A
Legal status: Pending

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/586: of parking space
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/778: Active pattern-learning, e.g. online learning of image or video features
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of data mining and discloses a parking space data mining method, device, equipment and storage medium. The parking space data mining method comprises the following steps: acquiring a plurality of images each containing a plurality of parking spaces; inputting each image in turn into a first deep learning model and a second deep learning model to perform parking space recognition, and correspondingly outputting a first recognition result and a second recognition result; counting, for each image, the number of parking spaces of each difference type between the corresponding first recognition result and second recognition result; calculating a difference score for each image according to the number of parking spaces of each difference type; and outputting, as the parking space data, the images whose difference scores meet a preset condition. The invention can automatically process massive image data, which not only reduces processing cost but also improves the value of the mined image data.

Description

Parking space data mining method, device, equipment and storage medium
Technical Field
The present invention relates to the field of data mining technologies, and in particular, to a method, an apparatus, a device, and a storage medium for mining parking space data.
Background
In the field of automatic parking, a vehicle controller generally detects parking spaces based on a deep learning model: image data is input into a trained model for detection, and the model outputs a parking space detection result. Training such a model, however, requires a large amount of data. When massive image data are acquired, the data of relatively high value must be mined from the acquired image data for training the model.
In the prior art, data are generally mined based on manual labeling. All acquired images must be reviewed manually, and each frame of image is marked with preset labels, such as "indoor parking lot", "outdoor parking lot" and the like; a preset number of images is then extracted under a given label for model training. With this manual labeling method, the labels are numerous and heavily repetitive (for example, the two labels "outdoor parking lot" and "asphalt parking lot" partially overlap in meaning), so a large amount of time and labor cost is consumed, the extracted data contain a large amount of redundancy, the iteration period of the model lengthens, and the training effect of the model deteriorates.
Disclosure of Invention
The invention mainly aims to provide a parking space data mining method, device, equipment and storage medium, so as to solve the problem of poor model training effect caused by the high mining cost and low value of existing model training data.
The first aspect of the invention provides a parking space data mining method, which comprises the following steps:
acquiring a plurality of images containing a plurality of parking spaces;
inputting each image into a first deep learning model and a second deep learning model in turn to perform parking space recognition, and correspondingly outputting a first recognition result and a second recognition result;
respectively counting the number of the parking spaces with different difference types in the first identification result and the second identification result corresponding to each image;
calculating the difference score of each image according to the number of the parking spaces of each difference type;
and taking the image with the difference score meeting the preset condition as parking space data and outputting the parking space data.
In a first implementation manner of the first aspect of the present invention, the counting the number of parking spaces of different difference types in the first recognition result and the second recognition result corresponding to each image includes:
comparing the first recognition result and the second recognition result corresponding to each image respectively;
according to the comparison result, determining the respective matched parking spaces and the respective unmatched parking spaces of the first deep learning model and the second deep learning model in each image;
and counting the matching situations of the parking spaces identified by the first deep learning model and the second deep learning model in each image to obtain the number of the parking spaces with different difference types in each image.
In a second implementation manner of the first aspect of the present invention, the comparing the first recognition result and the second recognition result corresponding to each image includes:
respectively comparing whether a first parking space in the first recognition result corresponding to each image is matched with a second parking space in the second recognition result;
if they are matched, determining that the first deep learning model and the second deep learning model identify a parking space at the same position in the same image;
if they are not matched, determining that the first deep learning model and the second deep learning model identify parking spaces at different positions in the same image.
In a third implementation manner of the first aspect of the present invention, the comparing whether the first parking space in the first recognition result and the second parking space in the second recognition result corresponding to each image are matched includes:
respectively calculating the overlapping area of a first parking space in the first recognition result and a second parking space in the second recognition result corresponding to each image by taking the first recognition result as a comparison standard;
judging whether the overlapping area is larger than a preset area threshold value or not;
if the overlapping area is larger than a preset area threshold, judging whether the included angle between the first parking space and the second parking space is smaller than a preset included angle threshold;
and if the included angle is smaller than a preset included angle threshold, determining that the first parking space is matched with the second parking space, otherwise, determining that the first parking space is not matched with the second parking space.
In a fourth implementation manner of the first aspect of the present invention, the counting the matching situations of the parking spaces identified by the first deep learning model and the second deep learning model in each image, to obtain the number of the parking spaces with different difference types in each image includes:
counting the parking spaces identified at the same position in each image by both the first deep learning model and the second deep learning model, so as to obtain the number of the parking spaces belonging to the first difference type in each image;
counting the parking spaces which are not identified by the first deep learning model but are identified by the second deep learning model at the same position in each image, so as to obtain the number of the parking spaces belonging to the second difference type in each image;
and counting the parking spaces which are identified by the first deep learning model but are not identified by the second deep learning model at the same position in each image, so as to obtain the number of the parking spaces belonging to the third difference type in each image.
In a fifth implementation manner of the first aspect of the present invention, a calculation formula of the difference score of each image is as follows:
score=TP/(TP+FP+FN);
wherein score represents the difference score of the image, TP represents the number of parking spaces of the first difference type, FP represents the number of parking spaces of the second difference type, and FN represents the number of parking spaces of the third difference type.
In a sixth implementation manner of the first aspect of the present invention, the outputting, as the parking space data, the image in which the difference score meets the preset condition includes:
sorting all the images based on the difference score of each image to obtain an image sequence;
and outputting, as the parking space data, the K images with the largest image recognition difference in the image sequence, wherein K is a pre-designated number of mined images.
The second aspect of the present invention provides a parking space data mining apparatus, including:
the acquisition module is used for acquiring a plurality of images containing a plurality of parking spaces;
the recognition module is used for inputting each image in turn into the first deep learning model and the second deep learning model to perform parking space recognition, and correspondingly outputting a first recognition result and a second recognition result;
the statistics module is used for respectively counting the number of the parking spaces with different difference types in the first identification result and the second identification result corresponding to each image;
the calculating module is used for calculating the difference score of each image according to the number of the parking spaces of each difference type;
and the output module is used for taking the image with the difference score meeting the preset condition as the parking space data and outputting the parking space data.
In a first implementation manner of the second aspect of the present invention, the statistics module includes:
the comparison unit is used for respectively comparing the first identification result and the second identification result corresponding to each image;
the determining unit is used for determining the respective matched parking spaces and the respective unmatched parking spaces of the first deep learning model and the second deep learning model in each image according to the comparison result;
the statistics unit is used for counting the parking space matching situations respectively identified by the first deep learning model and the second deep learning model in each image to obtain the number of the parking spaces with different difference types in each image.
In a second implementation manner of the second aspect of the present invention, the comparing unit is specifically configured to:
respectively comparing whether a first parking space in the first recognition result corresponding to each image is matched with a second parking space in the second recognition result;
if they are matched, determining that the first deep learning model and the second deep learning model identify a parking space at the same position in the same image;
if they are not matched, determining that the first deep learning model and the second deep learning model identify parking spaces at different positions in the same image.
In a third implementation manner of the second aspect of the present invention, the comparing unit is further configured to:
respectively calculating the overlapping area of a first parking space in the first recognition result and a second parking space in the second recognition result corresponding to each image by taking the first recognition result as a comparison standard;
judging whether the overlapping area is larger than a preset area threshold value or not;
if the overlapping area is larger than a preset area threshold, judging whether the included angle between the first parking space and the second parking space is smaller than a preset included angle threshold;
and if the included angle is smaller than a preset included angle threshold, determining that the first parking space is matched with the second parking space, otherwise, determining that the first parking space is not matched with the second parking space.
In a fourth implementation manner of the second aspect of the present invention, the statistics unit is specifically configured to:
counting the parking spaces identified at the same position in each image by both the first deep learning model and the second deep learning model, so as to obtain the number of the parking spaces belonging to the first difference type in each image;
counting the parking spaces which are not identified by the first deep learning model but are identified by the second deep learning model at the same position in each image, so as to obtain the number of the parking spaces belonging to the second difference type in each image;
and counting the parking spaces which are identified by the first deep learning model but are not identified by the second deep learning model at the same position in each image, so as to obtain the number of the parking spaces belonging to the third difference type in each image.
In a fifth implementation manner of the second aspect of the present invention, a calculation formula of the difference score of each image is as follows:
score=TP/(TP+FP+FN);
wherein score represents the difference score of the image, TP represents the number of parking spaces of the first difference type, FP represents the number of parking spaces of the second difference type, and FN represents the number of parking spaces of the third difference type.
In a sixth implementation manner of the second aspect of the present invention, the output module is configured to:
sorting all the images based on the difference score of each image to obtain an image sequence;
and outputting, as the parking space data, the K images with the largest image recognition difference in the image sequence, wherein K is a pre-designated number of mined images.
A third aspect of the present invention provides a computer apparatus comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the computer device to perform the parking space data mining method described above.
A fourth aspect of the present invention provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the above-described parking space data mining method.
According to the technical scheme provided by the invention, each image in the acquired parking space image set is input into a first deep learning model and a second deep learning model respectively for parking space recognition, and a first recognition result and a second recognition result are correspondingly output; the number of parking spaces of each difference type between the first recognition result and the second recognition result is counted for each image; a difference score is calculated for each image according to the number of parking spaces of each difference type; finally, the images whose difference scores meet a preset condition are output as the mined parking space data. The method of the invention requires no manual intervention and can automatically process massive data, thereby saving considerable cost. In addition, because the process involves no subjective judgment introduced by manual screening, the mined data is of higher value and the model training effect is better.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a parking space data mining method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an embodiment of a parking space data mining apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of one embodiment of a computer device in an embodiment of the invention.
Detailed Description
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For easy understanding, the following describes a specific flow of an embodiment of the present invention, referring to fig. 1, and one embodiment of a parking space data mining method in an embodiment of the present invention includes:
101. acquiring a plurality of images containing a plurality of parking spaces;
in this embodiment, the object of the parking space data mining is a set of pre-acquired images each containing a plurality of parking spaces; the parking spaces and the number of parking spaces contained in different images may be the same or different. For example, image 1 may contain parking spaces 1-10 while image 2 contains parking spaces 3-12.
102. Inputting each image into a first deep learning model and a second deep learning model in turn to perform parking space recognition, and correspondingly outputting a first recognition result and a second recognition result;
in this embodiment, the first deep learning model and the second deep learning model are parking space detection models which are trained in advance and are used for identifying parking spaces. The training algorithm adopted for each of the first deep learning model and the second deep learning model is not limited.
To improve the data mining effect, a first deep learning model and a second deep learning model trained with different training algorithms are preferably used. In an embodiment, the first deep learning model is preferably trained with a detection-box or key-point based algorithm, such as YOLO or SSD; the second deep learning model is preferably trained with a semantic segmentation algorithm, such as FCN or U-Net.
Assuming that 100 images are obtained, each of the 100 images is input into the first deep learning model and the second deep learning model to perform parking space recognition, yielding the first recognition results output by the first deep learning model and the second recognition results output by the second deep learning model for the 100 images; that is, each image corresponds to exactly one first recognition result and one second recognition result.
In this embodiment, the first recognition result and the second recognition result preferably adopt the same representation of the parking space detection result, for example the four vertex coordinates of the parking space boundary, or the parking space center point plus the width and height of the parking space frame plus the parking space orientation. The following illustration uses the four vertex coordinates of the parking space boundary.
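As a concrete illustration, the four-vertex representation described above can be modeled as a small data structure. This is a hypothetical sketch (the `ParkingSpace` class and its `center` helper are not part of the patent), assuming 2-D image coordinates:

```python
from dataclasses import dataclass
from typing import Tuple

Point = Tuple[float, float]

@dataclass(frozen=True)
class ParkingSpace:
    """A detected parking space, represented by the four vertex
    coordinates of its boundary (one representation named in the text)."""
    vertices: Tuple[Point, Point, Point, Point]

    def center(self) -> Point:
        # Centroid of the four boundary vertices.
        xs = [v[0] for v in self.vertices]
        ys = [v[1] for v in self.vertices]
        return (sum(xs) / 4.0, sum(ys) / 4.0)
```

The alternative representation (center point plus frame width/height plus orientation) would carry the same information for rectangular spaces.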
103. Respectively counting the number of the parking spaces with different difference types in the first identification result and the second identification result corresponding to each image;
in this embodiment, in order to facilitate the quantitative analysis of the recognition results of both the first deep learning model and the second deep learning model, an image recognition difference type is introduced.
In one embodiment, the image recognition difference type includes:
First difference type (TP): at a certain position of the image, the first deep learning model identifies a parking space and the second deep learning model also identifies a parking space at the same position;
Second difference type (FP): at a certain position of the image, the first deep learning model does not identify a parking space, but the second deep learning model identifies a parking space at the same position;
Third difference type (FN): at a certain position of the image, the first deep learning model identifies a parking space, but the second deep learning model does not identify a parking space at the same position.
In this embodiment, because the first deep learning model and the second deep learning model are trained with different training algorithms, the two models recognize images differently. A parking space in an image may therefore be identified by both models, or by only one of them. It should be noted that positions where neither the first deep learning model nor the second deep learning model identifies a parking space represent no recognition difference between the models, and therefore do not participate in the counting of the parking spaces of the difference types.
For example, suppose image A contains parking spaces 1-5, the first deep learning model identifies spaces 1-4, and the second deep learning model identifies spaces 2-5. From the definitions above, spaces 2-4 belong to the first difference type, space 5 belongs to the second difference type, and space 1 belongs to the third difference type.
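Once each detected space has been reduced to a comparable position identifier (as in the example above, where spaces are numbered 1-5), the three difference-type counts reduce to set operations. A minimal sketch, assuming matching has already been resolved to shared IDs (the function name is illustrative, not from the patent):

```python
def count_difference_types(first_ids: set, second_ids: set) -> tuple:
    """Return (TP, FP, FN) counts for one image.

    TP (first difference type):  spaces identified by both models.
    FP (second difference type): spaces identified only by the second model.
    FN (third difference type):  spaces identified only by the first model.
    """
    tp = len(first_ids & second_ids)
    fp = len(second_ids - first_ids)
    fn = len(first_ids - second_ids)
    return tp, fp, fn
```

For the example in the text (first model finds spaces 1-4, second finds spaces 2-5), this yields TP=3, FP=1, FN=1.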
104. Calculating the difference score of each image according to the number of the parking spaces of each difference type;
the number of parking spaces of different difference types corresponding to each image can be obtained through step 103, and in this embodiment, in order to facilitate comprehensive evaluation of the overall differences of different images, a difference score is introduced, and the difference score is used for measuring the difference of the results of the images identified by different models.
In one embodiment, the difference score for each image is calculated as follows:
score=TP/(TP+FP+FN);
wherein score represents the difference score of the image, TP represents the number of parking spaces of the first difference type, FP represents the number of parking spaces of the second difference type, and FN represents the number of parking spaces of the third difference type.
In this embodiment, it follows from the definitions of the difference types that the larger the score, the smaller the recognition difference of the image, and the smaller the score, the larger the difference.
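The scoring formula above is straightforward to implement. In this sketch, the guard for an image in which neither model detects anything (returning 1.0, i.e. treating it as "no difference", consistent with the text's note that such images do not contribute difference counts) is an assumption, not a rule stated by the patent:

```python
def difference_score(tp: int, fp: int, fn: int) -> float:
    """score = TP / (TP + FP + FN).

    Larger score means a smaller recognition difference for the image;
    smaller score means a larger difference."""
    total = tp + fp + fn
    return tp / total if total > 0 else 1.0  # assumed: no detections = no difference
```

For the running example (TP=3, FP=1, FN=1), the score is 3/5 = 0.6.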
105. And taking the image with the difference score meeting the preset condition as parking space data and outputting the parking space data.
In this embodiment, the recognition of the same image by different models is used to characterize its difference: the more obviously the two recognition results differ, the higher the training value of the image, and using images with obvious differences for model training yields a better-trained parking space detection model.
In this embodiment, how the magnitude of the difference score relates to the magnitude of the image's difference (positive or negative correlation) is determined by the calculation formula of the difference score, and the preset condition must be set according to that correlation: if the difference score is positively correlated with the image difference, images with high difference scores are selected as the mined parking space data; otherwise, images with low difference scores are selected as the mined parking space data.
In one embodiment, the step 105 includes:
sorting all the images based on the difference score of each image to obtain an image sequence;
and outputting, as the parking space data, the K images with the largest image recognition difference in the image sequence, wherein K is a pre-designated number of mined images.
In this optional embodiment, the images are ranked according to their difference scores, and the K images with the largest image recognition difference are then selected from the image sequence as the mined parking space data. For example, if the magnitude of the difference score is negatively correlated with the magnitude of the image difference (the smaller the score, the larger the difference), the K images with the smallest scores, taken in ascending order of difference score, are selected as the mined parking space data, that is, as training sample images for the parking space detection model.
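The sort-and-select step can be sketched as follows, assuming the score of the formula above (negatively correlated with the difference, so the K smallest scores are mined); the function and variable names are illustrative:

```python
def mine_top_k(image_scores, k):
    """image_scores: iterable of (image_id, difference_score) pairs.

    Sort ascending by score and return the K image ids with the
    smallest scores, i.e. the largest recognition differences."""
    ranked = sorted(image_scores, key=lambda item: item[1])
    return [image_id for image_id, _ in ranked[:k]]
```

For a positively correlated score, the sort order would simply be reversed.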
In this embodiment, each image in the acquired parking space image set is input into the first deep learning model and the second deep learning model respectively for parking space recognition, and a first recognition result and a second recognition result are correspondingly output; the number of parking spaces of each difference type between the first recognition result and the second recognition result is counted for each image; a difference score is calculated for each image according to the number of parking spaces of each difference type; finally, the images whose difference scores meet the preset condition are output as the mined parking space data. The method requires no manual intervention and can automatically process massive data, thereby saving considerable cost. In addition, because the process involves no subjective judgment introduced by manual screening, the mined data is of higher value and the model training effect is better.
In one embodiment, the step 103 includes:
1031. comparing the first recognition result and the second recognition result corresponding to each image respectively;
1032. according to the comparison result, determining the respective matched parking spaces and the respective unmatched parking spaces of the first deep learning model and the second deep learning model in each image;
1033. and counting the matching situations of the parking spaces identified by the first deep learning model and the second deep learning model in each image to obtain the number of the parking spaces with different difference types in each image.
In this embodiment, the differences between the recognition results of different models are used to evaluate the mining value of different images. The difference manifests as whether the parking space recognition results for the same position in the same image match (are the same or approximately the same): if they match, the image shows no difference or only a small difference; if they do not match, the image shows a large difference.
In this embodiment, by comparing the recognition results of the different models, the parking spaces on which the first deep learning model and the second deep learning model agree in each image, and the parking spaces on which they do not agree, are determined. For example, suppose image A actually contains 5 parking spaces, the first deep learning model identifies 4 parking spaces, and the second deep learning model also identifies 4 parking spaces. By comparing the two recognition results, it is determined that parking spaces 2-4 are matched and parking spaces 1 and 5 are unmatched, i.e. image A contains 3 matched parking spaces and 2 unmatched parking spaces. According to the definition of the difference types in the above embodiment: the parking spaces of the first difference type are spaces 2-4, numbering 3; the parking space of the second difference type is space 5, numbering 1; the parking space of the third difference type is space 1, numbering 1.
In one embodiment, the step 1031 includes:
10311. respectively comparing whether a first parking space in the first recognition result corresponding to each image is matched with a second parking space in the second recognition result;
10312. if they are matched, determining that the first deep learning model and the second deep learning model have identified a parking space at the same position in the same image;
10313. if they are not matched, determining that the first deep learning model and the second deep learning model have identified parking spaces at different positions in the same image.
In this embodiment, suppose that for image A the first recognition result contains spaces A1, A2, A3 and the second recognition result contains spaces B1, B2, B3. A1 must then be compared with B1, B2 and B3, A2 with B1, B2 and B3, and A3 with B1, B2 and B3, and the matched spaces are determined from the comparison results. Specifically, if a comparison finds a matched pair of parking spaces, the first deep learning model and the second deep learning model have identified a parking space at the same position in the same image; otherwise, they have identified parking spaces at different positions in the same image.
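The pairwise comparison in this example can be sketched as follows. This is a hypothetical illustration; `is_match` stands for the overlap-and-angle test described in the later embodiment, and a simple greedy pairing is assumed.

```python
def split_matches(result_a, result_b, is_match):
    """Greedily pair each space in the first result with a matching space
    in the second result; return the matched pairs and the leftover
    (unmatched) spaces of each model."""
    matched, unmatched_a = [], []
    unmatched_b = list(result_b)
    for space_a in result_a:
        partner = next((s for s in unmatched_b if is_match(space_a, s)), None)
        if partner is None:
            unmatched_a.append(space_a)         # only the first model saw it
        else:
            matched.append((space_a, partner))  # both models saw this space
            unmatched_b.remove(partner)
    return matched, unmatched_a, unmatched_b
```

With A1-A3 and B1-B3 as inputs, each Ai is compared against every remaining Bj, exactly as in the example above.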
In one embodiment, the step 10311 includes:
103111. respectively calculating, with the first recognition result as the comparison standard, the overlapping area of a first parking space in the first recognition result and a second parking space in the second recognition result corresponding to each image;
103112. judging whether the overlapping area is larger than a preset area threshold;
103113. if the overlapping area is larger than the preset area threshold, judging whether the included angle between the first parking space and the second parking space is smaller than a preset included angle threshold;
103114. if the included angle is smaller than the preset included angle threshold, determining that the first parking space matches the second parking space; otherwise, determining that the first parking space does not match the second parking space.
In this embodiment, parking space area together with parking space orientation is used to determine whether a first parking space in the first recognition result matches a second parking space in the second recognition result. Specifically, the area of a parking space is calculated from the four vertex coordinates in the recognition result, and the included angle between the first parking space and the second parking space is calculated from the central axes of the spaces. If the overlapping area of the first parking space and the second parking space is larger than a preset area threshold, and the included angle between them is smaller than a preset included angle threshold (i.e. the two spaces have the same orientation), the first parking space is determined to match the second parking space; otherwise they do not match. The preset area threshold and the preset included angle threshold can be set based on the typical parking space area and orientation at the sampling site.
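The matching test of this embodiment can be sketched as follows. This is a simplified illustration, not the disclosed implementation: the overlap is approximated with axis-aligned bounding boxes of the four vertices (a full quadrilateral intersection would be computed in practice), the orientation is taken from the central axis between two opposite edges, and both threshold values are arbitrary placeholders.

```python
import math

def bbox(vertices):
    """Axis-aligned bounding box of a quadrilateral given as 4 (x, y) points."""
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    return min(xs), min(ys), max(xs), max(ys)

def overlap_area(verts_a, verts_b):
    """Approximate overlap of two spaces via their bounding boxes."""
    ax0, ay0, ax1, ay1 = bbox(verts_a)
    bx0, by0, bx1, by1 = bbox(verts_b)
    w = min(ax1, bx1) - max(ax0, bx0)
    h = min(ay1, by1) - max(ay0, by0)
    return w * h if w > 0 and h > 0 else 0.0

def axis_angle(vertices):
    """Orientation of the central axis: from the midpoint of edge (v0, v1)
    to the midpoint of the opposite edge (v3, v2); vertices assumed ordered."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = vertices
    mx0, my0 = (x0 + x1) / 2, (y0 + y1) / 2
    mx1, my1 = (x3 + x2) / 2, (y3 + y2) / 2
    return math.atan2(my1 - my0, mx1 - mx0)

def spaces_match(verts_a, verts_b, area_thresh=4.0,
                 angle_thresh=math.radians(10)):
    """Area test first, then orientation test, as in steps 103112-103114."""
    if overlap_area(verts_a, verts_b) <= area_thresh:
        return False
    diff = abs(axis_angle(verts_a) - axis_angle(verts_b)) % (2 * math.pi)
    diff = min(diff, 2 * math.pi - diff)  # wrap angle difference to [0, pi]
    return diff < angle_thresh
```

Checking the angle only after the area test passes mirrors the ordering of the steps above, which avoids the angle computation for clearly disjoint spaces.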
In one embodiment, the step 1033 includes:
10331. counting the parking spaces identified by the first deep learning model and the second deep learning model in the same position in each image to obtain the number of the parking spaces belonging to the first difference type in each image;
10332. counting the parking spaces which are identified by the first deep learning model and are not identified by the second deep learning model at the same position in each image, and obtaining the number of the parking spaces belonging to the second difference type in each image;
10333. and counting the parking spaces which are not recognized by the first deep learning model and are recognized by the second deep learning model at the same position in each image, and obtaining the number of the parking spaces belonging to the third difference type in each image.
In this embodiment, according to the matched parking spaces and the unmatched parking spaces of the first deep learning model and the second deep learning model in each image obtained in step 1032, the positions at which the two models identify the same parking space and the positions at which they identify different parking spaces in the same image are determined, and then the number TP of parking spaces belonging to the first difference type, the number FP of parking spaces belonging to the second difference type and the number FN of parking spaces belonging to the third difference type in each image are counted respectively.
For example, suppose image A actually contains 5 parking spaces, the first deep learning model identifies 4 parking spaces, and the second deep learning model also identifies 4 parking spaces. Comparing the two recognition results shows that parking spaces 2-4 are matched and parking spaces 1 and 5 are unmatched, i.e. image A contains 3 matched parking spaces and 2 unmatched parking spaces. According to the definition of the difference types in the above embodiment: the parking spaces of the first difference type are spaces 2-4, so TP is 3; the parking space of the second difference type is space 5, so FP is 1; the parking space of the third difference type is space 1, so FN is 1. Then, according to the calculation formula of the difference score in the above embodiment, the difference score of image A is:
score=TP/(TP+FP+FN)=3/(3+1+1)=0.6;
Finally, the images whose difference scores satisfy the condition are taken as the mined parking space data for training a parking space detection model.
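The counting and scoring of image A above can be reproduced in a short sketch (names are illustrative; TP, FP and FN follow the difference-type definitions of this embodiment):

```python
def count_difference_types(matched, first_only, second_only):
    """TP: spaces both models identified; FP: spaces only the first model
    identified; FN: spaces only the second model identified."""
    return len(matched), len(first_only), len(second_only)

def difference_score(tp, fp, fn):
    """score = TP / (TP + FP + FN)."""
    return tp / (tp + fp + fn)

# Image A: spaces 2-4 matched, space 5 seen only by the first model,
# space 1 seen only by the second model.
tp, fp, fn = count_difference_types(["2", "3", "4"], ["5"], ["1"])
print(difference_score(tp, fp, fn))  # 0.6
```

A lower score therefore means a larger disagreement between the two models, i.e. a more valuable image to mine.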
In this embodiment, automatic recognition of the acquired images is achieved through the pre-trained parking space detection models, quality evaluation of the acquired images is achieved by comparing the parking space recognition results output by the models, and objective screening of the acquired images is achieved through the difference score calculation. Massive images can thus be automatically recognized and high-quality images automatically mined for model training, which reduces the mining cost and improves the model training effect.
The method for mining parking space data in the embodiments of the present invention is described above. The following describes a parking space data mining device in the embodiments of the present invention. Referring to fig. 2, one embodiment of the parking space data mining device in the embodiments of the present invention includes:
an acquisition module 201, configured to acquire a plurality of images including a plurality of parking spaces;
the recognition module 202 is configured to input each image into the first deep learning model and the second deep learning model in turn to perform parking space recognition, and correspondingly output a first recognition result and a second recognition result;
the statistics module 203 is configured to respectively count the number of parking spaces of different difference types in the first recognition result and the second recognition result corresponding to each image;
the calculating module 204 is configured to calculate a difference score of each image according to the number of parking spaces of each difference type;
and the output module 205 is configured to take the image with the difference score meeting the preset condition as parking space data and output the image.
In one embodiment, the statistics module 203 includes:
the comparison unit is used for respectively comparing the first identification result and the second identification result corresponding to each image;
the determining unit is used for determining the respective matched parking spaces and the respective unmatched parking spaces of the first deep learning model and the second deep learning model in each image according to the comparison result;
the statistics unit is used for counting the parking space matching situations respectively identified by the first deep learning model and the second deep learning model in each image to obtain the number of the parking spaces with different difference types in each image.
In an embodiment, the comparing unit is specifically configured to:
respectively comparing whether a first parking space in the first recognition result corresponding to each image is matched with a second parking space in the second recognition result;
if they are matched, determining that the first deep learning model and the second deep learning model have identified a parking space at the same position in the same image;
if they are not matched, determining that the first deep learning model and the second deep learning model have identified parking spaces at different positions in the same image.
In an embodiment, the comparing unit is further configured to:
respectively calculating the overlapping area of a first parking space in the first recognition result and a second parking space in the second recognition result corresponding to each image by taking the first recognition result as a comparison standard;
judging whether the overlapping area is larger than a preset area threshold value or not;
if the overlapping area is larger than a preset area threshold, judging whether the included angle between the first parking space and the second parking space is smaller than a preset included angle threshold;
and if the included angle is smaller than a preset included angle threshold, determining that the first parking space is matched with the second parking space, otherwise, determining that the first parking space is not matched with the second parking space.
In an embodiment, the statistics unit is specifically configured to:
counting the parking spaces identified by the first deep learning model and the second deep learning model in the same position in each image to obtain the number of the parking spaces belonging to the first difference type in each image;
counting the parking spaces which are identified by the first deep learning model and are not identified by the second deep learning model at the same position in each image, and obtaining the number of the parking spaces belonging to the second difference type in each image;
and counting the parking spaces which are not recognized by the first deep learning model and are recognized by the second deep learning model at the same position in each image, and obtaining the number of the parking spaces belonging to the third difference type in each image.
In one embodiment, the difference score for each image is calculated as follows:
score=TP/(TP+FP+FN);
wherein score represents the difference score of the image, TP represents the number of parking spaces of the first difference type, FP represents the number of parking spaces of the second difference type, and FN represents the number of parking spaces of the third difference type.
In one embodiment, the output module 205 is configured to:
sorting all the images based on the difference score of each image to obtain an image sequence;
and outputting the K images with the largest image recognition difference in the image sequence as the parking space data, wherein K is a pre-designated number of images to mine.
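The selection step of the output module can be sketched as follows. This is an illustrative sketch: since score = TP/(TP+FP+FN), a lower score means a larger recognition difference, so the K lowest-scoring images are the K most different ones.

```python
def select_top_k(scored_images, k):
    """scored_images: (image, difference_score) pairs.
    Return the K images with the largest recognition difference,
    i.e. the K lowest scores."""
    ordered = sorted(scored_images, key=lambda pair: pair[1])
    return [image for image, _ in ordered[:k]]
```

For example, `select_top_k([("img_a", 0.9), ("img_b", 0.2)], 1)` would return the single image on which the two models disagree the most, `["img_b"]`.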
Since the device embodiments correspond to the method embodiments, for the description of the parking space data mining device provided by the invention, reference may be made to the method embodiments, which are not repeated herein; the device has the same beneficial effects as the parking space data mining method.
The parking space data mining device in the embodiment of the present invention is described in detail above in fig. 2 from the point of view of modularized functional entities, and the computer device in the embodiment of the present invention is described in detail below from the point of view of hardware processing.
Fig. 3 is a schematic diagram of a computer device according to an embodiment of the present invention. The computer device 500 may vary considerably in configuration or performance, and may include one or more processors (central processing units, CPU) 510 (e.g., one or more processors), a memory 520, and one or more storage media 530 (e.g., one or more mass storage devices) storing application programs 533 or data 532. The memory 520 and the storage medium 530 may be transitory or persistent storage. The program stored in the storage medium 530 may include one or more modules (not shown), each of which may include a series of instruction operations for the computer device 500. Still further, the processor 510 may be arranged to communicate with the storage medium 530 to execute, on the computer device 500, the series of instruction operations in the storage medium 530.
The computer device 500 may also include one or more power supplies 540, one or more wired or wireless network interfaces 550, one or more input/output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. Those skilled in the art will appreciate that the computer device architecture shown in FIG. 3 does not limit the computer device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The invention also provides a computer device, which comprises a memory and a processor, wherein the memory stores computer readable instructions, and the computer readable instructions, when executed by the processor, cause the processor to execute the steps of the parking space data mining method in the above embodiments.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium or a volatile computer readable storage medium, and which stores instructions that, when run on a computer, cause the computer to perform the steps of the parking space data mining method.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A parking space data mining method, characterized by comprising the following steps:
acquiring a plurality of images containing a plurality of parking spaces;
inputting each image into a first deep learning model and a second deep learning model in turn to perform parking space recognition, and correspondingly outputting a first recognition result and a second recognition result;
respectively counting the number of the parking spaces with different difference types in the first identification result and the second identification result corresponding to each image;
calculating the difference score of each image according to the number of the parking spaces of each difference type;
and taking the image with the difference score meeting the preset condition as parking space data and outputting the parking space data.
2. The method for mining parking space data according to claim 1, wherein the counting the number of the parking spaces of different difference types in the first recognition result and the second recognition result corresponding to each image respectively includes:
comparing the first recognition result and the second recognition result corresponding to each image respectively;
according to the comparison result, determining the respective matched parking spaces and the respective unmatched parking spaces of the first deep learning model and the second deep learning model in each image;
and counting the matching situations of the parking spaces identified by the first deep learning model and the second deep learning model in each image to obtain the number of the parking spaces with different difference types in each image.
3. The parking space data mining method according to claim 2, wherein the comparing the first recognition result and the second recognition result corresponding to each image respectively includes:
respectively comparing whether a first parking space in the first recognition result corresponding to each image is matched with a second parking space in the second recognition result;
if they are matched, determining that the first deep learning model and the second deep learning model have identified a parking space at the same position in the same image;
if they are not matched, determining that the first deep learning model and the second deep learning model have identified parking spaces at different positions in the same image.
4. The parking space data mining method according to claim 3, wherein the comparing whether the first parking space in the first recognition result and the second parking space in the second recognition result corresponding to each image are matched respectively includes:
respectively calculating the overlapping area of a first parking space in the first recognition result and a second parking space in the second recognition result corresponding to each image by taking the first recognition result as a comparison standard;
judging whether the overlapping area is larger than a preset area threshold value or not;
if the overlapping area is larger than a preset area threshold, judging whether the included angle between the first parking space and the second parking space is smaller than a preset included angle threshold;
and if the included angle is smaller than a preset included angle threshold, determining that the first parking space is matched with the second parking space, otherwise, determining that the first parking space is not matched with the second parking space.
5. The method of mining parking space data according to claim 3, wherein the counting the matching situations of the parking spaces identified by the first deep learning model and the second deep learning model in each image to obtain the number of the parking spaces with different difference types in each image includes:
counting the parking spaces identified by the first deep learning model and the second deep learning model in the same position in each image to obtain the number of the parking spaces belonging to the first difference type in each image;
counting the parking spaces which are identified by the first deep learning model and are not identified by the second deep learning model at the same position in each image, and obtaining the number of the parking spaces belonging to the second difference type in each image;
and counting the parking spaces which are not recognized by the first deep learning model and are recognized by the second deep learning model at the same position in each image, and obtaining the number of the parking spaces belonging to the third difference type in each image.
6. The method of claim 5, wherein the difference score of each image is calculated as follows:
score=TP/(TP+FP+FN);
wherein score represents the difference score of the image, TP represents the number of parking spaces of the first difference type, FP represents the number of parking spaces of the second difference type, and FN represents the number of parking spaces of the third difference type.
7. The parking space data mining method according to claim 1, wherein the outputting of the image in which the difference score satisfies a preset condition as the parking space data includes:
sorting all the images based on the difference score of each image to obtain an image sequence;
and outputting the K images with the largest image recognition difference in the image sequence as the parking space data, wherein K is a pre-designated number of images to mine.
8. A parking space data mining device, characterized in that the parking space data mining device comprises:
the acquisition module is used for acquiring a plurality of images containing a plurality of parking spaces;
the recognition module is used for inputting each image into the first deep learning model and the second deep learning model respectively for parking space recognition and correspondingly outputting a first recognition result and a second recognition result;
the statistics module is used for respectively counting the number of the parking spaces with different difference types in the first identification result and the second identification result corresponding to each image;
the calculating module is used for calculating the difference score of each image according to the number of the parking spaces of each difference type;
and the output module is used for taking the image with the difference score meeting the preset condition as the parking space data and outputting the parking space data.
9. A computer device, the computer device comprising: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invoking the instructions in the memory to cause the computer device to perform the parking space data mining method of any of claims 1-7.
10. A computer readable storage medium having instructions stored thereon, characterized in that the instructions, when executed by a processor, implement the parking space data mining method of any one of claims 1-7.
CN202311302035.6A 2023-10-09 2023-10-09 Parking space data mining method, device, equipment and storage medium Pending CN117593720A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311302035.6A CN117593720A (en) 2023-10-09 2023-10-09 Parking space data mining method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117593720A true CN117593720A (en) 2024-02-23

Family

ID=89917180


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination