CN114463252A - Parking space occupation detection method, detection device and computer readable storage medium
- Publication number
- CN114463252A (application CN202111572078.7A)
- Authority
- CN
- China
- Prior art keywords
- parking space
- vehicle
- target
- determining
- intersection point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/60—Rotation of whole images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Abstract
The application discloses a parking space occupancy detection method, a detection device, and a computer-readable storage medium. The detection method comprises: acquiring a detection image, wherein the detection image comprises at least one parking space region; determining, with a target detection model, a target detection frame and a rotation angle corresponding to a target vehicle in the detection image, wherein the target detection frame is rectangular and the rotation angle represents the angle between a geometric characteristic line of the vehicle and a geometric characteristic line of the target detection frame; determining a vehicle region from the target detection frame based on the rotation angle, the vehicle region being smaller than the target detection frame; and determining a parking space occupancy result based on the vehicle region and the parking space region. In this way, the vehicle region can be located more accurately and the occupancy result determined from it, so that parking spaces correspond to vehicles more precisely and the accuracy of the occupancy judgment is improved.
Description
Technical Field
The present application relates to the field of parking space occupancy detection, and in particular to a parking space occupancy detection method, a detection device, and a computer-readable storage medium.
Background
With the growing number of automobiles and the rapid development and application of computer vision, video analysis algorithms are now widely applied to practical tasks such as parking space monitoring, improving the efficiency of parking lot management.
The inventor has found that, owing to the planning of parking spaces and the placement angle of the monitoring camera, there is a risk of misjudgment, which reduces the accuracy of the bound parking space state.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a parking space occupancy detection method, a detection device, and a computer-readable storage medium that improve the accuracy of determining the vehicle region and, in turn, the parking space occupancy result, so that parking spaces correspond to vehicles more precisely and the occupancy judgment is more accurate.
To solve the above problem, one technical solution adopted by the present application is a parking space occupancy detection method comprising: acquiring a detection image, wherein the detection image comprises at least one parking space region; determining, with a target detection model, a target detection frame and a rotation angle corresponding to a target vehicle in the detection image, wherein the target detection frame is rectangular and the rotation angle represents the angle between a geometric characteristic line of the vehicle and a geometric characteristic line of the target detection frame; determining a vehicle region from the target detection frame based on the rotation angle, the vehicle region being smaller than the target detection frame; and determining a parking space occupancy result based on the vehicle region and the parking space region.
The rotation angle comprises a first rotation angle and a second rotation angle, and determining the vehicle region from the target detection frame based on the rotation angle comprises: determining a diagonal of the target detection frame; rotating the diagonal according to the first rotation angle and the second rotation angle so that it intersects the target detection frame; and taking the region formed by the points of intersection with the target detection frame as the vehicle region.
The target detection frame comprises a first side, a second side, a third side, and a fourth side connected in sequence, and rotating the diagonal according to the first rotation angle and the second rotation angle comprises: rotating the diagonal clockwise about the center point by the first rotation angle so that it intersects the first side at a first intersection point and the third side at a second intersection point; determining the connecting line between the first intersection point and the second intersection point; rotating the connecting line clockwise about the second intersection point by the second rotation angle so that it intersects the second side at a third intersection point; and rotating the connecting line counterclockwise about the first intersection point by the second rotation angle so that it intersects the fourth side at a fourth intersection point. Taking the region formed by the points of intersection with the target detection frame as the vehicle region then comprises connecting the first, third, second, and fourth intersection points and taking the region they enclose as the vehicle region.
The method further comprises: acquiring a training image, wherein the training image is annotated with the target detection frame of the target vehicle and with ground-truth information, the ground-truth information comprising the intersection points of the true frame of the target vehicle with the target detection frame; detecting the training image with the target detection model to obtain detection information of the target vehicle, the detection information comprising the final detection frame of the target vehicle; and adjusting the network parameters of the target detection model according to the difference between the ground-truth information and the detection information of the target vehicle.
The target detection model comprises a feature extraction network and a classification layer, and detecting the training image with the target detection model to obtain the detection information of the target vehicle comprises: inputting the training image into the feature extraction network to obtain a multi-dimensional feature map; and inputting the multi-dimensional feature map into the classification layer to obtain the detection information of the target vehicle in the training image.
Inputting the training image into the feature extraction network to obtain the multi-dimensional feature map comprises: downsampling the training image N times in sequence with the feature extraction network to obtain N initial feature maps, where N is greater than 2; and performing the (i+1)-th upsampling based on the (N-i)-th initial feature map to obtain the (i+1)-th final feature map, where i is an integer from 0 to N-1.
The feature extraction network is a feature pyramid network FPN comprising a plurality of upsampling layers and downsampling layers corresponding to the upsampling layers, and each upsampling layer and each downsampling layer contains convolutions of different resolutions.
Determining the parking space occupancy result based on the vehicle region and the parking space region comprises: determining at least one target parking space region corresponding to the vehicle region; determining the intersection ratio of the vehicle region and the at least one target parking space region; and determining the parking space occupancy result based on the intersection ratio.
Determining the at least one target parking space region corresponding to the vehicle region comprises: acquiring at least one preconfigured target parking space region; or determining at least one target parking space region with an image processing algorithm; or determining at least one target parking space region with the target detection model.
Determining the intersection ratio of the vehicle region and the at least one target parking space region comprises determining the intersection ratio of the vehicle region with each target parking space region. Determining the parking space occupancy result based on the intersection ratio comprises: if an intersection ratio is greater than or equal to a first preset value, determining that the corresponding target parking space region is occupied; and if every intersection ratio is smaller than the first preset value but two intersection ratios are larger than a second preset value, determining that two adjacent target parking space regions are occupied, the second preset value being smaller than the first preset value.
To solve the above problem, another technical solution adopted by the present application is a parking space occupancy detection device comprising a processor and a memory coupled to the processor, wherein the memory stores a computer program and the processor executes the computer program to implement the method provided by the above technical solution.
To solve the above problem, a further technical solution adopted by the present application is a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method provided by the above technical solution.
The beneficial effects of the present application are as follows. Unlike the prior art, the provided detection method acquires a detection image containing at least one parking space region; determines, with a target detection model, a target detection frame and a rotation angle corresponding to a target vehicle in the detection image, the target detection frame being rectangular and the rotation angle representing the angle between a geometric characteristic line of the vehicle and a geometric characteristic line of the frame; determines a vehicle region smaller than the target detection frame from the frame based on the rotation angle; and determines the parking space occupancy result from the vehicle region and the parking space region. Using the determined rotation angle to carve the vehicle region out of the target detection frame improves the accuracy of the vehicle region and hence of the occupancy result, so that parking spaces correspond to vehicles more precisely and the occupancy judgment is more accurate.
Drawings
Fig. 1 is a schematic flowchart of an embodiment of a method for detecting parking space occupancy according to the present application;
FIG. 2 is a schematic view of a detection image without vehicles provided by an embodiment of the present application;
fig. 3 and fig. 4 are schematic views of an application scenario of the detection method for parking space occupancy provided by the present application;
fig. 5 and fig. 6 are schematic diagrams of another application scenario of the detection method for parking space occupancy provided by the present application;
fig. 7 is a schematic flowchart of another embodiment of a method for detecting parking space occupancy according to the present application;
FIG. 8 is a schematic flow chart diagram illustrating one embodiment of step 55 provided herein;
fig. 9 is a schematic view of another application scenario of the detection method for parking space occupancy provided in the present application;
fig. 10 is a schematic flowchart of another embodiment of a method for detecting parking space occupancy according to the present application;
FIG. 11 is a schematic flow chart diagram illustrating one embodiment of steps 85 and 86 provided herein;
fig. 12 is a schematic view of another application scenario of the detection method for parking space occupancy provided in the present application;
fig. 13 is a schematic flowchart of another embodiment of a method for detecting parking space occupancy according to the present application;
FIG. 14 is a schematic flow chart diagram illustrating one embodiment of a target detection model training process provided herein;
FIG. 15 is a schematic flow chart diagram illustrating an embodiment of step 122 provided herein;
FIG. 16 is a schematic structural diagram of an embodiment of a feature extraction network provided herein;
FIG. 17 is a schematic diagram of one embodiment of a convolutional layer in a feature extraction network provided herein;
fig. 18 is a schematic view of another application scenario of the detection method for parking space occupancy provided in the present application;
fig. 19 is a schematic view of another application scenario of the detection method for parking space occupancy provided in the present application;
FIG. 20 is a diagram illustrating the loss training result with non-silent angle weights provided by the present application;
FIG. 21 is a diagram illustrating the loss training result with silent angle weights provided by the present application;
FIG. 22 is another diagram illustrating the loss training result with silent angle weights provided by the present application;
fig. 23 is a schematic structural diagram of an embodiment of the parking space occupancy detection device provided by the present application;
FIG. 24 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The inventor has found that, depending on the camera installation angle and the distribution of the parking areas, outdoor parking lot scenes fall into two cases: in the image, the parking space lines are either horizontal or non-horizontal with respect to the camera's viewing angle. When the viewing angle is horizontal, the head, tail, or side of the vehicle faces the camera, and the circumscribed rectangular frame of the vehicle matches the parking space lines closely. When the viewing angle is non-horizontal, vehicles occlude one another and the parking space lines turn from horizontal rectangles into slanted quadrilaterals; the larger the angle, the more severe the occlusion and the steeper the lines, so the circumscribed rectangular frame of a vehicle can span several parking space lines. In that case, whether a parking space line is bound to the center point of the vehicle's circumscribed rectangular frame or the rectangular frame is bound to the center point of a parking space line, misjudgments can occur and the accuracy of the bound parking space state drops. On this basis, the present application proposes the following technical solutions to solve the above technical problems.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a method for detecting parking space occupancy provided by the present application. The method comprises the following steps:
step 11: and acquiring a detection image, wherein the detection image comprises at least one parking space area.
In some embodiments, an image acquisition device may be used, for example a camera that captures an image of the target area, the captured image serving as the detection image. At least one parking space region is provided on the target area. As shown in fig. 2, the detection image includes parking space regions A, B, C, D, E, F, G, H, I, J, K, and L, and a vehicle can be parked in each of them. In fig. 2 the parking space lines of the regions are not aligned with one another, so the orientations of parked vehicles differ as well.
Step 12: determining a target detection frame and a rotation angle corresponding to a target vehicle in a detection image by using a target detection model; the target detection frame is rectangular, and the rotation angle represents the angle between the geometric characteristic line of the vehicle and the geometric characteristic line of the target detection frame.
If a vehicle is parked in the area covered by the detection image, the target detection frame and the rotation angle corresponding to that target vehicle can be determined.
In some implementations, the target detection model is used to determine the target detection frame and the rotation angle corresponding to the target vehicle in the detection image. The rotation angle may be, for example, the angle between a diagonal of the vehicle and a diagonal of the rectangle, or the angle between one side of the vehicle and one side of the rectangle.
Step 13: a vehicle area is determined from the target detection frame based on the rotation angle, the vehicle area being smaller than the target detection frame.
The target detection frame is rectangular; it can be the circumscribed rectangular frame of the target vehicle. From the rotation angle, the actual region of the target vehicle can be determined.
In some implementations, the following is described in conjunction with fig. 3 and 4:
the rectangle abcd of the target detection frame is shown in fig. 3, and includes a first side ab, a second side bc, a third side cd, and a fourth side da connected in sequence. The diagonal line is ac.
The diagonal is rotated clockwise about the center point O by the rotation angle α, intersecting the first side ab at a first intersection point a' and the third side cd at a second intersection point c'.
The diagonal is rotated counterclockwise about the center point O by the rotation angle α, intersecting the second side bc at a third intersection point b' and the fourth side da at a fourth intersection point d'.
As shown in fig. 4, the first intersection point a', the third intersection point b', the second intersection point c', and the fourth intersection point d' are connected in sequence, and the region they enclose, the quadrilateral a'b'c'd', is taken as the vehicle region.
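For illustration, the construction of figs. 3 and 4 can be sketched in a few lines of Python. This is a minimal sketch, not code from the patent: the corner layout (a top-left, b top-right, c bottom-right, d bottom-left), image coordinates with y pointing down, and the sense of "clockwise" are all assumptions.

```python
import math

def _intersect_with_box(px, py, theta, x1, y1, x2, y2):
    """Intersect the line through (px, py) with direction angle theta
    against the axis-aligned box [x1, x2] x [y1, y2]; return the two
    boundary points, leftmost first."""
    dx, dy = math.cos(theta), math.sin(theta)
    hits = []
    if abs(dy) > 1e-9:                      # top edge y = y1, bottom edge y = y2
        for y in (y1, y2):
            t = (y - py) / dy
            x = px + t * dx
            if x1 - 1e-9 <= x <= x2 + 1e-9:
                hits.append((x, y))
    if abs(dx) > 1e-9:                      # left edge x = x1, right edge x = x2
        for x in (x1, x2):
            t = (x - px) / dx
            y = py + t * dy
            if y1 - 1e-9 <= y <= y2 + 1e-9:
                hits.append((x, y))
    hits.sort()                             # duplicates only occur at corners
    return hits[0], hits[-1]

def vehicle_quad_single_angle(box, alpha_deg):
    """Figs. 3-4: rotate the diagonal ac by +/-alpha about the centre O and
    take the four intersections with the box as the quadrilateral."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    diag = math.atan2(y2 - y1, x2 - x1)     # direction of diagonal a -> c
    alpha = math.radians(alpha_deg)
    a_, c_ = _intersect_with_box(cx, cy, diag + alpha, x1, y1, x2, y2)
    d_, b_ = _intersect_with_box(cx, cy, diag - alpha, x1, y1, x2, y2)
    return [a_, b_, c_, d_]                 # vehicle quadrilateral a', b', c', d'
```

For small α the rotated diagonals meet the sides named in the text; for larger α they may meet other sides, which the helper absorbs by simply returning whichever two boundary points the line passes through.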
In other embodiments, the description is made in conjunction with fig. 5 and 6:
The rectangle abcd of the target detection frame, shown in fig. 5, includes a first side ab, a second side bc, a third side cd, and a fourth side da connected in sequence; its diagonal is ac.
The diagonal is rotated clockwise about the center point O by the rotation angle α, intersecting the first side ab at a first intersection point a' and the third side cd at a second intersection point c'.
Then a point b' is found on the second side bc and a point d' on the fourth side da such that c'b' is perpendicular to a'b' and c'd' is perpendicular to a'd', as shown in fig. 6. Connecting a', b', c', and d' in sequence, the rectangular region a'b'c'd' is taken as the vehicle region.
Step 14: determining a parking space occupancy result based on the vehicle region and the parking space region.
The image of the parking space regions is captured in advance, after the parking spaces are planned. Once the vehicle region is determined, the parking space region where it lies can be determined, specifically from image coordinates: if the image acquisition device is fixed, the image coordinates of the parking space regions are fixed, and once the vehicle region is determined its image coordinates are fixed as well. Whether a parking space is occupied can therefore be determined from the overlap between the parking space region and the vehicle region.
In one application scenario, this embodiment is applied to a parking lot whose entrance is provided with a display screen. The display screen shows the parking space information of the lot, for example which spaces are idle. When a vehicle enters the parking lot and parks, the steps of this embodiment determine the specific parking space it occupies; if the vehicle is not parked within a parking space region, a warning is issued. The parking lot may also divide its parking spaces into zones, such as three floors, each floor corresponding to a plurality of spaces. After the remaining spaces on each floor are determined in the above manner, a newly arriving vehicle can be directed to the floor with more remaining spaces.
In this embodiment, a detection image containing at least one parking space region is acquired; a target detection model determines the target detection frame and the rotation angle corresponding to the target vehicle in the detection image, the target detection frame being rectangular and the rotation angle representing the angle between a geometric characteristic line of the vehicle and a geometric characteristic line of the frame; a vehicle region smaller than the target detection frame is determined from the frame based on the rotation angle; and the parking space occupancy result is determined from the vehicle region and the parking space region. Using the determined rotation angle to obtain the vehicle region from the target detection frame improves the accuracy of the vehicle region and hence of the occupancy result, so that parking spaces correspond to vehicles more precisely and the occupancy judgment is more accurate.
Referring to fig. 7, fig. 7 is a schematic flow chart of another embodiment of the method for detecting parking space occupancy according to the present application. The method comprises the following steps:
step 51: and acquiring a detection image, wherein the detection image comprises at least one parking space area.
Step 52: determining a target detection frame and a rotation angle corresponding to a target vehicle in a detection image by using a target detection model; the target detection frame is rectangular, and the rotation angle represents the angle between the geometric characteristic line of the vehicle and the geometric characteristic line of the target detection frame.
Step 53: determining the diagonal of the target detection frame.
Because the target detection frame is rectangular, the diagonal line of the target detection frame can be determined.
Step 54: rotating the diagonal according to the first rotation angle and the second rotation angle so that it intersects the target detection frame.
Step 55: taking the region formed by the points of intersection with the target detection frame as the vehicle region.
In some embodiments, referring to fig. 8, steps 54 and 55 may be the following flow:
step 61: the control diagonal line rotates clockwise according to a first rotation angle with the center point as a reference, intersects the first edge at a first intersection point, and intersects the third edge at a second intersection point.
The target detection frame comprises a first edge, a second edge, a third edge and a fourth edge which are connected in sequence.
Step 62: determining the connecting line between the first intersection point and the second intersection point.
Step 63: rotating the connecting line clockwise about the second intersection point by the second rotation angle so that it intersects the second side at a third intersection point, and rotating the connecting line counterclockwise about the first intersection point by the second rotation angle so that it intersects the fourth side at a fourth intersection point.
Step 64: connecting the first intersection point, the third intersection point, the second intersection point, and the fourth intersection point in sequence, and taking the region they form as the vehicle region.
In an application scenario, a rectangle abcd of the target detection frame is shown in fig. 9 and includes a first side ab, a second side bc, a third side cd, and a fourth side da connected in sequence. The diagonal line is ac.
The diagonal is rotated clockwise about the center point O by a first rotation angle α, intersecting the first side ab at a first intersection point a' and the third side cd at a second intersection point c'.
The connecting line a'c' between the first intersection point and the second intersection point is determined.
The connecting line a'c' is rotated clockwise about the second intersection point c' by a second rotation angle β, intersecting the second side bc at a third intersection point b'; and the connecting line a'c' is rotated counterclockwise about the first intersection point a' by the second rotation angle β, intersecting the fourth side da at a fourth intersection point d'.
The first intersection point a', the third intersection point b', the second intersection point c', and the fourth intersection point d' are connected in sequence, and the region they form, the quadrilateral a'b'c'd', is taken as the vehicle region.
The above process is explained with reference to fig. 9: rotating clockwise by the first rotation angle α gives the intersection point a' on the first side ab and the intersection point c' on the third side cd, forming the connecting line a'c'. Rotating the connecting line a'c' clockwise by the second rotation angle β about the second intersection point c' so that it meets the second side bc at the third intersection point b' forms b'c' and a'b'; rotating a'c' counterclockwise by β about the first intersection point a' so that it meets the fourth side da at the fourth intersection point d' forms c'd' and a'd'. The quadrilateral a'b'c'd' is thus obtained.
Because image acquisition devices are mounted at different angles and parking space regions are planned at different orientations, the region of an actual vehicle in a captured image is generally not a rectangle but a quadrilateral, so a vehicle region determined with two rotation angles is more accurate.
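Continuing the earlier sketch (and reusing its `_intersect_with_box` helper), the two-angle construction of fig. 9 may read as follows. This is again an illustrative sketch, not the patent's code: since the screen sense of "clockwise" depends on the coordinate convention, the two β rotations are interpreted so that the two new sides come out parallel, which makes the result a parallelogram consistent with the figure.

```python
def vehicle_quad_two_angles(box, alpha_deg, beta_deg):
    """Fig. 9: rotate the diagonal by alpha about the centre O to get a' on
    the top side and c' on the bottom side; then pass lines through c' and
    a' at angle beta to the chord a'c' to get b' on the right side bc and
    d' on the left side da. The two new sides are parallel by construction."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    alpha, beta = math.radians(alpha_deg), math.radians(beta_deg)
    theta = math.atan2(y2 - y1, x2 - x1) + alpha   # chord a'c' = rotated diagonal
    a_, c_ = _intersect_with_box(cx, cy, theta, x1, y1, x2, y2)
    slope = math.tan(theta + beta)                 # direction of sides c'b' and a'd'
    b_ = (x2, c_[1] + (x2 - c_[0]) * slope)        # b' on the right side bc
    d_ = (x1, a_[1] + (x1 - a_[0]) * slope)        # d' on the left side da
    return [a_, b_, c_, d_]                        # degenerates if theta + beta = 90 deg
```

For angle combinations where the patent's construction is geometrically realizable, b' and d' land inside their sides; otherwise the sketch simply returns the line intersections with the extended sides.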
In other embodiments, steps 54 and 55 may be the following flow:
the control diagonal line rotates clockwise according to a first rotation angle by taking the central point as a reference, intersects the first edge at a first intersection point and intersects the third edge at a second intersection point; determining a connecting line between the first intersection point and the second intersection point; the control connecting line rotates clockwise according to a second rotation angle by taking the second intersection point as a reference, and intersects the second edge at a third intersection point; wherein the second rotation angle and the first rotation angle are complementary. The control connecting line rotates anticlockwise according to a second rotation angle by taking the first intersection point as a reference, and intersects the fourth edge at a fourth intersection point; and connecting the first intersection point, the third intersection point, the second intersection point and the fourth intersection point, and taking a region formed by the first intersection point, the third intersection point, the second intersection point and the fourth intersection point as a vehicle region.
It is to be understood that the order of rotation of the first rotation angle and the second rotation angle in the above description is not mandatory.
Step 56: determining a parking space occupancy result based on the vehicle region and the parking space region.
In some embodiments, the parking space occupancy result may be determined from the overlap between the vehicle region and the parking space region. For example, if the vehicle region overlaps a parking space region by seventy percent, the parking space of that region is determined to be occupied by the vehicle; if the overlap is only twenty percent, the vehicle is determined to be illegally parked and a warning is issued.
In this embodiment, a detection image containing at least one parking space region is acquired; the target detection frame and the first and second rotation angles corresponding to the target vehicle in the detection image are determined; the vehicle region is determined from the target detection frame based on the first and second rotation angles; and the parking space occupancy result is determined from the vehicle region and the parking space region. Using the two determined rotation angles to obtain the vehicle region from the target detection frame improves the accuracy of the vehicle region and hence of the occupancy result, so that parking spaces correspond to vehicles more precisely and the occupancy judgment is more accurate.
Referring to fig. 10, fig. 10 is a schematic flow chart of another embodiment of the method for detecting parking space occupancy provided by the present application. The method comprises the following steps:
step 81: and acquiring a detection image, wherein the detection image comprises at least one parking space area.
Step 82: determining a target detection frame and a rotation angle corresponding to a target vehicle in a detection image by using a target detection model; the target detection frame is rectangular, and the rotation angle represents the angle between the geometric characteristic line of the vehicle and the geometric characteristic line of the target detection frame.
Step 83: a vehicle area is determined from the target detection frame based on the rotation angle, the vehicle area being smaller than the target detection frame.
In this embodiment, steps 81 to 83 have the same or similar technical solutions as any of the above embodiments, and are not described herein again.
Step 84: determining at least one target parking space region corresponding to the vehicle region.
In some embodiments, at least one target parking space region may be preconfigured, for example as the region information of the corresponding parking space.
In some embodiments, at least one target parking space region is determined with an image processing algorithm, for example by extracting the region information of the target parking space while no vehicle is parked, or by detecting the parking space regions in the detection image itself.
In some embodiments, at least one target parking space region is determined using a target detection model.
Step 85: determining the intersection ratio of the vehicle region and the at least one target parking space region.
The intersection ratio (intersection-over-union, IoU) is the ratio of the intersection to the union of the vehicle region and a target parking space region.
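As an illustration, with both regions represented as quadrilaterals, the intersection ratio can be computed with a polygon library; the use of shapely here is an assumption made for the sketch, not something the patent specifies.

```python
from shapely.geometry import Polygon

def intersection_ratio(quad_a, quad_b):
    """quad_* is a list of (x, y) vertices in order; returns the IoU in [0, 1]."""
    pa, pb = Polygon(quad_a), Polygon(quad_b)
    inter = pa.intersection(pb).area
    union = pa.union(pb).area
    return inter / union if union > 0 else 0.0
```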
Step 86: determining the parking space occupancy result based on the intersection ratio.
In some embodiments, referring to fig. 11, steps 85 and 86 may be the following flows:
step 91: and determining the intersection ratio of the vehicle area and each target parking space area.
And step 92: and if the intersection ratio is greater than or equal to the first preset value, determining that the target parking space area is occupied.
For example, if the intersection ratio is greater than or equal to 0.5, the target parking space region entering that intersection ratio computation is determined to be occupied. The vehicle can then be bound to the parking space and the space state updated from empty to occupied. In other embodiments, the first preset value may be set according to the specific position of the parking space region in the detection image, for example 0.52 or 0.54.
Step 93: if every intersection ratio is smaller than the first preset value but two intersection ratios are larger than the second preset value, determining that the two adjacent target parking space regions are occupied, the second preset value being smaller than the first preset value.
For example, with the first preset value set to 0.5 and the second preset value set to 0.2: if every intersection ratio is below 0.5 but two of them exceed 0.2, the two adjacent target parking space regions are both determined to be occupied. In that case the vehicle is illegally parked and occupies two parking spaces.
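Putting the two thresholds together (and reusing `intersection_ratio` from the sketch above), the decision rule may be sketched as follows; the function name and return format are illustrative, and the thresholds default to the example values of 0.5 and 0.2.

```python
def classify_occupancy(vehicle_quad, slot_quads, t_high=0.5, t_low=0.2):
    """Bind the vehicle to one slot if some IoU >= t_high; otherwise flag an
    illegally parked vehicle if exactly two IoUs exceed t_low."""
    if not slot_quads:
        return "none", []
    ious = [intersection_ratio(vehicle_quad, q) for q in slot_quads]
    best = max(range(len(ious)), key=ious.__getitem__)
    if ious[best] >= t_high:
        return "occupied", [best]          # update slot state: empty -> occupied
    straddled = [i for i, v in enumerate(ious) if v > t_low]
    if len(straddled) == 2:
        return "straddling", straddled     # illegal parking across two slots
    return "none", []
```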
The following description is made with reference to fig. 12:
As shown in fig. 12, vehicle regions C1 and C2 are present in the detection image, together with parking space regions A, B, C, D, E, F, G, H, I, J, K, and L.
The intersection ratios of vehicle regions C1 and C2 with each of these parking space regions are determined. As shown in fig. 12, the intersection ratio of vehicle region C1 with parking space regions A, B, E, F, I, J, K, and L is 0.
The intersection ratio of vehicle region C1 with parking space region C is greater than 0.5, while its intersection ratios with regions D, G, and H are each less than 0.5. Since the ratio with region C exceeds 0.5, parking space region C is determined to be occupied; the vehicle can be bound to the space and the space state updated from empty to occupied.
Continuing with fig. 12, the intersection ratio of vehicle region C2 with parking space regions A, B, C, D, E, F, G, H, I, and J is 0. Its intersection ratios with regions K and L are each less than 0.5 but greater than 0.2, so parking space regions K and L are both determined to be occupied: the vehicle is illegally parked and occupies two parking spaces.
In this embodiment, a detection image containing at least one parking space region is acquired; the target detection frame and the first and second rotation angles corresponding to the target vehicle in the detection image are determined; the vehicle region is determined from the target detection frame based on the two rotation angles; and the parking space occupancy result is determined from the intersection ratio of the vehicle region and the parking space regions. Using the determined rotation angles to obtain the vehicle region from the target detection frame improves the accuracy of the vehicle region, and the intersection ratio then yields the occupancy result, so that parking spaces correspond to vehicles more precisely and the occupancy judgment is more accurate.
Referring to fig. 13, fig. 13 is a schematic flow chart of another embodiment of the method for detecting parking space occupancy according to the present application. The method comprises the following steps:
step 111: and acquiring a detection image, wherein the detection image comprises at least one parking space area.
Step 112: inputting the detection image into the trained target detection model to obtain a target detection frame and a rotation angle; the target detection frame is rectangular, and the rotation angle represents the angle between the geometric characteristic line of the vehicle and the geometric characteristic line of the target detection frame.
Step 113: a vehicle area is determined from the target detection frame based on the rotation angle, the vehicle area being smaller than the target detection frame.
Step 114: determining a parking space occupancy result based on the vehicle region and the parking space region.
Referring to fig. 14, a training process of the target detection model will be described:
step 121: acquiring a training image; the training image is marked with a target detection frame of the target vehicle and real information, and the real information comprises an intersection point of the real frame of the target vehicle and the target detection frame.
Step 122: detecting the training image with the target detection model to obtain detection information of the target vehicle, the detection information comprising the final detection frame of the target vehicle.
In some embodiments, the target detection model includes a feature extraction network and a classification layer. Referring to fig. 15, step 122 may be the following process:
step 1221: and inputting the training image into a feature extraction network to obtain a multi-dimensional feature map.
Specifically, the feature extraction network downsamples the training image N times in sequence to obtain N initial feature maps, where N is greater than 2; then, for i from 0 to N-1, the (i+1)-th upsampling is performed based on the (N-i)-th initial feature map to obtain the (i+1)-th final feature map.
The feature extraction network is a feature pyramid network FPN, the feature pyramid network FPN comprises a plurality of up-sampling layers and down-sampling layers corresponding to the up-sampling layers, and each up-sampling layer or each down-sampling layer comprises convolution with different resolutions.
The following description will be made with reference to fig. 16 and 17:
The feature extraction network is a feature pyramid network FPN comprising a plurality of upsampling layers and corresponding downsampling layers. As shown in fig. 16, the FPN includes 3 upsampling layers and 3 corresponding downsampling layers, and the feature extraction network downsamples the training image 3 times in sequence to obtain the corresponding 3 initial feature maps.
For example, the training image Y is input to the first down-sampling layer 141 and down-sampled to output the 1 st initial feature map, the 1 st initial feature map is input to the second down-sampling layer 142 and down-sampled to output the 2 nd initial feature map, the 2 nd initial feature map is input to the third down-sampling layer 143 and down-sampled to output the 3 rd initial feature map.
Inputting the 3 rd initial feature map into the first up-sampling layer 144 for up-sampling processing, outputting the 1 st final feature map, inputting the 1 st final feature map into the second up-sampling layer 145 for up-sampling processing, outputting the 2 nd final feature map, inputting the 2 nd final feature map into the third up-sampling layer 146 for up-sampling processing, and outputting the 3 rd final feature map.
Referring to fig. 17, each upsampling layer and each downsampling layer contains convolutions of different resolutions, for example a 1 × 1 convolution, a 3 × 3 convolution, and a 5 × 5 convolution. The feature map is convolved with each of them, and the resulting feature maps are fused to obtain the final feature map. Convolving the feature map with different kernels enlarges the receptive field, so that more feature information can be learned.
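This structure can be sketched in PyTorch as follows. The framework, the channel widths, and the use of nearest-neighbour upsampling with additive fusion are assumptions made for the sketch; the description above fixes only the 3+3 layer layout and the 1×1/3×3/5×5 multi-resolution convolutions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResBlock(nn.Module):
    """Fig. 17: parallel 1x1, 3x3, and 5x5 convolutions whose outputs are fused."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.b1 = nn.Conv2d(c_in, c_out, 1)
        self.b3 = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.b5 = nn.Conv2d(c_in, c_out, 5, padding=2)
    def forward(self, x):
        return torch.relu(self.b1(x) + self.b3(x) + self.b5(x))  # fuse the branches

class TinyFPN(nn.Module):
    """Fig. 16: 3 downsampling stages, then 3 upsampling stages, each fusing
    the matching-resolution initial feature map back in."""
    def __init__(self, c=32):
        super().__init__()
        self.down = nn.ModuleList([MultiResBlock(3 if i == 0 else c, c) for i in range(3)])
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ModuleList([MultiResBlock(c, c) for _ in range(3)])

    def forward(self, x):
        initial = []
        for stage in self.down:               # 3 downsamplings -> 3 initial maps
            x = self.pool(stage(x))
            initial.append(x)
        finals, y = [], initial[-1]
        for i, stage in enumerate(self.up):   # (i+1)-th upsampling uses the
            y = stage(y)                      # (N-i)-th initial map
            finals.append(y)
            if i < 2:
                y = F.interpolate(y, scale_factor=2) + initial[1 - i]
        return finals                         # one final map per detection layer

# e.g. finals = TinyFPN()(torch.randn(1, 3, 256, 256))  -> maps at 1/8, 1/4, 1/2 scale
```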
Step 1222: inputting the multi-dimensional feature map into the classification layer to obtain the detection information of the target vehicle in the training image.
The final feature map is input into the classification layer to obtain the detection information of the target vehicle in the training image.
Step 123: adjusting the network parameters of the target detection model according to the difference between the ground-truth information and the detection information of the target vehicle.
In some embodiments, the number of training iterations of the target detection model may be adjusted according to the difference between the ground-truth information and the detection information of the target vehicle, thereby adjusting the network parameters of the model. For example, if the ground truth is A but the detection is B, the number of training iterations can be increased and the network parameters adjusted further; the same applies if the detection is A but its confidence is below a set threshold.
In some embodiments, the network parameters of the target detection model may be adjusted directly according to the difference between the ground-truth information and the detection information; where the model contains a convolutional neural network, for example, the number, stride, and padding of the convolution kernels may be set, the activation function adjusted, the pooling-layer parameters adjusted, and so on.
In some embodiments, a loss value may be computed from the ground-truth information and the detection information of the target vehicle, and the network parameters of the target detection model are adjusted if the loss value deviates from a preset loss threshold.
In one application scenario, the YOLO algorithm may be used to train the target detection model, adding 2 dimensions on top of the usual five-tuple to complete the rotation angle prediction. The target detection model uses a pyramid network structure to handle both large and small targets: the deep network is optimized chiefly for recognizing large targets, and, after the deep features are fused back into the shallow network, the shallow network is optimized chiefly for small targets. The training images cover horizontal and non-horizontal camera-versus-parking-space-line scenes in a ratio of 10:1. The feature pyramid network comprises 3 detection layers, and the loss function LOSS of each detection layer is composed of foreground prediction, classification prediction, background prediction, and coordinate regression terms, supplemented with the prediction of the two angles, each term carrying its own weight λ.
before training, because the angle range takes values of 0-90, in order to reduce the large scale value in the loss function and influence regression of other prediction items, normalization processing is carried out on the angle, and the angle value range is updated to be between 0-1. And when the lambda 5 is updated, the occupied proportion of each lambda weight is consistent with that of the lambda 4. Training for 20 ten thousand times to complete model training.
The training images are calibrated in advance: the circumscribed rectangle of the rotated target (the vehicle region) is annotated, together with 3 corner points A, B, and C, as shown in fig. 18. The intersection points of the vehicle with the circumscribed rectangular frame are calibrated in corner-point order, taken on the right, left, and lower sides respectively.
The angle regression algorithm maps the rotation direction and the change in width and height scale through two added angles, α and β, as shown in fig. 18. Both angles range over 0–90 degrees and take integer values. α maps the change in angle of the same target before and after rotation; β maps the change in its width and height scale. The two angles are computed as follows: the actual diagonal angle β of the rotated frame is the angle between the line connecting the first and second corner points and the line connecting the second and third corner points; the diagonal angle α of the circumscribed rectangle is the angle between the diagonal connecting the lower-left and upper-right corners of the circumscribed rectangular frame and the line connecting the first and second corner points. The algorithm model regresses these two angles to complete rotated-target detection.
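For illustration, deriving the two regression targets from an annotated sample might look as follows; the corner-point semantics, the y-down image-coordinate convention, and the rounding are assumptions based on the description above.

```python
import math

def _angle_deg(u, v):
    """Unsigned angle in degrees between 2-D vectors u and v."""
    dot = u[0] * v[0] + u[1] * v[1]
    denom = math.hypot(*u) * math.hypot(*v)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / denom))))

def label_angles(box, p1, p2, p3):
    """box = (x1, y1, x2, y2); p1, p2, p3 are the three annotated corner
    points. beta: angle between lines p1-p2 and p2-p3 of the rotated frame.
    alpha: angle between the box diagonal (lower-left to upper-right) and
    the line p1-p2. Integer values, as stated in the text."""
    x1, y1, x2, y2 = box
    diag = (x2 - x1, y1 - y2)                 # lower-left -> upper-right (y down)
    s12 = (p2[0] - p1[0], p2[1] - p1[1])
    s23 = (p3[0] - p2[0], p3[1] - p2[1])
    beta = round(_angle_deg(s12, s23))
    alpha = round(_angle_deg(diag, s12))
    return alpha, beta
```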
At detection time, the circumscribed rectangular frame of the target vehicle is predicted and its diagonal is taken; the diagonal is rotated clockwise by α about the center point, giving the intersection points A and B with the circumscribed rectangle; starting from point B, the direction of side BA is rotated clockwise by β to give the intersection point C with the rectangular frame; similarly, starting from point A, the direction of side AB is rotated counterclockwise by β to give the intersection point D with the rectangular frame, as shown in fig. 19.
After the vehicle region is determined, its intersection ratio with each parking space region is computed. A parking space region whose intersection ratio is greater than 0.5 is selected; the vehicle is bound to that parking space and the space state is updated from empty to occupied.
In addition, if the intersection ratio of each parking space region with the vehicle is less than 0.5 but 2 of the intersection ratios are greater than 0.2, the vehicle is illegally parked and occupies two parking spaces, and a warning is issued.
During training of the target detection model, two weighting schemes may be adopted: non-silent angle weights and silent angle weights. With non-silent angle weights, λ4 and λ5 are initialized to the same value from the start of training. With silent angle weights, λ5 is initialized to 0 at the start of training, and after the loss has stabilized it is updated so that λ4 and λ5 are again the same.
The change in the loss value under the non-silent and silent angle weight schemes is compared below.
The loss training result with non-silent angle weights is shown in fig. 20. During training the rotation angles α and β regress simultaneously with the coordinates; for a period the loss floats up and down slightly, as the rotation angles interfere with the rectangular frame and its deformation, and the loss then tends to stabilize as the number of iterations increases.
The training results with silent angle weights are shown in fig. 21 and fig. 22. The weights are adjusted dynamically during training: at the start, the weights of the 2 angles are set to a silent value of 0 and mainly the center point and the coordinates of the circumscribed rectangular frame are regressed; after the loss stabilizes, the loss weights of the 2 angles are added. Comparing the two methods, the non-silent method shows severe loss jitter in the early part of training, while the silent method's loss is stable and convergent in the later part: after the angle weights are added, the loss rises and then falls quickly, as the deformation from the rectangular frame to the rotated frame is quickly adjusted and regressed.
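The silent-angle-weight schedule can be sketched as follows. The moving-average stabilisation test and its window and tolerance are assumed heuristics; the text says only that λ5 starts at 0 and is raised to match λ4 once the loss is stable.

```python
class SilentAngleWeight:
    """Keep the angle weight at 0 until the loss stabilises, then set it to lambda4."""
    def __init__(self, lambda4, window=1000, tol=1e-3):
        self.l4, self.l5 = lambda4, 0.0
        self.window, self.tol = window, tol
        self.history = []

    def update(self, loss_value):
        self.history.append(loss_value)
        if self.l5 == 0.0 and len(self.history) >= 2 * self.window:
            prev = sum(self.history[-2 * self.window:-self.window]) / self.window
            curr = sum(self.history[-self.window:]) / self.window
            if abs(prev - curr) < self.tol:   # loss has stabilised
                self.l5 = self.l4             # un-silence the angle terms
        return self.l4, self.l5
```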
Referring to fig. 23, fig. 23 is a schematic structural diagram of an embodiment of the parking space occupation detection device provided by the present application. The detection device 210 includes a processor 211 and a memory 212 coupled to the processor 211; the memory 212 is used for storing a computer program, and the processor 211 is used for executing the computer program to implement the following method:
acquiring a detection image, wherein the detection image comprises at least one parking space area; determining a target detection frame and a rotation angle corresponding to a target vehicle in a detection image by using a target detection model; the target detection frame is rectangular, and the rotation angle represents the angle between the geometric characteristic line of the vehicle and the geometric characteristic line of the target detection frame; determining a vehicle area from the target detection frame based on the rotation angle, wherein the vehicle area is smaller than the target detection frame; and determining a parking space occupation result based on the vehicle region and the parking space region.
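Putting the steps together, a high-level sketch of the claimed method, reusing the hypothetical helpers from the sketches above (`model` stands in for the trained target detection model, and its output format is an assumption):

```python
def detect_parking_occupancy(image, model, space_polys):
    """End-to-end sketch: detect the circumscribed box and the two rotation
    angles, shrink the box to the vehicle region, then score that region
    against the parking space regions."""
    x1, y1, x2, y2, alpha, beta = model(image)   # assumed model output format
    vehicle_quad = decode_vehicle_region(x1, y1, x2, y2, alpha, beta)
    return update_parking_spaces(vehicle_quad, space_polys)
```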
It is to be understood that the processor 211 is further configured to execute the computer program to implement the method of any of the above embodiments; for details, reference is made to the above embodiments, which are not repeated here.
Referring to fig. 24, fig. 24 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application. The computer-readable storage medium 220 is used for storing a computer program 221, and the computer program 221, when executed by a processor, is used for implementing the following method:
acquiring a detection image, wherein the detection image comprises at least one parking space area; determining a target detection frame and a rotation angle corresponding to a target vehicle in a detection image by using a target detection model; the target detection frame is rectangular, and the rotation angle represents the angle between the geometric characteristic line of the vehicle and the geometric characteristic line of the target detection frame; determining a vehicle area from the target detection frame based on the rotation angle, wherein the vehicle area is smaller than the target detection frame; and determining a parking space occupation result based on the vehicle region and the parking space region.
It is to be understood that the computer program 221, when executed by a processor, is further used to implement the method of any of the above embodiments; for details, reference is made to the above embodiments, which are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated units described in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.
Claims (12)
1. A parking space occupation detection method, characterized in that the detection method comprises:
acquiring a detection image, wherein the detection image comprises at least one parking space area;
determining a target detection frame and a rotation angle corresponding to a target vehicle in the detection image by using a target detection model; the target detection frame is rectangular, and the rotation angle represents an angle between a geometric characteristic line of the vehicle and the geometric characteristic line of the target detection frame;
determining a vehicle area from the target detection frame based on the rotation angle, the vehicle area being smaller than the target detection frame;
and determining a parking space occupation result based on the vehicle region and the parking space region.
2. The method of claim 1, wherein the rotation angle comprises a first rotation angle and a second rotation angle;
the determining a vehicle region from the target detection frame based on the rotation angle includes:
determining a diagonal line of the target detection frame;
controlling the diagonal line to rotate according to the first rotation angle and the second rotation angle, and intersecting the target detection frame;
and taking an area formed by the points intersected with the target detection frame as the vehicle area.
3. The method of claim 2,
the target detection frame comprises a first edge, a second edge, a third edge and a fourth edge which are connected in sequence;
the controlling the diagonal line to rotate according to the first rotation angle and the second rotation angle, and intersecting with the target detection frame includes:
controlling the diagonal line to rotate clockwise according to the first rotation angle by taking a central point as a reference, and intersecting the first edge at a first intersection point and the third edge at a second intersection point;
determining a connecting line between the first intersection point and the second intersection point;
controlling the connecting line to rotate clockwise according to the second rotation angle by taking the second intersection point as a reference, and intersecting the second edge at a third intersection point;
controlling the connecting line to rotate anticlockwise according to the second rotation angle by taking the first intersection point as a reference, and intersecting the fourth edge at a fourth intersection point;
the regarding a region formed by points intersecting the target detection frame as the vehicle region includes:
and connecting the first intersection point, the third intersection point, the second intersection point and the fourth intersection point, and taking a region formed by the first intersection point, the third intersection point, the second intersection point and the fourth intersection point as the vehicle region.
4. The method of claim 1, further comprising:
acquiring a training image; wherein the training image is marked with a target detection frame of a target vehicle and real information, and the real information comprises an intersection point of the real frame of the target vehicle and the target detection frame;
detecting the training image by using a target detection model to obtain detection information of the target vehicle, wherein the detection information of the target vehicle comprises a final detection frame of the target vehicle;
and adjusting the network parameters of the target detection model according to the difference between the real information and the detection information of the target vehicle.
5. The method of claim 4, wherein the target detection model comprises a feature extraction network and a classification layer;
the detecting the training image by using the target detection model to obtain the detection information of the target vehicle comprises:
inputting the training image into a feature extraction network to obtain a multi-dimensional feature map;
and inputting the multi-dimensional feature map into the classification layer to obtain the detection information of the target vehicle in the training image.
6. The method of claim 5, wherein inputting the training image to a feature extraction network to obtain a multi-dimensional feature map comprises:
sequentially carrying out N times of downsampling on the training image by using the feature extraction network to obtain an N-dimensional initial feature map, wherein N is greater than 2;
for the N-dimensional initial feature map, performing the (i+1)-th upsampling based on the (N-i)-dimensional initial feature map to obtain the (i+1)-th final feature map; wherein i is an integer from 0 to N-1.
7. The method of claim 5, wherein the feature extraction network is a Feature Pyramid Network (FPN) comprising a plurality of upsampling layers and downsampling layers corresponding to the plurality of upsampling layers, each of the upsampling layers or the downsampling layers comprising a convolution of a different resolution.
8. The method of claim 1, wherein determining the parking space occupancy result based on the vehicle region and the parking space region comprises:
determining at least one target parking space region corresponding to the vehicle region;
determining the intersection ratio of the vehicle area and the at least one target parking space area;
and determining a parking space occupation result based on the intersection ratio.
9. The method of claim 8, wherein the determining at least one target parking space region corresponding to the vehicle region comprises:
acquiring at least one preset target parking space area; or
determining at least one target parking space area by using an image processing algorithm; or
and determining at least one target parking space region by using the target detection model.
10. The method of claim 8, wherein the determining the intersection ratio of the vehicle region and the at least one target space region comprises:
determining the intersection ratio of the vehicle area and each target parking space area;
the determining of the parking space occupation result based on the intersection ratio comprises:
if the intersection ratio is larger than or equal to a first preset value, determining that the target parking space area is occupied;
if each intersection ratio is smaller than the first preset value and two of the intersection ratios are larger than a second preset value, determining that two adjacent target parking space areas are occupied; wherein the second preset value is smaller than the first preset value.
11. A parking space occupation detection device, characterized in that the detection device comprises a processor and a memory coupled to the processor;
wherein the memory is adapted to store a computer program and the processor is adapted to execute the computer program to implement the method according to any of claims 1-10.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium is used to store a computer program which, when being executed by a processor, is used to carry out the method according to any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111572078.7A CN114463252A (en) | 2021-12-21 | 2021-12-21 | Parking space occupation detection method, detection device and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114463252A (en) | 2022-05-10
Family
ID=81405668
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111572078.7A Pending CN114463252A (en) | 2021-12-21 | 2021-12-21 | Parking space occupation detection method, detection device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114463252A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114998929A (en) * | 2022-05-27 | 2022-09-02 | 江苏慧眼数据科技股份有限公司 | Fisheye camera bounding box identification method, fisheye camera bounding box identification system, fisheye camera bounding box identification equipment and application |
CN114694124A (en) * | 2022-05-31 | 2022-07-01 | 成都国星宇航科技股份有限公司 | Parking space state detection method and device and storage medium |
CN116310390A (en) * | 2023-05-17 | 2023-06-23 | 上海仙工智能科技有限公司 | Visual detection method and system for hollow target and warehouse management system |
CN116310390B (en) * | 2023-05-17 | 2023-08-18 | 上海仙工智能科技有限公司 | Visual detection method and system for hollow target and warehouse management system |
CN117894032A (en) * | 2024-03-14 | 2024-04-16 | 上海巡智科技有限公司 | Water meter reading identification method, system, electronic equipment and storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |