CN112560689A - Parking space detection method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN112560689A (application CN202011495266.XA)
- Authority
- CN
- China
- Prior art keywords
- parking space
- confidence coefficient
- fusion
- confidence
- parking
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/586—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a parking space detection method, a parking space detection device, electronic equipment and a storage medium; the method comprises the following steps: acquiring a visual parking space; matching the visual parking space with a preset parking space tracking list; updating the fusion parking space in the parking space tracking list according to the matching result to obtain a first confidence coefficient of the fusion parking space; and when the distance between the vehicle and the fusion parking space meets a preset distance condition and the first confidence coefficient is greater than a preset threshold value, outputting the fusion parking space. By means of the method and the device, the reliability of the fusion parking space is improved and accurate parking space detection is achieved.
Description
Technical Field
The invention relates to the technical field of parking space detection, in particular to a parking space detection method and device, electronic equipment and a storage medium.
Background
Automatic parking means that the vehicle is parked into a parking space automatically by an automatic parking system, without manual control.
Different automatic parking systems use different methods for detecting objects around the car. Some have sensors mounted around the front and rear bumpers that act as both transmitters and receivers. These sensors send signals that are reflected back when they hit obstacles around the vehicle body, and the on-board computer then uses the time taken to receive the reflection to determine the location of the obstacle. Other systems use bumper-mounted cameras or radar to detect obstacles. The end result is the same: the car detects parked vehicles, the size of the parking space and the distance to the curb, and drives itself into the space.
However, as the vehicle approaches a parking space from far to near, the detection accuracy of the parking space detection model improves as the distance decreases, and the parking space images acquired by the model at different stages may show slight deviations in the localized position of the parking space.
Disclosure of Invention
The invention provides a parking space detection method, a parking space detection device, electronic equipment and a storage medium, which are used for solving or partially solving the technical problem that a fusion parking space is influenced by parking space images acquired at different stages and has low accuracy.
The invention provides a parking space detection method, which comprises the following steps:
acquiring a visual parking space;
matching the visual parking space with a preset parking space tracking list;
updating the fusion parking spaces in the parking space tracking list according to the matching result to obtain a first confidence coefficient of the fusion parking spaces;
and when the distance between the vehicle and the fusion parking space meets a preset distance condition and the first confidence coefficient is greater than a preset threshold value, outputting the fusion parking space.
Optionally, the step of obtaining a visual parking space includes:
detecting an initial visual parking space through a preset parking space detection model;
determining the size and the shape of the initial visual parking space;
and acquiring the initial visual parking space with the size and the shape meeting the preset parking space standard as the visual parking space.
Optionally, the first confidence includes a first matching confidence and a first non-matching confidence; before the step of updating the fusion parking space in the parking space tracking list according to the matching result to obtain the first confidence of the fusion parking space, the method further includes:
determining a sending stage of the visual parking space, and acquiring a second confidence coefficient of each angular point of the visual parking space;
obtaining a confidence coefficient attenuation rate corresponding to the sending stage;
acquiring the current confidence of each angular point of the fusion parking space;
the step of updating the fusion parking space in the parking space tracking list according to the matching result to obtain a first confidence coefficient of the fusion parking space includes:
when the matching result is that the matching parking spaces and the unmatched parking spaces exist in the parking space tracking list, calculating a first matching confidence coefficient of each angular point of the matching parking spaces by adopting the second confidence coefficient, the confidence coefficient attenuation rate and the current confidence coefficient;
and calculating a first unmatched confidence coefficient of each corner point of the unmatched parking space by adopting the second confidence coefficient and the confidence coefficient attenuation rate.
Optionally, the first confidence level further comprises a second unmatched confidence level; the step of updating the fusion parking space in the parking space tracking list according to the matching result to obtain a first confidence coefficient of the fusion parking space further includes:
and when the matching result is that only unmatched parking spaces exist in the parking space tracking list, calculating a second unmatched confidence coefficient of each corner point of the unmatched parking spaces by adopting the second confidence coefficient and the confidence coefficient attenuation rate.
Optionally, after the step of updating the fusion parking space in the parking space tracking list according to the matching result to obtain the first confidence of the fusion parking space, the method further includes:
acquiring the current angular point position of the fusion parking space and the first angular point position of the visual parking space;
calculating the updating weight of the corner points of the matched parking spaces by adopting the first confidence coefficient and the second confidence coefficient of the corner points of the fusion parking spaces;
and calculating a second angular point position of the fusion parking space by adopting the updated weight, the current angular point position and the first angular point position.
Optionally, when the distance between the vehicle and the fusion parking space meets a preset distance condition and the first confidence degree is greater than a preset threshold value, the step of outputting the fusion parking space includes:
and when the distance between the vehicle and the fusion parking space meets a preset distance condition and the first confidence coefficient of each corner point of the fusion parking space is greater than a preset threshold value, outputting the fusion parking space and the second corner point position of the fusion parking space.
Optionally, the method further comprises:
and when the matching result is that only unmatched parking spaces exist in the parking space tracking list, adding the visual parking spaces to the parking space tracking list.
The invention also provides a parking space detection device, which comprises:
the visual parking space acquisition module is used for acquiring a visual parking space;
the matching module is used for matching the visual parking space with a preset parking space tracking list;
the first confidence coefficient updating module is used for updating the fusion parking spaces in the parking space tracking list according to the matching result to obtain a first confidence coefficient of the fusion parking spaces;
and the fusion parking space output module is used for outputting the fusion parking space when the distance between the vehicle and the fusion parking space meets the preset distance condition and the first confidence coefficient is greater than the preset threshold value.
Optionally, the first confidence includes a first matching confidence and a first non-matching confidence; the device further comprises:
the second confidence coefficient determining module is used for determining the sending stage of the visual parking space and acquiring a second confidence coefficient of each angular point of the visual parking space;
the confidence coefficient attenuation rate determining module is used for acquiring the confidence coefficient attenuation rate corresponding to the sending stage;
the current confidence coefficient determining module is used for acquiring the current confidence coefficient of each angular point of the fusion parking place;
the first confidence update module comprising:
a first matching confidence coefficient calculation submodule, configured to calculate a first matching confidence coefficient of each corner point of the matching parking space by using the second confidence coefficient, the confidence coefficient attenuation rate, and the current confidence coefficient when the matching result indicates that there are matching parking spaces and non-matching parking spaces in the parking space tracking list;
and the first unmatched confidence coefficient calculation submodule is used for calculating the first unmatched confidence coefficient of each corner point of the unmatched parking space by adopting the second confidence coefficient and the confidence coefficient attenuation rate.
Optionally, the first confidence level further comprises a second unmatched confidence level; the first confidence level updating module further comprises:
and the second unmatched confidence coefficient calculation submodule is used for calculating a second unmatched confidence coefficient of each corner point of the unmatched parking space by adopting the second confidence coefficient and the confidence coefficient attenuation rate when the matching result is that only the unmatched parking spaces exist in the parking space tracking list.
The invention also provides an electronic device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the parking space detection method according to an instruction in the program code.
The invention further provides a computer-readable storage medium for storing a program code for executing the parking space detection method.
According to the technical scheme, the invention has the following advantages: a visual parking space is acquired; the visual parking space is matched with a preset parking space tracking list; the fusion parking space in the parking space tracking list is updated according to the matching result to obtain a first confidence coefficient of the fusion parking space; and the fusion parking space is output when the first confidence coefficient is greater than a preset threshold value. The reliability of the fusion parking space is thereby improved, and accurate parking space detection is achieved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart illustrating steps of a parking space detection method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating steps of a parking space detection method according to another embodiment of the present invention;
fig. 3 is a flowchart illustrating a step of updating a first confidence level of a fusion parking space according to an embodiment of the present invention;
fig. 4 is a flowchart of a step of updating the corner position of each corner point of the fusion parking space according to the embodiment of the present invention;
fig. 5 is a block diagram of a parking space detection device according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a parking space detection method and device, electronic equipment and a storage medium, which are used for solving or partially solving the technical problem that a fusion parking space is influenced by parking space images acquired at different stages and has low accuracy.
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating steps of a parking space detection method according to an embodiment of the present invention.
The invention provides a parking space detection method, which specifically comprises the following steps:
Step 101, acquiring a visual parking space.
In the embodiment of the invention, the visual parking space refers to a visual image containing a parking space, obtained by acquiring data of the environment around the vehicle through a vehicle-mounted detection device.
In the embodiment of the invention, the vehicle-mounted camera and other modes can be adopted for visual parking space detection, and the selection of the detection device is not particularly limited in the embodiment of the invention.
It should be noted that, in the process of finding a parking space, the detection devices selected for acquiring each frame of visual image in the same area are the same, and meanwhile, the parameters of the detection devices need to be kept unchanged in the process of acquiring the visual images.
Step 102, matching the visual parking space with a preset parking space tracking list.
In the embodiment of the invention, a parking space tracking list needs to be maintained in advance; the parking space tracking list records the fusion data generated after each visual parking space acquisition.
Each newly collected visual parking space is matched with the fusion parking spaces in the parking space tracking list, and the fusion data in the list can then be adjusted, so that the positional deviation produced in previous parking space fusion steps is reduced. A minimal matching sketch is given below.
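The patent text does not prescribe a particular matching metric, so the following Python sketch is an illustration only: it assumes each tracked fusion parking space stores its corner positions, and matches by mean corner distance against a hypothetical threshold.

```python
import math

def match_slot(visual_corners, tracked_slots, max_corner_dist=0.5):
    """Return the index of the tracked fusion slot whose corners lie closest to
    the visual slot, or None if no slot is within the assumed distance threshold."""
    best_idx, best_dist = None, float("inf")
    for idx, slot in enumerate(tracked_slots):
        # mean Euclidean distance between corresponding corners
        dists = [math.dist(v, t) for v, t in zip(visual_corners, slot["corners"])]
        mean_dist = sum(dists) / len(dists)
        if mean_dist < best_dist:
            best_idx, best_dist = idx, mean_dist
    return best_idx if best_dist <= max_corner_dist else None
```

Other criteria (for example, overlap of the slot polygons) would serve equally well; the corner-distance test is used here only because it reuses the corner representation on which the rest of the method relies.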
Step 103, updating the fusion parking space in the parking space tracking list according to the matching result to obtain a first confidence coefficient of the fusion parking space.
Confidence, also called reliability, confidence level or confidence coefficient, reflects the fact that when a sample is used to estimate a population parameter, the conclusion is always uncertain because of the randomness of the sample. A probabilistic statement, namely interval estimation in mathematical statistics, is therefore used: the probability that the estimated value lies within a given allowable error range of the population parameter is called the confidence.
Specifically, in the embodiment of the present invention, according to the matching result, the confidence level of the fusion parking space in the parking space tracking list may be updated, so as to obtain the first confidence level of the fusion parking space.
Step 104, outputting the fusion parking space when the distance between the vehicle and the fusion parking space meets a preset distance condition and the first confidence coefficient is greater than a preset threshold value.
In the embodiment of the present invention, a confidence threshold may be preset, and when the distance between the vehicle and the fusion parking space meets the preset distance condition, if the first confidence of the fusion parking space in the parking space tracking list is greater than the confidence threshold, it represents that the fusion parking space can more accurately express the specific position of the parking space in the actual environment, and at this time, the fusion parking space may be output.
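As a minimal sketch of this output check (the maximum output distance and the confidence threshold below are illustrative assumptions, not values from the patent), assuming per-corner confidences are stored with each fusion parking space:

```python
def should_output(slot, vehicle_to_slot_dist, max_dist=7.0, conf_threshold=0.8):
    """Output the fusion slot only when the vehicle is close enough and every
    corner confidence exceeds the assumed threshold."""
    close_enough = vehicle_to_slot_dist <= max_dist
    confident = all(c >= conf_threshold for c in slot["corner_confidences"])
    return close_enough and confident
```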
A visual parking space is acquired; the visual parking space is matched with a preset parking space tracking list; the fusion parking space in the parking space tracking list is updated according to the matching result to obtain a first confidence coefficient of the fusion parking space; and the fusion parking space is output when the first confidence coefficient is greater than the preset threshold value. The reliability of the fusion parking space is thereby improved, and accurate parking space detection is achieved.
Referring to fig. 2, fig. 2 is a flowchart illustrating steps of a parking space detection method according to another embodiment of the present invention, where the method specifically includes the following steps:
Step 201, detecting an initial visual parking space through a preset parking space detection model.
In the embodiment of the invention, a parking space detection model can be preset to detect the visual parking space.
Step 202, determining the size and the shape of the initial visual parking space.
Step 203, acquiring the initial visual parking space whose size and shape meet the preset parking space standard as the visual parking space.
After the initial visual parking space is detected, its size and shape need to be checked to judge whether it is a reasonable parking space. For example, the gap between adjacent parking spaces is easily misjudged as a parking space; by analyzing the size and shape of the visual parking space, whether it meets the requirements of a standard parking space can be judged, unreasonable candidates are eliminated, and the subsequent parking space fusion operation is performed only on reasonable parking spaces. A hedged sketch of such a check follows.
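The side-length bounds and right-angle tolerance below are illustrative assumptions only (the patent leaves the parking space standard unspecified); corners are assumed to be 2-D points in a bird's-eye-view frame, in metres.

```python
import math

def is_plausible_slot(corners, min_len=2.0, max_len=7.0, angle_tol_deg=15.0):
    """Reject slot candidates whose edge lengths or corner angles do not look
    like a standard (roughly rectangular) parking space."""
    n = len(corners)
    for i in range(n):
        a, b, c = corners[i], corners[(i + 1) % n], corners[(i + 2) % n]
        if not (min_len <= math.dist(a, b) <= max_len):
            return False
        # angle at corner b between edges b->a and b->c
        v1 = (a[0] - b[0], a[1] - b[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        cos_ang = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_ang))))
        if abs(angle - 90.0) > angle_tol_deg:
            return False
    return True
```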
Step 204, matching the visual parking space with a preset parking space tracking list.
In the embodiment of the invention, a parking space tracking list needs to be maintained in advance; the parking space tracking list records the fusion data generated after each visual parking space acquisition.
Each newly collected visual parking space is matched with the fusion parking spaces in the parking space tracking list, and the fusion data in the list can then be adjusted, so that the positional deviation produced in previous parking space fusion steps is reduced.
Step 205, updating the fusion parking space in the parking space tracking list according to the matching result to obtain a first confidence coefficient of the fusion parking space.
In the embodiment of the invention, the confidence coefficient of the fusion parking space in the parking space tracking list can be updated according to the matching result, so as to obtain the first confidence coefficient of the fusion parking space.
In a specific example of the present invention, as shown in fig. 3, step 205 may further include the following sub-steps:
S11, determining a sending stage of the visual parking space, and acquiring a second confidence coefficient of each corner point of the visual parking space;
S12, obtaining the confidence coefficient attenuation rate corresponding to the sending stage;
S13, acquiring the current confidence of each corner point of the fusion parking space;
step 205 may specifically include the following sub-steps:
S14, when the matching result is that the matching parking spaces and the unmatched parking spaces exist in the parking space tracking list, calculating a first matching confidence coefficient of each corner point of the matching parking spaces by adopting a second confidence coefficient, a confidence coefficient attenuation rate and a current confidence coefficient;
S15, calculating a first unmatched confidence coefficient of each corner point of the unmatched parking space by adopting the second confidence coefficient and the confidence coefficient attenuation rate.
In the embodiment of the invention, a corresponding attenuation strategy can be adopted for fusion parking spaces in different stages, with different confidence coefficient attenuation rates determined for different stages. The specific confidence attenuation rate can be obtained through multiple tests according to factors such as the distance between the vehicle and the parking space and the accuracy of the parking space detection model, and the invention is not specifically limited in this respect.
In a specific implementation, after the visual parking space is collected, a second confidence coefficient of each corner point of the visual parking space can be calculated, and the current confidence coefficient of each corner point of each fusion parking space in the parking space tracking list can be obtained. An updating mode for the fusion parking space is then selected according to whether the fusion parking space in the parking space tracking list matches the visual parking space.
In one example, when the visual parking space is matched with a parking space in the parking space tracking list, the second confidence of the visual parking space, the current confidence of the fusion parking space and the confidence attenuation rate of the current stage may be used to update the first confidence of each corner point of the fusion parking space. The specific calculation process can be seen in the following formula:
f_t = f_{t-1} * α_n + f_new
where f_t is the first confidence coefficient of the fusion parking space corner point, f_{t-1} is the current confidence coefficient of the fusion parking space corner point, f_new is the second confidence coefficient of the visual parking space corner point, and α_n is the confidence coefficient attenuation rate.
For fusion parking spaces that cannot be matched with the visual parking space, the second confidence coefficient and the confidence coefficient attenuation rate can be used to update the first confidence coefficient of each corner point of the fusion parking space. The specific calculation process can be seen in the following formula:
f_t = f_{t-1} * α_n
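Taking the two formulas above together, a minimal Python sketch of the per-corner confidence update might look as follows; the per-stage attenuation rates are placeholder values, since the patent leaves them to be determined by testing.

```python
# Assumed attenuation rates per sending stage -- illustrative values only.
STAGE_DECAY = {"far": 0.90, "mid": 0.95, "near": 0.99}

def update_confidence(f_prev, stage, f_new=None):
    """f_t = f_{t-1} * α_n + f_new for a corner matched with a visual detection;
    f_t = f_{t-1} * α_n when the corner is not matched."""
    alpha_n = STAGE_DECAY[stage]
    f_t = f_prev * alpha_n
    if f_new is not None:   # matched case: add the visual corner's confidence
        f_t += f_new
    return f_t
```

A matched corner therefore both decays its old confidence and gains the confidence of the new detection, while an unmatched corner only decays.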
in another example, the first confidence level further comprises a second unmatched confidence level; step 205 may further include:
and S16, when the matching result is that only unmatched parking spaces exist in the parking space tracking list, calculating a second unmatched confidence coefficient of each corner point of the unmatched parking spaces by adopting a second confidence coefficient and a confidence coefficient attenuation rate.
In a specific implementation, when the visual parking space cannot be matched with any fusion parking space in the parking space tracking list, the formula f_t = f_{t-1} * α_n can be used to calculate the first confidence coefficient of each corner point of the fusion parking space.
And further, when the matching result is that only unmatched parking spaces exist in the parking space tracking list, the visual parking spaces are added to the parking space tracking list.
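The patent does not specify how tracking-list entries are stored; consistent with the earlier sketches, the following assumes each fusion parking space is kept as a small record of corner positions and per-corner confidences, and shows how a visual parking space that matched nothing might be appended as a new entry.

```python
def add_visual_slot(tracked_slots, visual_corners, corner_confidences):
    """Append an unmatched visual parking space to the tracking list as a new
    fusion parking space, seeded with its own corners and confidences."""
    tracked_slots.append({
        "corners": list(visual_corners),                # fused corner positions
        "corner_confidences": list(corner_confidences), # per-corner confidences
    })
```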
In the embodiment of the invention, after the fusion parking spaces in the parking space tracking list are updated, the positions of the angular points of the fusion parking spaces can be updated. As shown in fig. 4, the specific update process may include the following sub-steps:
S21, acquiring a current corner position of the fusion parking space and a first corner position of the visual parking space;
S22, calculating the updating weight of the corner points of the matched parking spaces by adopting the first confidence coefficient and the second confidence coefficient of the corner points of the fused parking spaces;
S23, calculating a second corner position of the fusion parking space by adopting the updated weight, the current corner position and the first corner position.
In specific implementation, the update weight of the fusion parking space can be calculated according to the following formula:
w = f_t / (f_t + f_new)
and w is the updating weight of the fused parking space angular points.
It should be noted that, when the fusion parking space is not matched with the visual parking space, f_new is taken as 0 and the update weight is 1.
Further, according to the updated weight, the position of the second angular point of the fusion parking space can be calculated based on the following formula:
p_t = p_{t-1} * w + p_new * (1 - w)
where p_t is the second corner position of the fusion parking space corner point, p_new is the first corner position of the visual parking space corner point, and p_{t-1} is the current corner position of the fusion parking space corner point.
It should be noted that, when the fusion parking space is not matched with the visual parking space, p_new is taken as 0; since w = 1 in that case, the corner position remains p_{t-1}.
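Combining the weight and position formulas above, a sketch of the corner-position update could look as follows, assuming 2-D corner coordinates and the confidence values computed in the previous step.

```python
def update_corner(p_prev, f_t, p_new=None, f_new=0.0):
    """p_t = p_{t-1} * w + p_new * (1 - w), with w = f_t / (f_t + f_new).
    When there is no visual match, f_new = 0, so w = 1 and the corner is kept."""
    w = f_t / (f_t + f_new) if (f_t + f_new) > 0 else 1.0
    if p_new is None:
        return p_prev
    return (p_prev[0] * w + p_new[0] * (1 - w),
            p_prev[1] * w + p_new[1] * (1 - w))
```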
Step 206, outputting the fusion parking space and the second corner point positions of the fusion parking space when the distance between the vehicle and the fusion parking space meets a preset distance condition and the first confidence coefficient of each corner point of the fusion parking space is greater than a preset threshold value.
In the embodiment of the present invention, a confidence threshold may be preset, and when the distance between the vehicle and the fusion parking space meets the preset distance condition, if the first confidence of the fusion parking space in the parking space tracking list is greater than the confidence threshold, it represents that the fusion parking space can more accurately express the specific position of the parking space in the actual environment, and at this time, the fusion parking space may be output.
A visual parking space is acquired; the visual parking space is matched with a preset parking space tracking list; the fusion parking space in the parking space tracking list is updated according to the matching result to obtain a first confidence coefficient of the fusion parking space; and the fusion parking space is output when the first confidence coefficient is greater than the preset threshold value. The reliability of the fusion parking space is thereby improved, and accurate parking space detection is achieved.
Referring to fig. 5, fig. 5 is a block diagram of a parking space detection device according to an embodiment of the present invention.
The embodiment of the invention provides a parking space detection device, which comprises:
a visual parking space acquisition module 501, configured to acquire a visual parking space;
a matching module 502, configured to match a visual parking space with a preset parking space tracking list;
a first confidence updating module 503, configured to update the fusion parking spaces in the parking space tracking list according to the matching result, so as to obtain a first confidence of the fusion parking spaces;
and the fusion parking space output module 504 is configured to output the fusion parking space when the distance between the vehicle and the fusion parking space meets the preset distance condition and the first confidence degree is greater than the preset threshold value.
In the embodiment of the present invention, the visual parking space obtaining module 501 includes:
the system comprises an initial vision parking space detection submodule and a parking space detection submodule, wherein the initial vision parking space detection submodule is used for detecting an initial vision parking space through a preset parking space detection model;
the size and shape determining submodule is used for determining the size and shape of the initial visual parking space;
and the visual parking space determining submodule is used for acquiring an initial visual parking space with the size and the shape meeting the preset parking space standard as the visual parking space.
In the embodiment of the present invention, the method further includes:
the second confidence coefficient determining module is used for determining the sending stage of the visual parking space and acquiring a second confidence coefficient of each angular point of the visual parking space;
the confidence coefficient attenuation rate determining module is used for acquiring the confidence coefficient attenuation rate corresponding to the sending stage;
the current confidence coefficient determining module is used for acquiring the current confidence coefficient of each angular point of the fusion parking place;
a first confidence update module 503 comprising:
the first matching confidence coefficient calculation submodule is used for calculating a first matching confidence coefficient of each angular point of the matched parking space by adopting a second confidence coefficient, a confidence coefficient attenuation rate and a current confidence coefficient when the matching result is that the matched parking space and the unmatched parking space exist in the parking space tracking list;
and the first unmatched confidence coefficient calculation submodule is used for calculating the first unmatched confidence coefficient of each corner point of the unmatched parking space by adopting the second confidence coefficient and the confidence coefficient attenuation rate.
In this embodiment of the present invention, the first confidence updating module 503 further includes:
and the second unmatched confidence coefficient calculation submodule is used for calculating a second unmatched confidence coefficient of each corner point of the unmatched parking space by adopting the second confidence coefficient and the confidence coefficient attenuation rate when the matching result is that only the unmatched parking spaces exist in the parking space tracking list.
In the embodiment of the present invention, the method further includes:
the current angular point position and first angular point position acquisition submodule is used for acquiring a current angular point position of a fusion parking space and a first angular point position of a visual parking space;
the updating weight calculation submodule is used for calculating the updating weight of the angular point of the matched parking space by adopting the first confidence coefficient and the second confidence coefficient of the angular point of the fused parking space;
and the second angular point position calculation submodule is used for calculating a second angular point position of the fusion parking space by adopting the updated weight, the current angular point position and the first angular point position.
In the embodiment of the present invention, the fusion parking space output module 504 includes:
and the fusion parking space output submodule is used for outputting the second angular point positions of the fusion parking space and the fusion parking space when the distance between the vehicle and the fusion parking space meets the preset distance condition and the first confidence coefficient of each angular point of the fusion parking space is greater than the preset threshold value.
In the embodiment of the present invention, the method further includes:
and the adding module is used for adding the visual parking spaces to the parking space tracking list when the matching result is that only unmatched parking spaces exist in the parking space tracking list.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (12)
1. A parking space detection method is characterized by comprising the following steps:
acquiring a visual parking space;
matching the visual parking space with a preset parking space tracking list;
updating the fusion parking spaces in the parking space tracking list according to the matching result to obtain a first confidence coefficient of the fusion parking spaces;
and when the distance between the vehicle and the fusion parking space meets a preset distance condition and the first confidence coefficient is greater than a preset threshold value, outputting the fusion parking space.
2. The method of claim 1, wherein the step of obtaining a visual space comprises:
detecting an initial visual parking space through a preset parking space detection model;
determining the size and the shape of the initial visual parking space;
and acquiring the initial visual parking space with the size and the shape meeting the preset parking space standard as the visual parking space.
3. The method of claim 1, wherein the first confidence level comprises a first match confidence level and a first no match confidence level; before the step of updating the fusion parking space in the parking space tracking list according to the matching result to obtain the first confidence of the fusion parking space, the method further includes:
determining a sending stage of the visual parking space, and acquiring a second confidence coefficient of each angular point of the visual parking space;
obtaining a confidence coefficient attenuation rate corresponding to the sending stage;
acquiring the current confidence of each angular point of the fusion parking space;
the step of updating the fusion parking space in the parking space tracking list according to the matching result to obtain a first confidence coefficient of the fusion parking space includes:
when the matching result is that the matching parking spaces and the unmatched parking spaces exist in the parking space tracking list, calculating a first matching confidence coefficient of each angular point of the matching parking spaces by adopting the second confidence coefficient, the confidence coefficient attenuation rate and the current confidence coefficient;
and calculating a first unmatched confidence coefficient of each corner point of the unmatched parking space by adopting the second confidence coefficient and the confidence coefficient attenuation rate.
4. The method of claim 3, wherein the first confidence level further comprises a second unmatched confidence level; the step of updating the fusion parking space in the parking space tracking list according to the matching result to obtain a first confidence coefficient of the fusion parking space further includes:
and when the matching result is that only unmatched parking spaces exist in the parking space tracking list, calculating a second unmatched confidence coefficient of each corner point of the unmatched parking spaces by adopting the second confidence coefficient and the confidence coefficient attenuation rate.
5. The method according to claim 4, wherein after the step of updating the merged parking space in the parking space tracking list according to the matching result to obtain the first confidence of the merged parking space, the method further comprises:
acquiring the current angular point position of the fusion parking space and the first angular point position of the visual parking space;
calculating the updating weight of the corner points of the matched parking spaces by adopting the first confidence coefficient and the second confidence coefficient of the corner points of the fusion parking spaces;
and calculating a second angular point position of the fusion parking space by adopting the updated weight, the current angular point position and the first angular point position.
6. The method according to claim 5, wherein the step of outputting the fusion parking space when the distance between the vehicle and the fusion parking space meets a preset distance condition and the first confidence degree is greater than a preset threshold value comprises:
and when the distance between the vehicle and the fusion parking space meets a preset distance condition and the first confidence coefficient of each corner point of the fusion parking space is greater than a preset threshold value, outputting the fusion parking space and the second corner point position of the fusion parking space.
7. The method of claim 3, further comprising:
and when the matching result is that only unmatched parking spaces exist in the parking space tracking list, adding the visual parking spaces to the parking space tracking list.
8. A parking space detection device, characterized by comprising:
the visual parking space acquisition module is used for acquiring a visual parking space;
the matching module is used for matching the visual parking space with a preset parking space tracking list;
the first confidence coefficient updating module is used for updating the fusion parking spaces in the parking space tracking list according to the matching result to obtain a first confidence coefficient of the fusion parking spaces;
and the fusion parking space output module is used for outputting the fusion parking space when the distance between the vehicle and the fusion parking space meets the preset distance condition and the first confidence coefficient is greater than the preset threshold value.
9. The apparatus of claim 8, wherein the first confidence level comprises a first match confidence level and a first no match confidence level; the device further comprises:
the second confidence coefficient determining module is used for determining the sending stage of the visual parking space and acquiring a second confidence coefficient of each angular point of the visual parking space;
the confidence coefficient attenuation rate determining module is used for acquiring the confidence coefficient attenuation rate corresponding to the sending stage;
the current confidence coefficient determining module is used for acquiring the current confidence coefficient of each angular point of the fusion parking space;
the first confidence update module comprising:
a first matching confidence coefficient calculation submodule, configured to calculate a first matching confidence coefficient of each corner point of the matching parking space by using the second confidence coefficient, the confidence coefficient attenuation rate, and the current confidence coefficient when the matching result indicates that there are matching parking spaces and non-matching parking spaces in the parking space tracking list;
and the first unmatched confidence coefficient calculation submodule is used for calculating the first unmatched confidence coefficient of each corner point of the unmatched parking space by adopting the second confidence coefficient and the confidence coefficient attenuation rate.
10. The apparatus of claim 9, wherein the first confidence level further comprises a second unmatched confidence level; the first confidence level updating module further comprises:
and the second unmatched confidence coefficient calculation submodule is used for calculating a second unmatched confidence coefficient of each corner point of the unmatched parking space by adopting the second confidence coefficient and the confidence coefficient attenuation rate when the matching result is that only the unmatched parking spaces exist in the parking space tracking list.
11. An electronic device, comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the parking space detection method according to any one of claims 1 to 7 according to instructions in the program code.
12. A computer-readable storage medium for storing program code for performing the parking space detection method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011495266.XA CN112560689B (en) | 2020-12-17 | 2020-12-17 | Parking space detection method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011495266.XA CN112560689B (en) | 2020-12-17 | 2020-12-17 | Parking space detection method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112560689A true CN112560689A (en) | 2021-03-26 |
CN112560689B CN112560689B (en) | 2024-04-19 |
Family
ID=75062986
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011495266.XA Active CN112560689B (en) | 2020-12-17 | 2020-12-17 | Parking space detection method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112560689B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109740521A (en) * | 2018-12-29 | 2019-05-10 | 百度在线网络技术(北京)有限公司 | The parking stall location determining method and device of automatic parking, electronic equipment and computer-readable medium |
WO2020238284A1 (en) * | 2019-05-29 | 2020-12-03 | 北京市商汤科技开发有限公司 | Parking space detection method and apparatus, and electronic device |
CN110775052A (en) * | 2019-08-29 | 2020-02-11 | 浙江零跑科技有限公司 | Automatic parking method based on fusion of vision and ultrasonic perception |
CN110969655A (en) * | 2019-10-24 | 2020-04-07 | 百度在线网络技术(北京)有限公司 | Method, device, equipment, storage medium and vehicle for detecting parking space |
CN110861639A (en) * | 2019-11-28 | 2020-03-06 | 安徽江淮汽车集团股份有限公司 | Parking information fusion method and device, electronic equipment and storage medium |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113253299A (en) * | 2021-06-09 | 2021-08-13 | 深圳市速腾聚创科技有限公司 | Obstacle detection method, obstacle detection device and storage medium |
CN113253299B (en) * | 2021-06-09 | 2022-02-01 | 深圳市速腾聚创科技有限公司 | Obstacle detection method, obstacle detection device and storage medium |
US11624831B2 (en) | 2021-06-09 | 2023-04-11 | Suteng Innovation Technology Co., Ltd. | Obstacle detection method and apparatus and storage medium |
US11927672B2 (en) | 2021-06-09 | 2024-03-12 | Suteng Innovation Technology Co., Ltd. | Obstacle detection method and apparatus and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112560689B (en) | 2024-04-19 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |