CN112633152B - Parking space detection method and device, computer equipment and storage medium

Info

Publication number: CN112633152B
Application number: CN202011528438.9A
Authority: CN (China)
Prior art keywords: parking space, angular points, target, radar, visual
Other languages: Chinese (zh)
Other versions: CN112633152A
Inventors: 胡子豪, 刘国清, 敖争光
Current Assignee: Shenzhen Youjia Innovation Technology Co.,Ltd.
Original Assignee: Shenzhen Minieye Innovation Technology Co Ltd
Application filed by Shenzhen Minieye Innovation Technology Co Ltd, with priority to CN202011528438.9A; published as CN112633152A and granted as CN112633152B.
Legal status: Active (granted)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques

Abstract

The application relates to a parking space detection method and device, computer equipment and a storage medium. The method comprises the following steps: acquiring ultrasonic radar data and a visual overhead image; performing radar data processing on the ultrasonic radar data to obtain spatial parking space angular points, and performing image recognition processing on the visual overhead image to obtain visual parking space angular points; selecting a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space angular points; fusing the spatial parking space angular points and the visual parking space angular points according to the target fusion strategy to obtain target parking space angular points; and generating a target parking space according to the target parking space angular points. By adopting the method, the parking space detection accuracy can be improved.

Description

Parking space detection method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of parking space detection technologies, and in particular, to a parking space detection method, apparatus, computer device, and storage medium.
Background
With the development of computer technology, parking space detection technology has emerged. Parking space detection automatically detects a parking space when a vehicle is parking and provides the driver with auxiliary functions such as parking guidance or direction control. In the conventional technology, a parking space is usually detected by a purely ultrasonic-based or a purely vision-based parking space detection method. However, both methods are limited in the scenarios they apply to and achieve low parking space detection accuracy.
Disclosure of Invention
Therefore, it is necessary to provide a parking space detection method, a parking space detection device, a computer device, and a storage medium, which can improve the parking space detection accuracy.
A parking space detection method, the method comprising:
acquiring ultrasonic radar data and a visual overhead image;
performing radar data processing on the ultrasonic radar data to obtain spatial parking space angular points, and performing image recognition processing on the visual overhead image to obtain visual parking space angular points;
selecting a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space angular points;
according to the target fusion strategy, carrying out fusion processing on the spatial parking space angular points and the visual parking space angular points to obtain target parking space angular points;
and generating a target parking space according to the target parking space angular point.
In one embodiment, the step of acquiring the visual overhead image includes:
acquiring the surround-view fisheye images captured by the fisheye cameras around the vehicle;
and performing distortion removal processing, stitching processing and perspective transformation processing on each surround-view fisheye image to obtain the corresponding visual overhead image.
In one embodiment, the performing radar data processing on the ultrasonic radar data to obtain a spatial parking space corner point includes:
converting the ultrasonic radar data into radar coordinates relative to the center of the vehicle according to the installation parameters of the ultrasonic radar;
when the radar abscissa value of the radar coordinate is smaller than a first threshold value, obtaining the current global position coordinate of the vehicle, and storing the current global position coordinate and the corresponding current radar coordinate into a first sliding window;
determining a first mean value of the radar abscissa values within the first sliding window and a first deviation value between each radar abscissa value and the first mean value;
when a first deviation value between the radar abscissa value and the first mean value is larger than a second threshold value, searching the radar abscissa value from back to front in the first sliding window according to the time sequence, completing the search when the first deviation value between the first target radar abscissa value and the first mean value is smaller than a third threshold value, and recording a first radar coordinate and a first global position coordinate corresponding to the first target radar abscissa value;
determining a second average value of radar abscissa values corresponding to the ultrasonic radar data;
when a second deviation value between the radar abscissa value and the second average value is larger than a second threshold value, acquiring the current global position coordinate of the vehicle, and storing the current global position coordinate and the corresponding current radar coordinate into a second sliding window;
determining a third mean value of the radar abscissa values in the second sliding window;
when a third deviation value between the radar abscissa value and the third mean value is smaller than a fourth threshold value, searching radar abscissa values from front to back in the second sliding window according to the time sequence, completing searching when a third deviation value between a second target radar abscissa value and the third mean value is smaller than the third threshold value, and recording a second radar coordinate and a second global position coordinate corresponding to the second target radar abscissa value;
and generating a space parking space angular point according to the first radar coordinate, the first global position coordinate, the second average value, the second radar coordinate and the second global position coordinate.
In one embodiment, selecting a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space corner points includes:
when the number of the visual parking space angular points is at least three, selecting a corresponding target fusion strategy from preset candidate fusion strategies;
according to the target fusion strategy, the spatial parking space angular points and the visual parking space angular points are fused to obtain target parking space angular points, and the method comprises the following steps:
dividing the parking space image formed by the visual parking space angular points into cells of a preset size; the attributes of the cells comprise an occupation attribute and an empty attribute;
converting the radar coordinates corresponding to the spatial parking space angular points into pixel point coordinates on the parking space image;
when the attribute of the cell in which a pixel point coordinate falls is the empty attribute, modifying the attribute of the cell to the occupation attribute;
when the ratio of the cells with the occupation attribute to all the cells in the parking space image is smaller than a fifth threshold value, searching all the cells in the parking space image in ascending order of the virtual vertical coordinates of the cells to obtain the cell area of the maximum continuous string formed by cells with the empty attribute;
and generating a target parking space angular point according to the cell area.
In one embodiment, the generating a target parking space angle point according to the cell region includes:
converting the virtual coordinates of each cell in the cell area into real coordinates relative to the center of the vehicle; the converted cell size is the real size;
and generating a target parking space angular point according to the real coordinates and the real sizes of the cells and the number of the cells in the cell area.
In one embodiment, selecting a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space corner points includes:
when the number of the visual parking space angular points is two and the line segment formed by the two visual parking space angular points is perpendicular to the vehicle driving direction, selecting a corresponding target fusion strategy from the preset candidate fusion strategies;
according to the target fusion strategy, the spatial parking space angular points and the visual parking space angular points are fused to obtain target parking space angular points, and the method comprises the following steps:
screening two target space parking space angular points from the space parking space angular points; a line segment formed by the parking space angular points of the two target spaces is parallel to the driving direction of the vehicle;
combining the two visual parking space angular points with the two target space parking space angular points to obtain four candidate parking space angular points;
and when the width and the height of a candidate parking space formed by the four candidate parking space angular points are larger than the preset width and height, taking the candidate parking space angular points as target parking space angular points.
In one embodiment, selecting a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space corner points includes:
when the number of the visual parking space angular points is one or zero, selecting a corresponding target fusion strategy from preset candidate fusion strategies;
according to the target fusion strategy, the spatial parking space angular points and the visual parking space angular points are fused to obtain target parking space angular points, and the method comprises the following steps:
and taking the space parking space angular point as a target parking space angular point.
A parking space detection apparatus, the apparatus comprising:
the acquisition module is used for acquiring ultrasonic radar data and a visual overhead image;
the processing module is used for performing radar data processing on the ultrasonic radar data to obtain spatial parking space angular points, and performing image recognition processing on the visual overhead image to obtain visual parking space angular points;
the selection module is used for selecting a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space angular points;
the fusion module is used for fusing the spatial parking space angular points and the visual parking space angular points according to the target fusion strategy to obtain target parking space angular points;
and the generating module is used for generating a target parking space according to the target parking space angular point.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring ultrasonic radar data and a visual overhead image;
performing radar data processing on the ultrasonic radar data to obtain spatial parking space angular points, and performing image recognition processing on the visual overhead image to obtain visual parking space angular points;
selecting a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space angular points;
according to the target fusion strategy, carrying out fusion processing on the spatial parking space angular points and the visual parking space angular points to obtain target parking space angular points;
and generating a target parking space according to the target parking space angular point.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring ultrasonic radar data and a visual overhead image;
performing radar data processing on the ultrasonic radar data to obtain spatial parking space angular points, and performing image recognition processing on the visual overhead image to obtain visual parking space angular points;
selecting a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space angular points;
according to the target fusion strategy, carrying out fusion processing on the spatial parking space angular points and the visual parking space angular points to obtain target parking space angular points;
and generating a target parking space according to the target parking space angular point.
According to the parking space detection method and device, the computer equipment and the storage medium, ultrasonic radar data and a visual overhead image are acquired; radar data processing is performed on the ultrasonic radar data to obtain spatial parking space angular points, and image recognition processing is performed on the visual overhead image to obtain visual parking space angular points; a corresponding target fusion strategy is selected from preset candidate fusion strategies according to the number of the visual parking space angular points; the spatial parking space angular points and the visual parking space angular points are fused according to the target fusion strategy to obtain target parking space angular points; and a target parking space is generated according to the target parking space angular points. In this way, a target fusion strategy matched to the number of visual parking space angular points is adopted to fuse the spatial and visual parking space angular points for parking space detection. Compared with a traditional parking space detection method based on ultrasonic waves alone or on vision alone, this method applies to a wider range of scenarios and improves the parking space detection accuracy.
Drawings
Fig. 1 is a view of an application scenario of a parking space detection method in an embodiment;
FIG. 2 is a schematic flowchart illustrating a parking space detection method according to an embodiment;
FIG. 3 is a schematic diagram of an embodiment of a parking space detection process;
FIG. 4 is a schematic diagram of a parking space detection process according to another embodiment;
fig. 5 is a block diagram showing the structure of a parking space detection apparatus according to an embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The parking space detection method provided by the application can be applied to the application environment shown in fig. 1. The application environment includes an ultrasonic radar 102, a camera 104 and a parking space detection terminal 106. The ultrasonic radar 102 communicates with the parking space detection terminal 106 through a network, and the camera 104 communicates with the parking space detection terminal 106 through the network. Those skilled in the art will understand that the application environment shown in fig. 1 is only a part of the scenario related to the present application, and does not constitute a limitation to the application environment of the present application.
The parking space detection terminal 106 acquires the ultrasonic radar data from the ultrasonic radar 102 and the visual overhead image from the camera 104. The parking space detection terminal 106 performs radar data processing on the ultrasonic radar data to obtain a spatial parking space angular point, and performs image recognition processing on the visual overlook image to obtain a visual parking space angular point. And the parking space detection terminal 106 selects a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space angular points, and performs fusion processing on the spatial parking space angular points and the visual parking space angular points according to the target fusion strategy to obtain target parking space angular points. And the parking space detection terminal 106 generates a target parking space according to the target parking space angular point.
In an embodiment, as shown in fig. 2, a parking space detection method is provided. Taking its application to the parking space detection terminal 106 in fig. 1 as an example, the method includes the following steps:
s202, ultrasonic radar data and visual overlook images are obtained.
Specifically, ultrasonic radars are installed on two sides of a front bumper of the vehicle, and cameras are installed around the vehicle. In the vehicle driving process, the ultrasonic radar can scan obstacles around the vehicle and acquire corresponding ultrasonic radar data, and meanwhile, the camera can acquire images around the vehicle. Furthermore, the parking space detection terminal can acquire ultrasonic radar data and visual overlook images.
And S204, performing radar data processing on the ultrasonic radar data to obtain a space parking space angular point, and performing image recognition processing on the visual overlook image to obtain a visual parking space angular point.
Specifically, after the parking space detection terminal acquires the ultrasonic radar data and the visual overlook image, radar data processing can be performed on the ultrasonic radar data to obtain a spatial parking space angular point, meanwhile, the parking space detection terminal can input the visual overlook image to an image recognition model to perform image recognition processing, and the visual parking space angular point is obtained through output.
S206, a corresponding target fusion strategy is selected from the preset candidate fusion strategies according to the number of the visual parking space corner points.
Specifically, the number of visual parking space angular points obtained may differ because the clarity and the degree of occlusion of the boundary lines vary from parking space to parking space. For example, the number of visual parking space angular points may be four, three, two, one or zero. Candidate fusion strategies corresponding to each possible number of visual parking space angular points can be stored in the parking space detection terminal. The parking space detection terminal can select the corresponding target fusion strategy from the preset candidate fusion strategies according to the number of visual parking space angular points obtained.
S208, the spatial parking space angular points and the visual parking space angular points are fused according to the target fusion strategy to obtain target parking space angular points.
Specifically, the parking space detection terminal can perform fusion processing on the spatial parking space angular points and the visual parking space angular points according to a target fusion strategy to obtain target parking space angular points. It can be understood that the number of the target parking space angular points is four.
S210, a target parking space is generated according to the target parking space angular points.
Specifically, four parking space boundary lines can be determined by the four target parking space angular points, and the parking space detection terminal can generate a target parking space according to the four parking space boundary lines determined by the target parking space angular points.
In this parking space detection method, ultrasonic radar data and a visual overhead image are acquired; radar data processing is performed on the ultrasonic radar data to obtain spatial parking space angular points, and image recognition processing is performed on the visual overhead image to obtain visual parking space angular points; a corresponding target fusion strategy is selected from preset candidate fusion strategies according to the number of the visual parking space angular points; the spatial parking space angular points and the visual parking space angular points are fused according to the target fusion strategy to obtain target parking space angular points; and a target parking space is generated according to the target parking space angular points. In this way, a target fusion strategy matched to the number of visual parking space angular points is adopted to fuse the spatial and visual parking space angular points for parking space detection. Compared with a traditional parking space detection method based on ultrasonic waves alone or on vision alone, this method applies to a wider range of scenarios and improves the parking space detection accuracy.
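To make the strategy selection concrete, the following is a minimal sketch, in Python, of how the dispatch of S206 on the number of visual parking space angular points could look. The function name and strategy labels are illustrative assumptions, not identifiers from the patent.

```python
# A minimal sketch of the S206 dispatch, assuming three candidate
# strategies keyed on the visual corner count. Names are illustrative.
def select_fusion_strategy(visual_corners):
    """Return a fusion strategy label based on how many visual parking
    space corner points the image recognition step produced."""
    n = len(visual_corners)
    if n >= 3:
        return "grid_fusion"   # correct the visual slot with radar points
    if n == 2:
        # the patent additionally checks that the segment between the two
        # corners is perpendicular to the driving direction
        return "pair_fusion"   # combine with two spatial corner points
    return "radar_only"        # 0 or 1 corners: use the spatial corners

# Example: two detected visual corners select the pairing strategy.
assert select_fusion_strategy([(1.0, 2.0), (1.0, 7.5)]) == "pair_fusion"
```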
In one embodiment, the step of acquiring the visual overhead image specifically includes: acquiring the surround-view fisheye images captured by the fisheye cameras around the vehicle; and performing distortion removal processing, stitching processing and perspective transformation processing on each surround-view fisheye image to obtain the corresponding visual overhead image.
Specifically, several fisheye cameras can be installed around the vehicle to capture the surround-view fisheye images. The parking space detection terminal can acquire the surround-view fisheye images from the fisheye cameras around the vehicle, and perform distortion removal processing, stitching processing and perspective transformation processing on each image to obtain the corresponding visual overhead image.
In one embodiment, four fisheye cameras may be mounted around the vehicle, at the front, rear, left and right of the vehicle respectively. The parking space detection terminal can acquire the surround-view fisheye images captured by these four cameras.
In this embodiment, the distortion removal, stitching and perspective transformation make the resulting visual overhead image clearer, which further improves the parking space detection accuracy.
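As a rough illustration of this pipeline, the sketch below shows one way to undistort a fisheye frame, warp it into a top-down view, and paste the per-camera patches into a single canvas with OpenCV. The calibration matrices K and D, the ground-plane point correspondences and the patch offsets are placeholders that would come from an offline calibration step; none of their values are given in the patent.

```python
# A sketch of the undistort -> perspective-transform -> stitch steps for
# building the visual overhead image. Calibration inputs are assumptions.
import cv2
import numpy as np

def fisheye_to_topdown(frame, K, D, src_pts, dst_pts, out_size=(400, 600)):
    # Remove fisheye distortion (K: 3x3 intrinsics, D: 4x1 coefficients).
    undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
    # Map four known ground points (float32 arrays of shape (4, 2)) to
    # their bird's-eye-view positions.
    M = cv2.getPerspectiveTransform(src_pts, dst_pts)
    return cv2.warpPerspective(undistorted, M, out_size)

def stitch(patches, offsets, canvas_hw=(800, 800)):
    # Paste each camera's top-down patch into one canvas. A real system
    # would blend the overlaps; simple overwrite is enough for a sketch.
    canvas = np.zeros((*canvas_hw, 3), dtype=np.uint8)
    for patch, (x, y) in zip(patches, offsets):
        h, w = patch.shape[:2]
        canvas[y:y + h, x:x + w] = patch
    return canvas
```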
In an embodiment, step S204, that is, the step of performing radar data processing on the ultrasonic radar data to obtain the spatial parking space corner point includes: converting the ultrasonic radar data into radar coordinates relative to the center of the vehicle according to the installation parameters of the ultrasonic radar; when the radar abscissa value of the radar coordinate is smaller than a first threshold value, obtaining the current global position coordinate of the vehicle, and storing the current global position coordinate and the corresponding current radar coordinate into a first sliding window; determining a first average value of radar abscissa values in a first sliding window and a first deviation value between each radar abscissa value and the first average value; when a first deviation value between the radar abscissa value and the first mean value is larger than a second threshold value, searching the radar abscissa value from back to front in a first sliding window according to the time sequence, completing searching when the first deviation value between the first target radar abscissa value and the first mean value is smaller than a third threshold value, and recording a first radar coordinate and a first global position coordinate corresponding to the first target radar abscissa value; determining a second average value of radar abscissa values corresponding to the ultrasonic radar data; when a second deviation value between the radar abscissa value and the second average value is larger than a second threshold value, acquiring the current global position coordinate of the vehicle, and storing the current global position coordinate and the corresponding current radar coordinate into a second sliding window; determining a third mean value of the radar abscissa values in the second sliding window; when a third deviation value between the radar abscissa value and a third mean value is smaller than a fourth threshold value, searching the radar abscissa value from front to back in a second sliding window according to the time sequence, completing the search when the third deviation value between the second target radar abscissa value and the third mean value is smaller than the third threshold value, and recording a second radar coordinate and a second global position coordinate corresponding to the second target radar abscissa value; and generating a spatial parking space angular point according to the first radar coordinate, the first global position coordinate, the second mean value, the second radar coordinate and the second global position coordinate.
Specifically, a coordinate system is established with the vehicle centre as the origin, the driving direction of the vehicle as the x axis (the ordinate here) and the direction from the vehicle toward the obstacle as the y axis (the abscissa). The installation parameters of the ultrasonic radar may include x_offset, y_offset and the angle alpha between the ultrasonic radar and the horizontal direction. The parking space detection terminal can convert an ultrasonic radar range reading range into radar coordinates (Data_x, Data_y) relative to the vehicle centre: Data_x = range*cos(alpha) + x_offset and Data_y = range*sin(alpha) + y_offset. When the radar abscissa value Data_y is smaller than the first threshold value, the vehicle is close to the first boundary vehicle; at this moment, the parking space detection terminal can obtain the current global position coordinate (position.x, position.y) of the vehicle, where the global position coordinate is relative to the world coordinate system. The parking space detection terminal can store the current global position coordinate and the corresponding current radar coordinate in the first sliding window, and determine the first mean value d1 of the radar abscissa values in the first sliding window and the first deviation value between each radar abscissa value and the first mean value. When the first deviation value between a radar abscissa value and the first mean value is larger than the second threshold value, the vehicle has just driven past the first boundary vehicle and is now beside the empty parking space. At this moment, the parking space detection terminal can search the radar abscissa values in the first sliding window from back to front in time order, finish the search when the first deviation value between a first target radar abscissa value and the first mean value is smaller than the third threshold value, and record the first radar coordinate (Data_x1, Data_y1) and the first global position coordinate (init_position.x, init_position.y) corresponding to the first target radar abscissa value. The parking space detection terminal can determine the second mean value d2 of the radar abscissa values corresponding to the ultrasonic radar data. When the second deviation value between a radar abscissa value and the second mean value is larger than the second threshold value, the vehicle has just reached the second boundary vehicle, that is, it has just driven past the empty parking space. At this moment, the parking space detection terminal can obtain the current global position coordinate of the vehicle and store it, together with the corresponding current radar coordinate, in the second sliding window. The parking space detection terminal can then determine the third mean value d3 of the radar abscissa values in the second sliding window.
When the third deviation value between a radar abscissa value and the third mean value is smaller than the fourth threshold value, the vehicle is still beside the second boundary vehicle but about to drive past it. At this moment, the parking space detection terminal can search the radar abscissa values in the second sliding window from front to back in time order, finish the search when the third deviation value between a second target radar abscissa value and the third mean value is smaller than the third threshold value, and record the second radar coordinate (Data_x2, Data_y2) and the second global position coordinate (end_position.x, end_position.y) corresponding to the second target radar abscissa value. The parking space detection terminal may then generate the spatial parking space angular points A, B, C and D from the first radar coordinate, the first global position coordinate, the second mean value, the second radar coordinate and the second global position coordinate. It can be understood that the parking space detection terminal may directly take the second mean value d2 as the depth of the empty parking space. The spatial parking space angular points A, B, C and D can be expressed as follows: A: (init_position.x + Data_x1, init_position.y + Data_y1); B: (init_position.x + Data_x1, init_position.y + Data_y1 + d2); C: (end_position.x + Data_x2, end_position.y + Data_y2 + d2); D: (end_position.x + Data_x2, end_position.y + Data_y2).
In the embodiment, the radar coordinate and the corresponding vehicle global position coordinate of the first boundary vehicle closest to the empty parking space, and the radar coordinate and the corresponding vehicle global position coordinate of the second boundary vehicle closest to the empty parking space are respectively and accurately searched through the two sliding windows, so that the spatial parking space angular point is obtained, and the parking space detection accuracy is further improved.
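The first-window half of this search can be sketched as follows; the second window is symmetric, scanning from front to back. The coordinate conversion follows the formulas above, while the threshold names T1, T2, T3 and the window length are assumed tuning parameters whose values the patent leaves open.

```python
# A condensed sketch of the coordinate conversion and the first
# sliding-window edge search. Thresholds and window length are assumed.
from collections import deque
import math

def to_vehicle_coords(rng, alpha, x_offset, y_offset):
    # Data_x = range*cos(alpha) + x_offset, Data_y = range*sin(alpha) + y_offset
    return rng * math.cos(alpha) + x_offset, rng * math.sin(alpha) + y_offset

def detect_first_corner(samples, T1, T2, T3, window_len=64):
    """samples: (global_pose, (Data_x, Data_y)) tuples in time order.
    Returns the radar coordinate and global pose recorded when the vehicle
    passes the far edge of the first boundary vehicle, else None."""
    window = deque(maxlen=window_len)
    mean_y = None
    for pose, (dx, dy) in samples:
        if dy < T1:                   # close echo: beside the boundary vehicle
            window.append((pose, (dx, dy)))
            ys = [xy[1] for _, xy in window]
            mean_y = sum(ys) / len(ys)          # first mean d1
        if mean_y is not None and abs(dy - mean_y) > T2:
            # Edge passed: scan the window from newest to oldest for the
            # last sample still within T3 of the mean, and record it.
            for p, xy in reversed(window):
                if abs(xy[1] - mean_y) < T3:
                    return xy, p
    return None
```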
In an embodiment, step S206, that is, the step of selecting a corresponding target fusion strategy from the preset candidate fusion strategies according to the number of the visual parking space corner points, specifically includes: when the number of the visual parking space angular points is at least three, selecting a corresponding target fusion strategy from preset candidate fusion strategies. Step S208, that is, fusing the spatial parking space angular points and the visual parking space angular points according to the target fusion strategy to obtain target parking space angular points, specifically includes: dividing the parking space image formed by the visual parking space angular points into cells of a preset size, the attributes of the cells comprising an occupation attribute and an empty attribute; converting the radar coordinates corresponding to the spatial parking space angular points into pixel point coordinates on the parking space image; when the attribute of the cell in which a pixel point coordinate falls is the empty attribute, modifying the attribute of that cell to the occupation attribute; when the ratio of the cells with the occupation attribute to all the cells in the parking space image is smaller than a fifth threshold value, searching all the cells in the parking space image in ascending order of the virtual vertical coordinates of the cells to obtain the cell area of the maximum continuous string formed by cells with the empty attribute; and generating a target parking space angular point according to the cell area.
Specifically, when the number of the visual parking space angular points is at least three, that is, three or four, the visual parking space angular points can form a corresponding parking space image. The parking space detection terminal can divide the parking space image formed by the visual parking space angular points into cells of a preset size, where the attributes of the cells comprise an occupation attribute and an empty attribute. The occupation attribute indicates that the cell is occupied by a boundary vehicle, and the empty attribute indicates that it is not. The parking space detection terminal can convert the radar coordinates corresponding to the spatial parking space angular points into pixel point coordinates on the parking space image. When the attribute of the cell in which a pixel point coordinate falls is the empty attribute, the vision-based judgment of that cell's attribute was wrong; the parking space detection terminal can then modify the attribute of the cell to the occupation attribute and calculate the ratio of the cells with the occupation attribute to all the cells in the parking space image. When this ratio is smaller than the fifth threshold value, the corresponding empty parking space is very likely an available parking space. The parking space detection terminal can then search all the cells in the parking space image in ascending order of the virtual vertical coordinates of the cells to obtain the cell area of the maximum continuous string formed by cells with the empty attribute, and generate the target parking space angular points according to the cell area.
In one embodiment, the parking space detection terminal may generate a candidate parking space angular point according to the cell region, and calculate a size of a candidate empty parking space determined by the candidate parking space angular point. The parking space detection terminal can compare the size of the candidate empty parking space with the size of the reference empty parking space. And when the size of the candidate empty parking space is larger than that of the reference empty parking space, generating a target parking space angular point. And when the size of the candidate empty space is smaller than or equal to the size of the reference empty space, judging that the candidate empty space is unavailable.
In one embodiment, when the number of the visual parking space angular points is four, as shown in fig. 3, the fused target parking space angular points are the points A, B, C and D in fig. 3. When the number of the visual parking space angular points is three, as shown in fig. 4, the fused target parking space angular points are the points A, B, C and D in fig. 4. The small solid black dots in fig. 3 and fig. 4 are the points detected by the ultrasonic radar, i.e. the radar coordinate points.
In the above embodiment, the parking space formed by the visual parking space angular points is corrected with the ultrasonic radar data collected by the ultrasonic radar to obtain the target parking space angular points, which further improves the parking space detection accuracy.
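A minimal sketch of this grid fusion is given below. The cell size, the fifth-threshold ratio and the row-wise reading of the "maximum continuous string" search (here taken as the longest run of fully empty rows, scanned in ascending vertical coordinate) are illustrative assumptions.

```python
# A sketch of the grid fusion for the case of at least three visual
# corners. Parameters and the row-wise search are assumptions.
import numpy as np

def fuse_grid(slot_hw, radar_px, cell=10, occ_ratio_max=0.2):
    rows, cols = slot_hw[0] // cell, slot_hw[1] // cell
    occupied = np.zeros((rows, cols), dtype=bool)  # False = empty attribute
    for u, v in radar_px:                          # integer pixel coords
        r, c = v // cell, u // cell
        if 0 <= r < rows and 0 <= c < cols:
            occupied[r, c] = True                  # empty -> occupation
    if occupied.mean() >= occ_ratio_max:           # fifth-threshold check
        return None                                # slot too cluttered
    best_len = best_start = run = start = 0
    for r in range(rows):                          # vertical coord, ascending
        if not occupied[r].any():                  # row entirely empty
            if run == 0:
                start = r
            run += 1
            if run > best_len:
                best_len, best_start = run, start
        else:
            run = 0
    return best_start, best_len                    # empty cell region (rows)
```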
In one embodiment, the step of generating the target parking space angle point according to the cell area specifically includes: converting the virtual coordinates of each cell in the cell area into real coordinates relative to the center of the vehicle; the converted cell size is the real size; and generating a target parking space angular point according to the real coordinates and the real sizes of the cells and the number of the cells in the cell area.
Specifically, the parking space detection terminal can convert the virtual coordinates of each cell in the cell area into real coordinates relative to the vehicle centre, where the size of a cell before conversion is the preset size and its size after conversion is the real size. The parking space detection terminal can then generate the target parking space angular points according to the real coordinate of the cell found first in the search, the real cell size, and the number of cells in the cell area.
In the embodiment, the target parking space angular point is generated through the real coordinates and the real size of the cell and the number of the cells in the cell area, so that the parking space detection accuracy is further improved.
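Continuing the sketch above, the cell region can be mapped back to metric corner points as follows; origin_xy and metres_per_cell stand in for the real calibration of the overhead view and are assumptions.

```python
# Illustrative virtual-to-real conversion of the empty cell region into
# four target corner points relative to the vehicle centre.
def cells_to_corners(best_start, best_len, cols, origin_xy, metres_per_cell):
    x0, y0 = origin_xy                        # real coords of the grid origin
    top = y0 + best_start * metres_per_cell   # first searched row -> near edge
    bottom = top + best_len * metres_per_cell
    left, right = x0, x0 + cols * metres_per_cell
    return [(left, top), (right, top), (right, bottom), (left, bottom)]
```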
In an embodiment, step S206, that is, the step of selecting a corresponding target fusion strategy from the preset candidate fusion strategies according to the number of the visual parking space corner points, specifically includes: when the number of the visual parking space angular points is two and the line segment formed by the two visual parking space angular points is perpendicular to the vehicle driving direction, selecting a corresponding target fusion strategy from the preset candidate fusion strategies. Step S208, that is, fusing the spatial parking space angular points and the visual parking space angular points according to the target fusion strategy to obtain target parking space angular points, specifically includes: screening two target spatial parking space angular points from the spatial parking space angular points, where the line segment formed by the two target spatial parking space angular points is parallel to the vehicle driving direction; combining the two visual parking space angular points with the two target spatial parking space angular points to obtain four candidate parking space angular points; and when the width and the height of the candidate parking space formed by the four candidate parking space angular points are larger than the preset width and height, taking the candidate parking space angular points as the target parking space angular points.
Specifically, when the number of the visual parking space angular points is two and the line segment formed by the two visual parking space angular points is perpendicular to the vehicle driving direction, the parking space detection terminal can screen two target spatial parking space angular points out of the spatial parking space angular points, where the line segment formed by the two screened target spatial parking space angular points is parallel to the vehicle driving direction. The parking space detection terminal can combine the two visual parking space angular points with the two target spatial parking space angular points to obtain four candidate parking space angular points, and calculate the width and height of the candidate parking space formed by the four candidate parking space angular points. The parking space detection terminal can compare this width and height with the preset width and height. When the width and the height of the candidate parking space are larger than the preset width and height, the candidate parking space angular points are taken as the target parking space angular points. When the width or the height of the candidate parking space is smaller than or equal to the preset width or height, the candidate parking space formed by the candidate parking space angular points is judged to be unavailable.
In this embodiment, the two visual parking space angular points are combined with the two target spatial parking space angular points to obtain four candidate parking space angular points, which gives the parking space detection method a wider application range and higher parking space detection accuracy.
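The two-corner case can be sketched as below, assuming the vehicle-centred frame used earlier (x along the driving direction); min_w and min_h stand for the preset width and height, and all names are illustrative.

```python
# A sketch of the two-corner fusion: two visual corners spanning the slot
# entrance plus two radar corners along the driving direction.
def fuse_two_corners(visual, spatial, min_w, min_h):
    (vx1, vy1), (vx2, vy2) = visual    # segment perpendicular to driving dir
    (sx1, sy1), (sx2, sy2) = spatial   # segment parallel to driving dir
    width = abs(sx1 - sx2)             # extent along the driving direction
    height = abs(vy1 - vy2)            # extent perpendicular to it
    if width > min_w and height > min_h:
        return [visual[0], visual[1], spatial[0], spatial[1]]
    return None                        # candidate slot too small: unavailable
```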
In an embodiment, step S206, that is, the step of selecting a corresponding target fusion strategy from the preset candidate fusion strategies according to the number of the visual parking space corner points, specifically includes: when the number of the visual parking space angular points is one or zero, selecting a corresponding target fusion strategy from preset candidate fusion strategies. Step S208, that is, fusing the spatial parking space angular points and the visual parking space angular points according to the target fusion strategy to obtain target parking space angular points, specifically includes: taking the spatial parking space angular points as the target parking space angular points.
Specifically, when the number of the visual parking space angular points is one or zero, the boundary lines of the empty parking space are all unclear or occluded; in this case the parking space detection terminal can directly take the spatial parking space angular points as the target parking space angular points.
In the above embodiment, when the number of the visual parking space angular points is one or zero, the spatial parking space angular points are directly used as the target parking space angular points, so that an empty parking space can still be detected even when the parking space lines are unclear or occluded, giving the parking space detection method a wider application range.
It should be understood that although the steps of fig. 2 are shown in sequence, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turns or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a parking space detecting apparatus 500 including: an obtaining module 501, a processing module 502, a selecting module 503, a fusing module 504 and a generating module 505, wherein:
an obtaining module 501 is configured to obtain ultrasonic radar data and a visual overhead image.
The processing module 502 is configured to perform radar data processing on the ultrasonic radar data to obtain spatial parking space angular points, and perform image recognition processing on the visual overhead image to obtain visual parking space angular points.
The selecting module 503 is configured to select a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space corner points.
And a fusion module 504, configured to perform fusion processing on the spatial parking space angular points and the visual parking space angular points according to a target fusion policy to obtain target parking space angular points.
And a generating module 505, configured to generate a target parking space according to the target parking space angular point.
In one embodiment, the obtaining module 501 is further configured to obtain the surround-view fisheye images captured by the fisheye cameras around the vehicle, and to perform distortion removal processing, stitching processing and perspective transformation processing on each surround-view fisheye image to obtain the corresponding visual overhead image.
In one embodiment, the processing module 502 is further configured to perform the following steps:
converting the ultrasonic radar data into radar coordinates relative to the center of the vehicle according to the installation parameters of the ultrasonic radar;
when the radar abscissa value of the radar coordinate is smaller than a first threshold value, obtaining the current global position coordinate of the vehicle, and storing the current global position coordinate and the corresponding current radar coordinate into a first sliding window;
determining a first average value of radar abscissa values in a first sliding window and a first deviation value between each radar abscissa value and the first average value;
when a first deviation value between the radar abscissa value and the first mean value is larger than a second threshold value, searching the radar abscissa value from back to front in a first sliding window according to the time sequence, completing searching when the first deviation value between the first target radar abscissa value and the first mean value is smaller than a third threshold value, and recording a first radar coordinate and a first global position coordinate corresponding to the first target radar abscissa value;
determining a second average value of radar abscissa values corresponding to the ultrasonic radar data;
when a second deviation value between the radar abscissa value and the second average value is larger than a second threshold value, acquiring the current global position coordinate of the vehicle, and storing the current global position coordinate and the corresponding current radar coordinate into a second sliding window;
determining a third mean value of the radar abscissa values in the second sliding window;
when a third deviation value between the radar abscissa value and a third mean value is smaller than a fourth threshold value, searching the radar abscissa value from front to back in a second sliding window according to the time sequence, completing the search when the third deviation value between the second target radar abscissa value and the third mean value is smaller than the third threshold value, and recording a second radar coordinate and a second global position coordinate corresponding to the second target radar abscissa value;
and generating a spatial parking space angular point according to the first radar coordinate, the first global position coordinate, the second mean value, the second radar coordinate and the second global position coordinate.
In one embodiment, the selection module 503 is further configured to select a corresponding target fusion strategy from preset candidate fusion strategies when the number of the visual parking space corner points is at least three; divide the parking space image formed by the visual parking space angular points into cells of a preset size, the attributes of the cells comprising an occupation attribute and an empty attribute; convert the radar coordinates corresponding to the spatial parking space angular points into pixel point coordinates on the parking space image; when the attribute of the cell in which a pixel point coordinate falls is the empty attribute, modify the attribute of the cell to the occupation attribute; when the ratio of the cells with the occupation attribute to all the cells in the parking space image is smaller than a fifth threshold value, search all the cells in the parking space image in ascending order of the virtual vertical coordinates of the cells to obtain the cell area of the maximum continuous string formed by cells with the empty attribute; and generate a target parking space angular point according to the cell area.
In one embodiment, the selection module 503 is further configured to convert the virtual coordinates of each cell in the cell area to real coordinates relative to the center of the vehicle; the converted cell size is the real size; and generating a target parking space angular point according to the real coordinates and the real sizes of the cells and the number of the cells in the cell area.
In one embodiment, the selection module 503 is further configured to select a corresponding target fusion strategy from preset candidate fusion strategies when the number of the visual parking space angular points is two and the line segment formed by the two visual parking space angular points is perpendicular to the vehicle driving direction; screen two target spatial parking space angular points from the spatial parking space angular points, where the line segment formed by the two target spatial parking space angular points is parallel to the vehicle driving direction; combine the two visual parking space angular points with the two target spatial parking space angular points to obtain four candidate parking space angular points; and when the width and the height of the candidate parking space formed by the four candidate parking space angular points are larger than the preset width and height, take the candidate parking space angular points as the target parking space angular points.
In one embodiment, the selection module 503 is further configured to select a corresponding target fusion strategy from preset candidate fusion strategies when the number of the visual parking space corner points is one or zero, and to take the spatial parking space angular points as the target parking space angular points.
The parking space detection device acquires ultrasonic radar data and a visual overhead image; performs radar data processing on the ultrasonic radar data to obtain spatial parking space angular points, and performs image recognition processing on the visual overhead image to obtain visual parking space angular points; selects a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space angular points; fuses the spatial parking space angular points and the visual parking space angular points according to the target fusion strategy to obtain target parking space angular points; and generates a target parking space according to the target parking space angular points. In this way, a target fusion strategy matched to the number of visual parking space angular points is adopted to fuse the spatial and visual parking space angular points for parking space detection. Compared with a traditional parking space detection method based on ultrasonic waves alone or on vision alone, the device applies to a wider range of scenarios and improves the parking space detection accuracy.
For specific definition of the parking space detection device, reference may be made to the above definition of the parking space detection method, which is not described herein again. All or part of the modules in the parking space detection device can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, and the computer device may be the parking space detection terminal 106 in fig. 1, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a parking space detection method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring ultrasonic radar data and a visual overhead image;
performing radar data processing on the ultrasonic radar data to obtain spatial parking space angular points, and performing image recognition processing on the visual overhead image to obtain visual parking space angular points;
selecting a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space angular points;
according to a target fusion strategy, carrying out fusion processing on the spatial parking space angular points and the visual parking space angular points to obtain target parking space angular points;
and generating a target parking space according to the target parking space angular points.
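By way of illustration and not limitation, the following Python sketch shows one way the strategy selection above could be organized. All names below (Point, fuse_grid, fuse_two_corners, select_and_fuse) are hypothetical stand-ins, not from the patent; the two placeholder functions correspond to the per-strategy fusion steps detailed in the embodiments that follow.

```python
# Illustrative sketch only: dispatch between candidate fusion strategies
# by the number of visual parking space angular points.
from typing import List, Optional, Tuple

Point = Tuple[float, float]  # (x, y) relative to the vehicle centre

def fuse_grid(visual: List[Point], spatial: List[Point]) -> Optional[List[Point]]:
    ...  # grid-based fusion for three or more visual corners (sketched later)

def fuse_two_corners(visual: List[Point], spatial: List[Point]) -> Optional[List[Point]]:
    ...  # two-corner combination strategy (sketched later)

def select_and_fuse(visual: List[Point], spatial: List[Point]) -> Optional[List[Point]]:
    """Select the target fusion strategy by the visual corner count."""
    if len(visual) >= 3:
        return fuse_grid(visual, spatial)
    if len(visual) == 2:
        return fuse_two_corners(visual, spatial)
    # One or zero visual corners: fall back to the spatial corners alone.
    return spatial
```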
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring the surround-view fisheye images captured by the fisheye cameras around the vehicle;
and performing de-distortion, stitching, and perspective transformation on each surround-view fisheye image to obtain the corresponding visual overlook image.
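By way of illustration and not limitation, a minimal OpenCV sketch of this image pipeline is given below. It assumes each camera's intrinsic matrix K, fisheye distortion coefficients D, and four ground-plane reference point pairs are already calibrated; the stitching of the four de-distorted views into one surround image is omitted.

```python
# Hedged sketch: fisheye de-distortion followed by a perspective (top-view)
# transform. K, D, src_pts and dst_pts are assumed calibration outputs.
import cv2
import numpy as np

def fisheye_to_topview(img, K, D, src_pts, dst_pts, out_size):
    h, w = img.shape[:2]
    # 1. Remove the fisheye distortion using the calibrated K and D.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
    # 2. Map four known image points onto their top-view (ground-plane)
    #    positions; warping with this homography yields the overlook image.
    M = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(undistorted, M, out_size)
```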
In one embodiment, the processor, when executing the computer program, further performs the steps of:
converting the ultrasonic radar data into radar coordinates relative to the center of the vehicle according to the installation parameters of the ultrasonic radar;
when the radar abscissa value of the radar coordinate is smaller than a first threshold value, obtaining the current global position coordinate of the vehicle, and storing the current global position coordinate and the corresponding current radar coordinate into a first sliding window;
determining a first average value of radar abscissa values in a first sliding window and a first deviation value between each radar abscissa value and the first average value;
when a first deviation value between the radar abscissa value and the first mean value is larger than a second threshold value, searching the radar abscissa values from back to front in the first sliding window in time order, completing the search when the first deviation value between a first target radar abscissa value and the first mean value is smaller than a third threshold value, and recording a first radar coordinate and a first global position coordinate corresponding to the first target radar abscissa value;
determining a second average value of radar abscissa values corresponding to the ultrasonic radar data;
when a second deviation value between the radar abscissa value and the second average value is larger than a second threshold value, acquiring the current global position coordinate of the vehicle, and storing the current global position coordinate and the corresponding current radar coordinate into a second sliding window;
determining a third mean value of the radar abscissa values in the second sliding window;
when a third deviation value between the radar abscissa value and a third mean value is smaller than a fourth threshold value, searching the radar abscissa value from front to back in a second sliding window according to the time sequence, completing the search when the third deviation value between the second target radar abscissa value and the third mean value is smaller than the third threshold value, and recording a second radar coordinate and a second global position coordinate corresponding to the second target radar abscissa value;
and generating a spatial parking space angular point according to the first radar coordinate, the first global position coordinate, the second mean value, the second radar coordinate and the second global position coordinate.
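By way of illustration and not limitation, the sketch below shows the near-edge half of this sliding-window search; the far-edge search over the second window mirrors it front to back. The threshold magnitudes, window size, and sample format (radar abscissa plus global pose) are illustrative assumptions.

```python
# Hedged sketch: detect the radar-abscissa "jump" that marks the edge of a
# parked vehicle, then search back for the last stable sample. T1..T3 map
# to the first, second and third threshold values; their values are invented.
from collections import deque
from statistics import fmean

T1, T2, T3 = 1.0, 0.5, 0.1   # metres, illustrative only
WINDOW = 30

def find_near_edge(samples):
    """samples: iterable of (radar_abscissa, global_pose), in time order."""
    window = deque(maxlen=WINDOW)
    for x, pose in samples:
        if x < T1:                       # close echo: alongside a vehicle
            window.append((x, pose))
        if len(window) < 2:
            continue
        mean = fmean(w[0] for w in window)
        if abs(x - mean) > T2:           # sudden jump: passed the vehicle edge
            # Search from back to front for the last sample still on the body.
            for xb, pb in reversed(window):
                if abs(xb - mean) < T3:
                    return xb, pb        # radar coordinate + global position
    return None
```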
In one embodiment, the processor, when executing the computer program, further performs the steps of:
when the number of the visual parking space angular points is at least three, selecting a corresponding target fusion strategy from preset candidate fusion strategies;
dividing the parking space image formed by the visual parking space angular points into cells of a preset size; the attribute of each cell is either occupied or empty;
converting the radar coordinates corresponding to the spatial parking space angular points into pixel point coordinates on the parking space image;
when the attribute of the cell in which the pixel point coordinates fall is empty, modifying the attribute of that cell to occupied;
when the ratio of occupied cells to all cells in the parking space image is smaller than a fifth threshold value, searching all cells in the parking space image in ascending order of their virtual vertical coordinates to obtain the cell area of the largest contiguous run of empty cells;
and generating a target parking space angular point according to the cell area.
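By way of illustration and not limitation, the following sketch condenses this grid fusion into a one-dimensional row-major scan, which visits cells in ascending virtual vertical coordinate; the cell size and occupancy ratio are invented values, and the final corner generation from the free cell area is left to the next step.

```python
# Hedged sketch of the occupancy-grid fusion used when at least three
# visual corners are found. space_pts_px are radar corners already
# projected to pixel coordinates on the parking space image.
import numpy as np

def fuse_grid_cells(space_pts_px, img_h, img_w, cell=20, occ_ratio=0.3):
    rows, cols = img_h // cell, img_w // cell
    occupied = np.zeros((rows, cols), dtype=bool)
    # Mark the cell under every projected spatial corner as occupied.
    for px, py in space_pts_px:
        r = min(int(py) // cell, rows - 1)
        c = min(int(px) // cell, cols - 1)
        occupied[r, c] = True
    if occupied.mean() >= occ_ratio:     # the fifth-threshold check
        return None                      # too cluttered: no free region here
    # Row-major scan == ascending virtual vertical coordinate: find the
    # largest contiguous run of empty cells.
    flat = occupied.ravel()
    best_len = best_start = run = 0
    for i, occ in enumerate(flat):
        run = 0 if occ else run + 1
        if run > best_len:
            best_len, best_start = run, i - run + 1
    return best_start, best_len          # the free cell area
```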
In one embodiment, the processor, when executing the computer program, further performs the steps of:
converting the virtual coordinates of each cell in the cell area into real coordinates relative to the center of the vehicle; the converted cell size is the real size;
and generating a target parking space angular point according to the real coordinates and the real sizes of the cells and the number of the cells in the cell area.
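By way of illustration and not limitation, the virtual-to-real conversion can be sketched as below; the pixels-per-metre scale and the pixel position of the vehicle centre are assumed calibration outputs, and the subsequent corner generation from the converted cells is omitted.

```python
# Hedged sketch: map a cell index back to metres relative to the vehicle
# centre. cell_px, px_per_m and origin_px are hypothetical calibration values.
def cell_to_vehicle(row, col, cell_px=20, px_per_m=100.0, origin_px=(256, 256)):
    u = (col + 0.5) * cell_px            # pixel coords of the cell centre
    v = (row + 0.5) * cell_px
    x = (u - origin_px[0]) / px_per_m    # real abscissa, metres
    y = (v - origin_px[1]) / px_per_m    # real ordinate, metres
    return x, y                          # real cell size is cell_px / px_per_m
```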
In one embodiment, the processor, when executing the computer program, further performs the steps of:
when the number of the visual parking space angular points is two and the line segment formed by the two visual parking space angular points is perpendicular to the driving direction of the vehicle, selecting a corresponding target fusion strategy from preset candidate fusion strategies;
screening two target spatial parking space angular points from the spatial parking space angular points; the line segment formed by the two target spatial parking space angular points is parallel to the driving direction of the vehicle;
combining the two visual parking space angular points with the two target space parking space angular points to obtain four candidate parking space angular points;
and when the width and the height of a candidate parking space formed by the four candidate parking space angular points are larger than the preset width and height, taking the candidate parking space angular points as target parking space angular points.
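By way of illustration and not limitation, the two-corner strategy can be sketched as follows. The heading angle, angular tolerance, and minimum slot dimensions are invented; the patent only requires the screened spatial segment to be parallel to the driving direction and the candidate slot to exceed a preset width and height.

```python
# Hedged sketch: combine two visual entrance corners with two spatial
# corners whose segment runs parallel to the vehicle heading, then check
# the candidate slot size.
import math
from itertools import combinations

MIN_WIDTH, MIN_DEPTH = 2.2, 5.0          # metres, illustrative only

def fuse_two_corners(visual, spatial, heading, ang_tol=math.radians(10)):
    v1, v2 = visual                       # entrance corners (perpendicular line)
    for s1, s2 in combinations(spatial, 2):
        seg = math.atan2(s2[1] - s1[1], s2[0] - s1[0])
        d = (seg - heading) % math.pi     # parallel iff d is near 0 or near pi
        if min(d, math.pi - d) > ang_tol:
            continue                      # segment not parallel to the heading
        width = math.dist(v1, v2)         # entrance width
        depth = math.dist(s1, s2)         # slot depth along the heading
        if width > MIN_WIDTH and depth > MIN_DEPTH:
            return [v1, v2, s1, s2]       # four candidate corner points
    return None
```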
In one embodiment, the processor, when executing the computer program, further performs the steps of:
when the number of the visual parking space angular points is one or zero, selecting a corresponding target fusion strategy from preset candidate fusion strategies;
and taking the spatial parking space angular point as a target parking space angular point.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring ultrasonic radar data and a visual overlook image;
performing radar data processing on the ultrasonic radar data to obtain spatial parking space angular points, and performing image recognition processing on the visual overlook image to obtain visual parking space angular points;
selecting a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space angular points;
according to a target fusion strategy, carrying out fusion processing on the spatial parking space angular points and the visual parking space angular points to obtain target parking space angular points;
and generating a target parking space according to the target parking space angular points.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring the surround-view fisheye images captured by the fisheye cameras around the vehicle;
and performing de-distortion, stitching, and perspective transformation on each surround-view fisheye image to obtain the corresponding visual overlook image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
converting the ultrasonic radar data into radar coordinates relative to the center of the vehicle according to the installation parameters of the ultrasonic radar;
when the radar abscissa value of the radar coordinate is smaller than a first threshold value, obtaining the current global position coordinate of the vehicle, and storing the current global position coordinate and the corresponding current radar coordinate into a first sliding window;
determining a first average value of radar abscissa values in a first sliding window and a first deviation value between each radar abscissa value and the first average value;
when a first deviation value between the radar abscissa value and the first mean value is larger than a second threshold value, searching the radar abscissa values from back to front in the first sliding window in time order, completing the search when the first deviation value between a first target radar abscissa value and the first mean value is smaller than a third threshold value, and recording a first radar coordinate and a first global position coordinate corresponding to the first target radar abscissa value;
determining a second average value of radar abscissa values corresponding to the ultrasonic radar data;
when a second deviation value between the radar abscissa value and the second average value is larger than a second threshold value, acquiring the current global position coordinate of the vehicle, and storing the current global position coordinate and the corresponding current radar coordinate into a second sliding window;
determining a third mean value of the radar abscissa values in the second sliding window;
when a third deviation value between the radar abscissa value and a third mean value is smaller than a fourth threshold value, searching the radar abscissa value from front to back in a second sliding window according to the time sequence, completing the search when the third deviation value between the second target radar abscissa value and the third mean value is smaller than the third threshold value, and recording a second radar coordinate and a second global position coordinate corresponding to the second target radar abscissa value;
and generating a spatial parking space angular point according to the first radar coordinate, the first global position coordinate, the second mean value, the second radar coordinate and the second global position coordinate.
In one embodiment, the computer program when executed by the processor further performs the steps of:
when the number of the visual parking space angular points is at least three, selecting a corresponding target fusion strategy from preset candidate fusion strategies;
dividing the parking space image formed by the visual parking space angular points into cells of a preset size; the attribute of each cell is either occupied or empty;
converting the radar coordinates corresponding to the spatial parking space angular points into pixel point coordinates on the parking space image;
when the attribute of the cell in which the pixel point coordinates fall is empty, modifying the attribute of that cell to occupied;
when the ratio of occupied cells to all cells in the parking space image is smaller than a fifth threshold value, searching all cells in the parking space image in ascending order of their virtual vertical coordinates to obtain the cell area of the largest contiguous run of empty cells;
and generating a target parking space angular point according to the cell area.
In one embodiment, the computer program when executed by the processor further performs the steps of:
converting the virtual coordinates of each cell in the cell area into real coordinates relative to the center of the vehicle; the converted cell size is the real size;
and generating a target parking space angular point according to the real coordinates and the real sizes of the cells and the number of the cells in the cell area.
In one embodiment, the computer program when executed by the processor further performs the steps of:
when the number of the visual parking space angular points is two and the line segment formed by the two visual parking space angular points is perpendicular to the driving direction of the vehicle, selecting a corresponding target fusion strategy from preset candidate fusion strategies;
screening two target spatial parking space angular points from the spatial parking space angular points; the line segment formed by the two target spatial parking space angular points is parallel to the driving direction of the vehicle;
combining the two visual parking space angular points with the two target space parking space angular points to obtain four candidate parking space angular points;
and when the width and the height of a candidate parking space formed by the four candidate parking space angular points are larger than the preset width and height, taking the candidate parking space angular points as target parking space angular points.
In one embodiment, the computer program when executed by the processor further performs the steps of:
when the number of the visual parking space angular points is one or zero, selecting a corresponding target fusion strategy from preset candidate fusion strategies;
and taking the spatial parking space angular point as a target parking space angular point.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; nevertheless, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the patent. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A parking space detection method, characterized in that the method comprises:
acquiring ultrasonic radar data and a visual overlook image;
performing radar data processing on the ultrasonic radar data to obtain spatial parking space angular points, and performing image recognition processing on the visual overlook image to obtain visual parking space angular points;
selecting a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space angular points;
according to the target fusion strategy, carrying out fusion processing on the spatial parking space angular points and the visual parking space angular points to obtain target parking space angular points;
generating a target parking space according to the target parking space angular points;
wherein the selecting a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space angular points comprises:
when the number of the visual parking space angular points is two and the line segment formed by the two visual parking space angular points is perpendicular to the driving direction of the vehicle, selecting the corresponding target fusion strategy from the preset candidate fusion strategies;
and wherein the fusing the spatial parking space angular points and the visual parking space angular points according to the target fusion strategy to obtain target parking space angular points comprises:
screening two target spatial parking space angular points from the spatial parking space angular points, wherein the line segment formed by the two target spatial parking space angular points is parallel to the driving direction of the vehicle;
combining the two visual parking space angular points with the two target space parking space angular points to obtain four candidate parking space angular points;
and when the width and the height of a candidate parking space formed by the four candidate parking space angular points are larger than the preset width and height, taking the candidate parking space angular points as target parking space angular points.
2. The method of claim 1, wherein obtaining the visual overlook image comprises:
acquiring the surround-view fisheye images captured by the fisheye cameras around the vehicle;
and performing de-distortion, stitching, and perspective transformation on each surround-view fisheye image to obtain the corresponding visual overlook image.
3. The method of claim 1, wherein performing radar data processing on the ultrasonic radar data to obtain spatial parking space angular points comprises:
converting the ultrasonic radar data into radar coordinates relative to the center of the vehicle according to the installation parameters of the ultrasonic radar;
when the radar abscissa value of the radar coordinate is smaller than a first threshold value, obtaining the current global position coordinate of the vehicle, and storing the current global position coordinate and the corresponding current radar coordinate into a first sliding window;
determining a first mean value of the radar abscissa values within the first sliding window and a first deviation value between each radar abscissa value and the first mean value;
when a first deviation value between the radar abscissa value and the first mean value is larger than a second threshold value, searching the radar abscissa value from back to front in the first sliding window according to the time sequence, completing the search when the first deviation value between the first target radar abscissa value and the first mean value is smaller than a third threshold value, and recording a first radar coordinate and a first global position coordinate corresponding to the first target radar abscissa value;
determining a second average value of radar abscissa values corresponding to the ultrasonic radar data;
when a second deviation value between the radar abscissa value and the second average value is larger than a second threshold value, acquiring the current global position coordinate of the vehicle, and storing the current global position coordinate and the corresponding current radar coordinate into a second sliding window;
determining a third mean value of the radar abscissa values in the second sliding window;
when a third deviation value between the radar abscissa value and the third mean value is smaller than a fourth threshold value, searching radar abscissa values from front to back in the second sliding window according to the time sequence, completing searching when a third deviation value between a second target radar abscissa value and the third mean value is smaller than the third threshold value, and recording a second radar coordinate and a second global position coordinate corresponding to the second target radar abscissa value;
and generating a space parking space angular point according to the first radar coordinate, the first global position coordinate, the second average value, the second radar coordinate and the second global position coordinate.
4. The method according to claim 3, wherein selecting a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space angular points comprises:
when the number of the visual parking space angular points is at least three, selecting a corresponding target fusion strategy from preset candidate fusion strategies;
wherein the fusing the spatial parking space angular points and the visual parking space angular points according to the target fusion strategy to obtain target parking space angular points comprises:
dividing the parking space image formed by the visual parking space angular points into cells of a preset size; the attribute of each cell is either occupied or empty;
converting the radar coordinates corresponding to the spatial parking space angular points into pixel point coordinates on the parking space image;
when the attribute of the cell in which the pixel point coordinates fall is empty, modifying the attribute of that cell to occupied;
when the ratio of occupied cells to all cells in the parking space image is smaller than a fifth threshold value, searching all cells in the parking space image in ascending order of their virtual vertical coordinates to obtain the cell area of the largest contiguous run of empty cells;
and generating a target parking space angular point according to the cell area.
5. The method of claim 4, wherein generating a target parking space angular point according to the cell area comprises:
converting the virtual coordinates of each cell in the cell area into real coordinates relative to the center of the vehicle; the converted cell size is the real size;
and generating a target parking space angular point according to the real coordinates and the real sizes of the cells and the number of the cells in the cell area.
6. The method according to claim 1, wherein selecting a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space angular points comprises:
when the number of the visual parking space angular points is one or zero, selecting a corresponding target fusion strategy from preset candidate fusion strategies;
wherein the fusing the spatial parking space angular points and the visual parking space angular points according to the target fusion strategy to obtain target parking space angular points comprises:
and taking the space parking space angular point as a target parking space angular point.
7. A parking space detection apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring ultrasonic radar data and a visual overlook image;
the processing module is used for performing radar data processing on the ultrasonic radar data to obtain a spatial parking space angular point, and performing image identification processing on the visual overlook image to obtain a visual parking space angular point;
the selection module is used for selecting a corresponding target fusion strategy from preset candidate fusion strategies according to the number of the visual parking space angular points;
the fusion module is used for fusing the spatial parking space angular points and the visual parking space angular points according to the target fusion strategy to obtain target parking space angular points;
the generating module is used for generating a target parking space according to the angular point of the target parking space;
the selection module is further used for selecting a corresponding target fusion strategy from preset candidate fusion strategies when the number of the visual parking space angular points is two and a line segment formed by the two visual parking space angular points is the vertical direction of the vehicle driving direction;
the fusion module is also used for screening out two target space parking space angular points from the space parking space angular points; a line segment formed by the parking space angular points of the two target spaces is parallel to the driving direction of the vehicle; combining the two visual parking space angular points with the two target space parking space angular points to obtain four candidate parking space angular points; and when the width and the height of a candidate parking space formed by the four candidate parking space angular points are larger than the preset width and height, taking the candidate parking space angular points as target parking space angular points.
8. The device of claim 7, wherein the obtaining module is further configured to obtain all-round fisheye images collected by all fisheye cameras around the vehicle; and carrying out distortion removal processing, splicing processing and perspective transformation processing on each all the all-round looking fisheye image to obtain a corresponding visual overlook image.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 6 are implemented by the processor when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN202011528438.9A 2020-12-22 2020-12-22 Parking space detection method and device, computer equipment and storage medium Active CN112633152B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011528438.9A CN112633152B (en) 2020-12-22 2020-12-22 Parking space detection method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112633152A (en) 2021-04-09
CN112633152B (en) 2021-11-26

Family

ID=75320938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011528438.9A Active CN112633152B (en) 2020-12-22 2020-12-22 Parking space detection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112633152B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113513983B (en) * 2021-06-30 2023-05-16 广州小鹏自动驾驶科技有限公司 Precision detection method and device, electronic equipment and medium
CN113513984B (en) * 2021-06-30 2024-01-09 广州小鹏自动驾驶科技有限公司 Parking space recognition precision detection method and device, electronic equipment and storage medium
CN113486795A (en) * 2021-07-06 2021-10-08 广州小鹏自动驾驶科技有限公司 Visual identification performance test method, device, system and equipment
CN113903188B (en) * 2021-08-17 2022-12-06 浙江大华技术股份有限公司 Parking space detection method, electronic device and computer readable storage medium
CN114030463B (en) * 2021-11-23 2024-05-14 上海汽车集团股份有限公司 Path planning method and device for automatic parking system
CN114419922B (en) * 2022-01-17 2023-04-07 北京经纬恒润科技股份有限公司 Parking space identification method and device
CN114882701B (en) * 2022-04-28 2023-01-24 上海高德威智能交通系统有限公司 Parking space detection method and device, electronic equipment and machine readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108352114B (en) * 2015-10-27 2022-11-01 市政停车服务公司 Parking space detection method and system
CN109544990A (en) * 2018-12-12 2019-03-29 惠州市德赛西威汽车电子股份有限公司 A kind of method and system that parking position can be used based on real-time electronic map identification
CN111497829B (en) * 2020-04-14 2022-08-02 浙江吉利汽车研究院有限公司 Full-automatic parking path determination method, device, equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108281041A (en) * 2018-03-05 2018-07-13 东南大学 A kind of parking space's detection method blended based on ultrasonic wave and visual sensor
CN109435942A (en) * 2018-10-31 2019-03-08 合肥工业大学 A kind of parking stall line parking stall recognition methods and device based on information fusion
CN110490172A (en) * 2019-08-27 2019-11-22 北京茵沃汽车科技有限公司 Information merges parking stall position compensation method, the system, device, medium parked
CN110775052A (en) * 2019-08-29 2020-02-11 浙江零跑科技有限公司 Automatic parking method based on fusion of vision and ultrasonic perception
CN110861639A (en) * 2019-11-28 2020-03-06 安徽江淮汽车集团股份有限公司 Parking information fusion method and device, electronic equipment and storage medium
CN111994081A (en) * 2020-08-26 2020-11-27 安徽江淮汽车集团股份有限公司 Parking space detection method, equipment, storage medium and device

Also Published As

Publication number Publication date
CN112633152A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
CN112633152B (en) Parking space detection method and device, computer equipment and storage medium
CN111160302B (en) Obstacle information identification method and device based on automatic driving environment
CN111797650B (en) Obstacle identification method, obstacle identification device, computer equipment and storage medium
CN111160172B (en) Parking space detection method, device, computer equipment and storage medium
CN110634153A (en) Target tracking template updating method and device, computer equipment and storage medium
CN111753649B (en) Parking space detection method, device, computer equipment and storage medium
US20160019683A1 (en) Object detection method and device
CN112753038B (en) Method and device for identifying lane change trend of vehicle
CN111815707A (en) Point cloud determining method, point cloud screening device and computer equipment
CN111009011B (en) Method, device, system and storage medium for predicting vehicle direction angle
US20240029448A1 (en) Parking space detection method, apparatus, device and storage medium
CN112947419A (en) Obstacle avoidance method, device and equipment
CN114663598A (en) Three-dimensional modeling method, device and storage medium
CN116563384A (en) Image acquisition device calibration method, device and computer device
CN115273039A (en) Small obstacle detection method based on camera
KR20100066952A (en) Apparatus for tracking obstacle using stereo vision and method thereof
CN117496515A (en) Point cloud data labeling method, storage medium and electronic equipment
CN116681739A (en) Target motion trail generation method and device and electronic equipment
US20230109473A1 (en) Vehicle, electronic apparatus, and control method thereof
JP2018124963A (en) Image processing device, image recognition device, image processing program, and image recognition program
CN111242118B (en) Target detection method, device, computer equipment and storage medium
CN116824152A (en) Target detection method and device based on point cloud, readable storage medium and terminal
CN116310832A (en) Remote sensing image processing method, device, equipment, medium and product
CN116469101A (en) Data labeling method, device, electronic equipment and storage medium
CN116310899A (en) YOLOv 5-based improved target detection method and device and training method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Floor 25, Block A, Zhongzhou Binhai Commercial Center Phase II, No. 9285, Binhe Boulevard, Shangsha Community, Shatou Street, Futian District, Shenzhen, Guangdong 518000
Patentee after: Shenzhen Youjia Innovation Technology Co.,Ltd.
Address before: 518051 1101, west block, Skyworth semiconductor design building, 18 Gaoxin South 4th Road, Gaoxin community, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province
Patentee before: SHENZHEN MINIEYE INNOVATION TECHNOLOGY Co.,Ltd.