CN109918977B - Method, device and equipment for determining idle parking space

Info

Publication number
CN109918977B
Authority
CN
China
Prior art keywords
parking space
vehicle
determining
coordinate system
coordinates
Prior art date
Legal status
Active
Application number
CN201711331766.8A
Other languages
Chinese (zh)
Other versions
CN109918977A (en)
Inventor
叶超强
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201711331766.8A
Publication of CN109918977A
Application granted
Publication of CN109918977B

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a method for determining a free parking space, and belongs to image recognition technology in the field of automatic driving. The method comprises the following steps: acquiring a vehicle image containing at least two vehicles, and determining respective two-dimensional bounding boxes of two adjacent vehicles among the at least two vehicles according to the vehicle image; determining the position information of at least one characteristic position point on the respective bottom bounding box of the two adjacent vehicles according to their two-dimensional bounding boxes; and determining the free parking space between the two adjacent vehicles according to the position information of the at least one characteristic position point on the respective bottom bounding boxes. A free parking space between two vehicles can thus be determined as long as two adjacent vehicles whose two-dimensional bounding boxes can be recognized appear in the vehicle image, so that the recognition effect for free parking spaces can be improved when the method is applied to intelligent vehicles, electric vehicles or new energy vehicles.

Description

Method, device and equipment for determining idle parking space
Technical Field
The application relates to the technical field of image recognition, in particular to a method, a device and equipment for determining an idle parking space.
Background
Automatic parking is an important function in the field of automatic driving and directly reflects the degree of intelligence of a vehicle. Implementing the automatic parking function requires that the vehicle be able to automatically determine the position of a free parking space.
In the related art, automatic recognition of free parking spaces is mainly realized by recognizing parking space lines. Specifically, an image acquisition assembly arranged on the vehicle acquires images around the vehicle, and a parking space recognition device recognizes the images acquired by the image acquisition assembly to determine the parking space lines in the images and determines the positions of free parking spaces according to the recognized parking space lines.
However, in practical applications, parking space lines come in many forms, including painted lines, grass-planting bricks, cement bricks and the like, whereas the schemes in the related art can only recognize distinct, clearly painted lines and cannot recognize other types of parking space lines, which results in a poor parking space recognition effect.
Disclosure of Invention
In order to improve the recognition effect for free parking spaces, the embodiments of the present application provide a method, a device and equipment for determining a free parking space.
In a first aspect, a method for determining an idle parking space is provided, the method comprising:
acquiring a vehicle image containing at least two vehicles; determining respective two-dimensional surrounding frames of two adjacent vehicles in the at least two vehicles according to the vehicle images, wherein the two-dimensional surrounding frames are circumscribed rectangular frames of the corresponding vehicles in the vehicle images; determining position information of at least one characteristic position point on a bottom external frame of each two adjacent vehicles according to the two-dimensional surrounding frames of each two adjacent vehicles, wherein the bottom external frame is a circumscribed rectangular frame of an orthographic projection of the corresponding vehicle on the ground, and the characteristic position point is used for indicating a point on a preset position of the corresponding bottom external frame; and determining the free parking space between the two adjacent vehicles according to the position information of at least one characteristic position point on the bottom external frame of the two adjacent vehicles.
In this scheme, the two-dimensional bounding boxes of two adjacent vehicles in the vehicle image are determined, the position information of at least one characteristic position point on the respective bottom bounding box of the two adjacent vehicles is determined according to their two-dimensional bounding boxes, and the free parking space between the two adjacent vehicles is then determined. In this way, a free parking space between two vehicles can be determined as long as two adjacent vehicles whose two-dimensional bounding boxes can be recognized appear in the vehicle image.
Optionally, the determining, according to the respective two-dimensional bounding boxes of the two adjacent vehicles, the position information of the at least one feature position point on the respective bottom bounding box of the two adjacent vehicles includes: determining coordinates of three vertexes of the bottom external frame of each two adjacent vehicles in the vehicle body coordinate system according to the two-dimensional surrounding frames of each two adjacent vehicles;
the determining the free parking space between the two adjacent vehicles according to the position information of at least one characteristic position point on the bottom external frame of the two adjacent vehicles comprises: and determining the coordinates of the free parking space between the two adjacent vehicles in the vehicle body coordinate system according to the coordinates of the three vertexes of the bottom external frames of the two adjacent vehicles in the vehicle body coordinate system.
In the alternative scheme, the coordinates of three vertexes of the bottom external frames of two adjacent vehicles in a vehicle body coordinate system are identified through the two-dimensional surrounding frames of the two adjacent vehicles, the coordinates of the idle parking spaces in the vehicle body coordinate system are further determined, the accurate positions of the idle parking spaces can be directly obtained through vehicle images, and therefore accurate positioning of the idle parking spaces is achieved.
Optionally, the determining, according to the respective two-dimensional surrounding frames of the two adjacent vehicles, coordinates of three vertexes of the respective bottom external frame of the two adjacent vehicles in the vehicle body coordinate system includes:
for any vehicle i of the two adjacent vehicles, determining the relative positions of three key points on the two-dimensional bounding box of the vehicle i through a key point identification model, the three key points being three feature points corresponding to three vertices of the bottom bounding box of the vehicle i, and the key point identification model being obtained by machine training in advance according to sample images, the sample images being images in which three key points are labeled on a two-dimensional bounding box in advance; calculating two-dimensional coordinates of the three key points on the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image according to the relative positions of the three key points on the two-dimensional bounding box of the vehicle i; and calculating the coordinates of the three vertices of the bottom bounding box of the vehicle i in the vehicle body coordinate system according to the two-dimensional coordinates of the three key points on the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image.
In the alternative scheme, machine learning training is performed on a sample image marked with three key points on a two-dimensional surrounding frame in advance to obtain a key point identification model, in an application stage, the image in the two-dimensional surrounding frame to be identified is identified through the key point identification model to determine the three key points, so that automatic identification of the three key points on the two-dimensional surrounding frame is achieved, in addition, coordinates of the three vertexes of the bottom external frame of the vehicle in a vehicle body coordinate system are calculated according to the relative positions of the three vertexes of the bottom external frame corresponding to the three key points on the two-dimensional surrounding frame, and the method for determining the coordinates of the bottom external frame of the vehicle in the vehicle body coordinate system according to the two-dimensional image is provided.
Optionally, the relative positions of the three key points on the two-dimensional bounding box of the vehicle i include: a first percentage between the offset of a first key point relative to the lower left vertex of the two-dimensional bounding box of the vehicle i and the length of the left edge line of the two-dimensional bounding box of the vehicle i, a second percentage between the offset of a second key point relative to the lower left vertex of the two-dimensional bounding box of the vehicle i and the length of the lower edge line of the two-dimensional bounding box of the vehicle i, and a third percentage between the offset of a third key point relative to the lower right vertex of the two-dimensional bounding box of the vehicle i and the length of the right edge line of the two-dimensional bounding box of the vehicle i, wherein the first key point is a feature point on the left edge line of the two-dimensional bounding box of the vehicle i, the second key point is a feature point on the lower edge line of the two-dimensional bounding box of the vehicle i, and the third key point is a feature point on the right edge line of the two-dimensional bounding box of the vehicle i;
the calculating the two-dimensional coordinates of the three key points on the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image according to the relative positions of the three key points on the two-dimensional bounding box of the vehicle i comprises:
calculating the two-dimensional coordinates of each of the first keypoint, the second keypoint, and the third keypoint in the coordinate system of the vehicle image according to the first percentage, the second percentage, the third percentage, and the coordinates of the four vertices of the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image.
In this alternative, when the three key points corresponding to the bottom bounding box of the vehicle are identified on the two-dimensional bounding box, the key points are represented by their percentage offsets relative to the vertices of the two-dimensional bounding box, so that the parameter ranges of the three key points remain consistent for vehicles at different distances, which copes well with scale changes in different scenes.
Optionally, the calculating, according to the two-dimensional coordinates of the three key points on the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image, the coordinates of the three vertices of the bottom bounding box of the vehicle i in the vehicle body coordinate system includes: acquiring an intrinsic parameter matrix and an extrinsic parameter matrix of the image acquisition device, the image acquisition device being the device that acquired the vehicle image; and calculating the coordinates of the three vertices of the bottom bounding box of the vehicle i in the vehicle body coordinate system according to the two-dimensional coordinates of the three key points on the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image, the intrinsic parameter matrix and the extrinsic parameter matrix.
In this alternative, the two-dimensional coordinates of the key points in the coordinate system of the vehicle image are converted into the coordinates of the three vertices of the bottom bounding box of the vehicle in the vehicle body coordinate system according to the intrinsic parameter matrix and the extrinsic parameter matrix of the image acquisition device, realizing the conversion from two-dimensional coordinates in the image to coordinates in the vehicle body coordinate system.
Optionally, the determining, according to coordinates of three vertexes of the bottom external frame of the two adjacent vehicles in the vehicle body coordinate system, coordinates of a free parking space between the two adjacent vehicles in the vehicle body coordinate system includes:
calculating the coordinates of the fourth vertexes of the respective bottom external frames of the two adjacent vehicles in the vehicle body coordinate system according to the coordinates of the three vertexes of the respective bottom external frames of the two adjacent vehicles in the vehicle body coordinate system; calculating coordinates of the central points of the bottom external frames of the two adjacent vehicles in the vehicle body coordinate system according to the coordinates of the four vertexes of the bottom external frames of the two adjacent vehicles in the vehicle body coordinate system; determining the type of the parking space according to an included angle between the long edge direction of the parking space and the connecting line of the central points, wherein the long edge direction of the parking space is determined according to the long edge direction of the bottom external frame of at least one of the two adjacent vehicles, the connecting line of the central points is the connecting line between the central points of the bottom external frames of the two adjacent vehicles, and the type of the parking space comprises a vertical parking space, a parallel parking space or an oblique parking space; determining the number n of parking spaces according to the parking space type, wherein the number n of the parking spaces is the number of the parking spaces between the two adjacent vehicles; when n is an integer greater than or equal to 2, determining coordinates of n-1 equally-divided points on the central point connecting line in the vehicle body coordinate system, wherein the n-1 equally-divided points divide the central point connecting line into n equal parts; and determining the coordinates of the free parking spaces corresponding to the n-1 equally divided points in the vehicle body coordinate system according to the long side direction of the parking spaces and the coordinates of the n-1 equally divided points in the vehicle body coordinate system.
In this alternative, a method is provided for obtaining coordinates of an empty parking space between two adjacent vehicles in the vehicle body coordinate system by combining a center point connecting line of bottom outer frames of the two adjacent vehicles and a parking space direction.
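To make the geometric steps above concrete, the following Python sketch (an illustrative, non-authoritative example; the function names, the choice of NumPy, and the sample coordinates are assumptions, with vehicle data given as body-frame coordinates) completes the fourth vertex of a bottom bounding box from its three known vertices, computes the center point, and divides the line between two centers into n equal parts:

```python
import numpy as np

def fourth_vertex(p1, p2, p3):
    """Given three vertices of a rectangle (p2 adjacent to both p1 and p3),
    return the missing fourth vertex: p4 = p1 + p3 - p2."""
    return p1 + p3 - p2

def box_center(p1, p2, p3):
    """The rectangle center is the midpoint of a diagonal, e.g. (p1 + p3) / 2."""
    return (p1 + p3) / 2.0

def split_points(center_a, center_b, n):
    """Return the n-1 points dividing segment center_a -> center_b into n equal parts."""
    return [center_a + (center_b - center_a) * k / n for k in range(1, n)]

# Example with assumed coordinates (meters, vehicle body frame):
a1, a2, a3 = np.array([2.0, 5.0]), np.array([2.0, 9.5]), np.array([4.0, 9.5])
b1, b2, b3 = np.array([10.0, 5.0]), np.array([10.0, 9.5]), np.array([12.0, 9.5])
ca, cb = box_center(a1, a2, a3), box_center(b1, b2, b3)
print(fourth_vertex(a1, a2, a3), ca, cb, split_points(ca, cb, 3))
```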
Optionally, the determining, according to the parking space long-side direction and the coordinates of the n-1 equally divided points in the vehicle body coordinate system, the coordinates of the free parking spaces corresponding to the n-1 equally divided points in the vehicle body coordinate system includes:
for any bisector point i of the n-1 bisector points, moving from the coordinates of the bisector point i in the vehicle body coordinate system by h/2 in each of the positive and negative directions corresponding to the parking space long-side direction to obtain the coordinates, in the vehicle body coordinate system, of the midpoints of the two short sides of the free parking space corresponding to the bisector point i; and moving from the coordinates of those midpoints by w/2 in each of the positive and negative directions corresponding to the short-side direction of the free parking space corresponding to the bisector point i to obtain the coordinates of the four vertices of the free parking space corresponding to the bisector point i in the vehicle body coordinate system; wherein h is the preset parking space length, w is the preset parking space width, and the short-side direction of the free parking space corresponding to the bisector point i is perpendicular to the parking space long-side direction;
or, for any bisector point i of the n-1 bisector points, moving from the coordinates of the bisector point i in the vehicle body coordinate system by w/2 in each of the positive and negative directions corresponding to the short-side direction of the free parking space corresponding to the bisector point i to obtain the coordinates, in the vehicle body coordinate system, of the midpoints of the two long sides of the free parking space corresponding to the bisector point i; and moving from the coordinates of those midpoints by h/2 in each of the positive and negative directions corresponding to the parking space long-side direction to obtain the coordinates of the four vertices of the free parking space corresponding to the bisector point i in the vehicle body coordinate system; wherein h is the preset parking space length, w is the preset parking space width, and the short-side direction of the free parking space corresponding to the bisector point i is perpendicular to the parking space long-side direction.
According to the optional scheme, two methods for determining the coordinates of four vertexes of the free parking space in the vehicle body coordinate system according to the long side direction of the parking space and the coordinates of the bisector i in the vehicle body coordinate system are provided.
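As an illustration of the first of these two constructions, the sketch below (assumptions: a unit long-side direction vector u in the body frame, preset space length h and width w; all names are hypothetical) moves from a bisector point by h/2 along the positive and negative long-side directions and then by w/2 along the perpendicular short-side directions to obtain the four vertices of the corresponding free parking space:

```python
import numpy as np

def free_space_vertices(p, u, h, w):
    """p: bisector point (2-vector, body frame); u: vector along the space's long side;
    h: preset parking space length; w: preset parking space width.
    Returns the four vertices of the free parking space centered on p."""
    u = u / np.linalg.norm(u)          # unit long-side direction
    v = np.array([-u[1], u[0]])        # short-side direction, perpendicular to u
    m_front = p + u * (h / 2.0)        # midpoints of the two short sides
    m_back = p - u * (h / 2.0)
    return [m_front + v * (w / 2.0), m_front - v * (w / 2.0),
            m_back - v * (w / 2.0), m_back + v * (w / 2.0)]

# Example: a 5.0 m x 2.5 m vertical space around the point (5.67, 7.25)
print(free_space_vertices(np.array([5.67, 7.25]), np.array([0.0, 1.0]), 5.0, 2.5))
```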
Optionally, the determining the parking space type according to the included angle between the parking space long-side direction and the center point connecting line includes:
when the included angle is 90 degrees, determining that the parking space type is a vertical parking space;
when the included angle is 0 degree, determining that the parking space type is a parallel parking space;
and when the included angle is in the interval of (30 degrees, 60 degrees) or (120 degrees, 150 degrees), determining that the parking space type is an oblique parking space.
Optionally, calculating the number n of parking spaces according to the parking space type includes:
when the parking space type is a vertical parking space, n is the largest integer less than or equal to d/w;
when the parking space type is a parallel parking space, n is a maximum integer less than or equal to d/h;
when the parking space type is an oblique parking space, n is the maximum integer less than or equal to d/(w/sin alpha);
wherein d is the length of the central point connecting line, w is the preset parking space width, h is the preset parking space length, and alpha is the included angle.
This alternative provides a method for determining the number of free parking spaces between two adjacent vehicles according to the parking space type and the length of the line connecting the center points of the bottom bounding boxes of the two adjacent vehicles.
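The counting rules above can be written directly as follows (a sketch; math.floor implements "the largest integer less than or equal to", the angle alpha is expected in degrees, and the function name is hypothetical):

```python
import math

def space_count(space_type, d, w, h, alpha_deg=None):
    """Number of parking spaces fitting between two adjacent vehicles.
    d: length of the center-point connecting line; w: preset space width;
    h: preset space length; alpha_deg: included angle for oblique spaces."""
    if space_type == "vertical":
        return math.floor(d / w)
    if space_type == "parallel":
        return math.floor(d / h)
    if space_type == "oblique":
        return math.floor(d / (w / math.sin(math.radians(alpha_deg))))
    return 0

print(space_count("vertical", 8.0, 2.5, 5.0))   # -> 3
```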
In a second aspect, a method for determining an empty parking space is provided, the method comprising: acquiring spatial perception data comprising the at least two vehicles; obtaining position information of at least one characteristic position point on a bottom external frame of each of two adjacent vehicles in the at least two vehicles according to the spatial perception data, wherein the bottom external frame is an external rectangular frame corresponding to orthographic projection of the vehicles on the ground, and the characteristic position point is used for indicating a point on a preset position of the bottom external frame; and determining the free parking space between the two adjacent vehicles according to the position information of at least one characteristic position point on the bottom external frame of the two adjacent vehicles.
In the scheme, the position information of at least one characteristic position point on the bottom external frame of two adjacent vehicles in the space perception data is recognized, and the idle parking space between the two adjacent vehicles is determined according to the position information of at least one characteristic position point on the bottom external frame.
In a third aspect, a model training method is provided, the method including: obtaining a sample image, the sample image being an image in which three key points are labeled on a two-dimensional bounding box in advance, the two-dimensional bounding box being a circumscribed rectangular frame of a vehicle, the three key points being the feature points in the sample image that correspond to three vertices of the bottom bounding box of the vehicle, the three vertices being the vertices of the bottom bounding box that are visible in the sample image, the three key points being respectively located on the left edge line, the lower edge line and the right edge line of the two-dimensional bounding box, and the bottom bounding box being a circumscribed rectangular frame of the orthographic projection of the vehicle on the ground; and performing model training according to the sample image to obtain a key point identification model, the key point identification model being used for identifying the relative positions, in an input image, of three key points corresponding to three vertices of the bottom bounding box of the vehicle to be identified in the input image, the input image being the image within the two-dimensional bounding box of the vehicle to be identified.
In the scheme, machine learning training is performed on a sample image marked with three key points on a two-dimensional bounding box in advance to obtain a key point identification model, so that in an application stage, the image in the two-dimensional bounding box to be identified is identified through the key point identification model to determine the three key points, and automatic identification of the three key points on the two-dimensional bounding box is realized.
In a fourth aspect, a device for determining a free parking space is provided, the device having the function of implementing the method for determining a free parking space provided in the first aspect and the optional implementations of the first aspect. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more units corresponding to the above function.
In a fifth aspect, a device for determining a free parking space is provided, the device having the function of implementing the method for determining a free parking space provided in the second aspect. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more units corresponding to the above function.
In a sixth aspect, a model training device is provided, the device having the function of implementing the model training method provided in the third aspect. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more units corresponding to the above function.
In a seventh aspect, a parking space determining device is provided, including: a processor and a memory, the memory being configured to store programmable instructions, and the processor being configured to call the programmable instructions stored in the memory to execute the method for determining a free parking space according to the first aspect and the optional implementations of the first aspect.
In an eighth aspect, a parking space determining device is provided, including: a processor and a memory, the memory being configured to store programmable instructions, and the processor being configured to call the programmable instructions stored in the memory to execute the method for determining a free parking space according to the second aspect.
In a ninth aspect, there is provided a parking space determining apparatus, including: a processor and a memory, the memory for storing programmable instructions, the processor for invoking the programmable instructions stored on the memory to perform the model training method as described in the third aspect above.
In a tenth aspect, a computer-readable storage medium is provided, where the computer-readable storage medium contains programmable instructions, and a computer executes the programmable instructions to implement the method for determining a vacant parking space according to the first aspect and the optional implementation of the first aspect.
In an eleventh aspect, a computer-readable storage medium is provided, where the computer-readable storage medium contains programmable instructions, and a computer executes the programmable instructions to implement the method for determining a free parking space according to the second aspect.
In a twelfth aspect, a computer-readable storage medium is provided, the computer-readable storage medium containing programmable instructions, and a computer executing the programmable instructions to implement the model training method according to the third aspect.
In a thirteenth aspect, a computer program is provided, where the computer program includes at least one programmable instruction, and when the computer program runs in a computer, the computer may execute the at least one programmable instruction to implement the method for determining an empty space according to the first aspect and the optional implementation of the first aspect.
In a fourteenth aspect, a computer program is provided, where the computer program includes at least one programmable instruction, and when the computer program runs in a computer, the computer can execute the at least one programmable instruction to implement the method for determining a free parking space according to the second aspect.
In a fifteenth aspect, a computer program is provided, the computer program including at least one programmable instruction, and when the computer program runs on a computer, the computer executes the at least one programmable instruction to implement the model training method according to the third aspect.
Drawings
Fig. 1 is an architecture diagram of a system for determining vacant parking spaces according to an embodiment of the present application;
fig. 2 is a flowchart of a method for determining an empty space according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of image detection according to the embodiment shown in FIG. 2;
FIG. 4 is a schematic diagram of two-dimensional bounding box detection according to the embodiment of FIG. 2;
FIG. 5 is a schematic illustration of a visible keypoint involved in the embodiment of FIG. 2;
FIG. 6 is a flow chart of a method of model training according to the embodiment shown in FIG. 2;
FIG. 7 is a schematic diagram of a second deep neural network regression keypoint normalization value according to the embodiment shown in FIG. 2;
FIG. 8 is a schematic diagram of the convolution and prediction scheme involved in the embodiment of FIG. 2;
FIG. 9 is a schematic diagram of a camera imaging according to the embodiment of FIG. 2;
FIG. 10 is a schematic view of the bottom bounding box of a vehicle according to the embodiment of FIG. 2;
FIG. 11 is a flow chart of identifying free parking spaces according to the embodiment shown in FIG. 2;
FIG. 12 is a flow chart of the identification of the type of free parking space according to the embodiment shown in FIG. 2;
FIG. 13 is a schematic view of a parking space type according to the embodiment shown in FIG. 2;
FIG. 14 is a flow chart of the identification of the position of the vacant parking space according to the embodiment shown in FIG. 2;
FIG. 15 is a schematic view of two adjacent vehicles in the embodiment of FIG. 2 without empty spaces therebetween;
FIG. 16 is a schematic diagram of two adjacent vehicles with free parking spaces therebetween according to the embodiment shown in FIG. 2;
FIG. 17 is a schematic illustration of determining parking space coordinates according to the embodiment of FIG. 2;
FIG. 18 is a flowchart of a method for determining free parking space provided by an exemplary embodiment of the present application;
fig. 19 is a schematic structural diagram of a parking space determining device according to an exemplary embodiment of the present application;
FIG. 20 is a schematic diagram of a model training apparatus provided in an exemplary embodiment of the present application;
fig. 21 is a block diagram of an apparatus for determining an empty space according to an exemplary embodiment of the present application;
FIG. 22 is a block diagram of a model training apparatus provided in an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is an architecture diagram of a system for determining a vacant parking space according to an embodiment of the present application. As shown in fig. 1, the system for determining an empty space may include a data acquisition device 110 and a space determination device 120.
The data acquisition device 110 may be a camera device that acquires a two-dimensional image; alternatively, the data acquisition device 110 may also be a device for acquiring spatially-perceived data, for example, the data acquisition device 110 may be a radar detection device for acquiring radar point cloud data, or the data acquisition device 110 may also be a laser detection device for acquiring laser point cloud data, and the like.
The space determining device 120 may be a computer device with data recognition processing capability.
The data collecting device 110 may be disposed on a vehicle, collect vehicle data information around the vehicle, and send the collected vehicle data information to the parking space determining device 120. The parking space determining device 120 identifies the vehicle data information to determine whether there is a free parking space in the vehicle data information and determines the spatial position information of the free parking space (for example, the coordinates of the vertices of the parking space in the vehicle body coordinate system), so as to perform subsequent applications such as automatic parking.
The system for determining the vacant parking space may be independently mounted in the vehicle, for example, the data acquisition device 110 and the parking space determination device 120 are both disposed in the vehicle, and the data acquisition device 110 and the parking space determination device 120 communicate with each other through a wired network.
Alternatively, the system for determining the free parking space may also be deployed in a vehicle and a network server, respectively, for example, the data acquisition device 110 may be disposed in the vehicle, the parking space determination device 120 may be disposed in the network server, and the data acquisition device 110 and the parking space determination device 120 communicate with each other through a wireless network.
In an actual parking lot or for roadside parking spaces, the parking spaces are generally laid out in a concentrated, regular manner to facilitate management. For example, in a parking lot, a plurality of parking spaces are usually arranged continuously, perpendicular or oblique to the lane and parallel to one another; for roadside parking, a plurality of parking spaces are usually arranged continuously, parallel to the lane and end to end. Therefore, for vehicles in a parked state, when the distance between two adjacent vehicles is large enough, there is a high possibility that a free parking space exists between the two adjacent vehicles.
Based on this principle, the solution shown in the embodiments of the present application can determine, from the distance relationship between two adjacent vehicles, whether a free parking space exists between the two vehicles, how many free parking spaces exist between them, and the position information of each free parking space. Specifically, when identifying a free parking space from the vehicle data information, the parking space determining device 120 may obtain vehicle data information containing at least two vehicles, obtain the position information of at least one characteristic position point on the bottom bounding box of each of two adjacent vehicles among the at least two vehicles according to the vehicle data information, and determine the free parking space between the two adjacent vehicles according to the position information of the at least one characteristic position point on their respective bottom bounding boxes. With this method, as long as the collected vehicle data information contains at least two vehicles whose bottom bounding boxes can be identified, the free parking space between the two vehicles can be identified; there is no need to identify parking space lines on the ground, or even for parking space lines to exist on the ground, and the identification accuracy of the free parking space is higher. In addition, according to the method disclosed in the embodiments of the present application, free parking spaces can be identified as long as a two-dimensional image or spatial perception data containing at least two adjacent vehicles is acquired, so free parking spaces between distant vehicles can also be identified; the identification distance for free parking spaces is long and the identification effect is better.
In this embodiment of the application, the parking space determining device 120 may perform the identification of the vacant parking space according to the two-dimensional image, or may perform the identification of the vacant parking space according to the spatial sensing data. The embodiment shown in fig. 2 will be described below by taking the identification of a free parking space from a two-dimensional image as an example.
Please refer to fig. 2, which illustrates a flowchart of a method for determining an empty space according to an exemplary embodiment of the present application. The method may be used in the space determining device 120 of the system shown in fig. 1. As shown in fig. 2, the method for determining the free parking space may include:
step 201, a vehicle image including at least two vehicles is acquired.
In the embodiment of the present application, the data acquisition device may acquire images around the vehicle and send them to the parking space determining device. After receiving an image acquired by the image acquisition device, the parking space determining device performs a preliminary recognition on the received image to determine whether the image contains two or more vehicles; if so, it determines that a vehicle image has been acquired; otherwise, no vehicle image is acquired.
Step 202, determining respective two-dimensional surrounding frames of two adjacent vehicles in the at least two vehicles according to the vehicle image, where the two-dimensional surrounding frames are circumscribed rectangular frames of the corresponding vehicles in the vehicle image.
The circumscribed rectangular frame of an object in an image is a rectangular frame whose four sides respectively pass through the uppermost, lowermost, leftmost and rightmost pixels of the object in the image; the circumscribed rectangular frame can generally be used to represent the position of the object in the image. In the embodiment of the present application, the position of a vehicle in the image is represented by the circumscribed rectangular frame (i.e., the two-dimensional bounding box) of the vehicle in the image.
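For illustration, the circumscribed rectangular frame described here can be computed from an object's pixel mask as follows (a sketch assuming a boolean NumPy mask is available; this is not part of the patent's own processing chain):

```python
import numpy as np

def bounding_box(mask):
    """mask: 2-D boolean array marking the object's pixels.
    Returns (x_min, y_min, x_max, y_max) of the circumscribed rectangle,
    i.e. the rectangle touching the leftmost, topmost, rightmost and
    bottommost object pixels."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

mask = np.zeros((100, 200), dtype=bool)
mask[40:80, 60:150] = True
print(bounding_box(mask))  # -> (60, 40, 149, 79)
```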
In this embodiment, the vehicle image may be subjected to image recognition by a bounding box recognition model (for example, the bounding box recognition model may be a first deep neural network) to recognize the two-dimensional bounding boxes of two adjacent vehicles among the at least two vehicles in the vehicle image. The bounding box recognition model may be a recognition model obtained by machine training in advance according to first sample images, and each first sample image may be an image in which the two-dimensional bounding box of each vehicle is labeled in advance.
The bounding box recognition model may be obtained by performing machine training through a model training device, where the model training device and the parking space determining device may be two different computer devices, for example, the model training device is a server on a network side, and the parking space determining device is a vehicle-mounted device, or the model training device and the parking space determining device may be the same device, for example, the model training device and the parking space determining device are implemented in the same server on the network side.
For example, before the parking space determining device performs image recognition on the vehicle image, a developer may collect a plurality of first sample images including vehicles, mark the two-dimensional surrounding frame of each vehicle in the first sample image, input the first sample image marked with the two-dimensional surrounding frame into the model training device, train the model training device according to the first sample image marked with the two-dimensional surrounding frame to obtain the surrounding frame recognition model, and perform image recognition according to the surrounding frame recognition model obtained by the training of the model training device when the subsequent parking space determining device performs image recognition on the vehicle image, so as to obtain the two-dimensional surrounding frame of each of at least two adjacent vehicles included in the vehicle image.
For example, please refer to fig. 3, which shows an image detection schematic diagram according to an embodiment of the present application, as shown in fig. 3, after receiving an input picture, a first deep neural network first performs feature extraction on the input picture to obtain a small-sized feature map (for example, the small-sized feature map may be extracted by a multi-layer convolution), and performs two-dimensional bounding box detection according to the small-sized feature map.
For example, please refer to fig. 4, which shows a schematic diagram of two-dimensional bounding box detection according to an embodiment of the present application. As shown in fig. 4, the first deep neural network presets k reference bounding boxes at each pixel position of the small-size feature map and determines the position of the real two-dimensional bounding box in the image by predicting the position deviation relative to each reference bounding box. In fig. 4, the 4k position values output via the intermediate layer indicate the position deviations relative to the k reference boxes, and the 2k category values output via the intermediate layer indicate the category (such as background or object) corresponding to each reference bounding box region; the first deep neural network selects the reference boxes containing an object using a preset threshold and then obtains the final two-dimensional bounding box recognition result according to the corresponding deviation values.
Optionally, the first deep neural network for identifying the two-dimensional bounding box of a vehicle may be a Faster Region-based Convolutional Neural Network (Faster R-CNN), a Single Shot MultiBox Detector (SSD) network, a unified real-time object detection network, also called a YOLO (You Only Look Once) network, or the like. The embodiment of the present application does not limit the type of the first deep neural network.
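The decoding step described for fig. 4 can be sketched as follows (illustrative only; the exact deviation parameterization differs between Faster R-CNN, SSD and YOLO, so the center/size offset form used here is an assumption, as are the function and parameter names):

```python
import numpy as np

def decode_boxes(ref_boxes, deltas, scores, score_thresh=0.5):
    """ref_boxes: (k, 4) reference boxes as (cx, cy, w, h);
    deltas: (k, 4) predicted deviations (dx, dy, dw, dh);
    scores: (k,) object probabilities for each reference box.
    Returns the decoded boxes whose score exceeds the preset threshold."""
    keep = scores > score_thresh
    cx = ref_boxes[keep, 0] + deltas[keep, 0] * ref_boxes[keep, 2]
    cy = ref_boxes[keep, 1] + deltas[keep, 1] * ref_boxes[keep, 3]
    w = ref_boxes[keep, 2] * np.exp(deltas[keep, 2])
    h = ref_boxes[keep, 3] * np.exp(deltas[keep, 3])
    # convert (cx, cy, w, h) to corner form (x1, y1, x2, y2)
    return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
```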
After the parking space identifying device executes step 202 to obtain the respective two-dimensional surrounding frames of the two adjacent vehicles, the position information of at least one characteristic position point on the respective bottom external frame of the two adjacent vehicles can be determined according to the respective two-dimensional surrounding frames of the two adjacent vehicles, the bottom external frame is an external rectangular frame of the orthographic projection of the corresponding vehicle on the ground, the characteristic position point is a point on the preset position of the bottom external frame, and the free parking space between the two adjacent vehicles can be determined according to the position information of at least one characteristic position point on the bottom external frame of the two adjacent vehicles.
Specifically, take the case where the at least one characteristic position point includes three vertices of the corresponding bottom bounding box, the three vertices being the vertices of the bottom bounding box that are visible in the vehicle image, and the position information being coordinates in the vehicle body coordinate system. When determining the position information of the at least one characteristic position point on the respective bottom bounding boxes of the two adjacent vehicles according to their two-dimensional bounding boxes, the parking space identification device may determine the coordinates of the three vertices of the respective bottom bounding boxes of the two adjacent vehicles in the vehicle body coordinate system according to the respective two-dimensional bounding boxes of the two adjacent vehicles. For the process of determining the coordinates of the three vertices of the bottom bounding boxes of two adjacent vehicles in the vehicle body coordinate system, refer to steps 203 to 204 below.
Step 203, for any vehicle i in two adjacent vehicles, determining two-dimensional coordinates of three key points on a two-dimensional surrounding frame of the vehicle i in a coordinate system of the vehicle image, where the three key points are three feature points corresponding to three vertexes of a bottom peripheral frame of the vehicle i.
The bottom bounding box of a vehicle is a rectangular frame, and the four vertices of this rectangular frame are 4 characteristic position points of the bottom bounding box of the vehicle. For a vehicle whose two-dimensional bounding box can be detected, among the 4 vertices of its bottom bounding box, usually 3 have corresponding feature points visible in the vehicle image; the feature points corresponding to these 3 vertices may be referred to as the 3 key points of the vehicle, and the 3 visible key points are respectively located on the left edge, the lower edge and the right edge of the two-dimensional bounding box. For example, referring to fig. 5, which shows a schematic diagram of visible key points related to an embodiment of the present application, the bottom bounding box 51 of the vehicle corresponds to three visible key points in the vehicle image, namely P1, P2 and P3, where key point P1 is located on the left edge of the two-dimensional vehicle bounding box 52, key point P2 is located on the lower edge of the two-dimensional vehicle bounding box 52, and key point P3 is located on the right edge of the two-dimensional vehicle bounding box 52, while key point P4, corresponding to the fourth vertex of the bottom bounding box 51 of the vehicle, is occluded and not visible in the image area enclosed by the two-dimensional vehicle bounding box.
Optionally, for any vehicle i of two adjacent vehicles, when determining the two-dimensional coordinates of the three key points on the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image, the parking space identification device may determine the relative positions of the three key points on the two-dimensional bounding box of the vehicle i through the key point identification model, and calculate the two-dimensional coordinates of the three key points on the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image according to the relative positions of the three key points on the two-dimensional bounding box of the vehicle i.
The key point identification model is obtained by performing machine training in advance according to a second sample image, and the second sample image is an image in which three key points are marked on a two-dimensional surrounding frame in advance. For example, in an embodiment of the present invention, the keypoint identification model may be a second deep neural network.
Referring to fig. 6, a flowchart of a method of training a model according to an embodiment of the present invention is shown, and as shown in fig. 6, the method of training a model may include the following steps:
step 61, a second sample image is acquired.
Wherein, when acquiring the second sample image, the model training device may receive the second sample image input by the developer.
And 62, performing model training according to the second sample image to obtain a key point identification model.
The key point identification model is used for identifying the relative positions of three vertexes in a bottom external frame of the vehicle to be identified in an input image corresponding to three key points in the input image, and the input image is an image in a two-dimensional surrounding frame of the vehicle to be identified.
For example, before the parking space determining device identifies the images within the two-dimensional bounding boxes of vehicles, a developer may collect sample images within the two-dimensional bounding boxes of several vehicles and label three key points on each two-dimensional bounding box to obtain second sample images labeled with the three key points. The second sample images labeled with the key points are input into the model training device, and the model training device trains according to the second sample images to obtain the key point identification model. Subsequently, the parking space determining device performs recognition on the image within the two-dimensional bounding box of a vehicle according to the key point identification model obtained by the training of the model training device; that is, the parking space determining device inputs the image enclosed by the two-dimensional bounding box of the vehicle into the key point identification model, and the key point identification model outputs the relative positions of the three key points on the two-dimensional bounding box of the vehicle.
In an embodiment of the present invention, the second sample image may be an image in a two-dimensional bounding box marked in the first sample image, that is, after a developer collects a plurality of first sample images including vehicles and marks the two-dimensional bounding box of each vehicle in the first sample image, the two-dimensional bounding box marked with three key points is further marked with three key points, and the image surrounded by the two-dimensional bounding box marked with the three key points is used as the second sample image. Alternatively, the second sample image may be an image in a two-dimensional surrounding frame of the vehicle in another image, which is cut out from an image other than the first sample image.
After the parking space determining device identifies and obtains the relative positions of the three key points on the two-dimensional surrounding frame of the vehicle, the two-dimensional coordinates of the three key points on the two-dimensional surrounding frame of the vehicle in the coordinate system of the vehicle image can be calculated by combining the two-dimensional coordinates of the four vertexes of the two-dimensional surrounding frame of the vehicle in the coordinate system of the vehicle image.
For example, taking the relative positions of the three key points on the two-dimensional bounding box of the vehicle as the relative positions between the three key points and the four vertexes of the two-dimensional bounding box as an example, after the parking space determining device obtains the relative positions between the three key points and the four vertexes of the two-dimensional bounding box through the key point identification model, the two-dimensional coordinates of the three key points in the coordinate system of the vehicle image can be calculated according to the geometric principle by combining the two-dimensional coordinates of the four vertexes of the two-dimensional bounding box in the coordinate system of the vehicle image.
Optionally, for any vehicle i of two adjacent vehicles, when the relative positions of three key points on the two-dimensional bounding box of the vehicle i are determined through the key point identification model, image identification may be performed on the image in the two-dimensional bounding box of the vehicle i through the key point identification model, so as to obtain a first percentage between an offset amount of a first key point with respect to a lower left vertex of the two-dimensional bounding box of the vehicle i and a left line length of the two-dimensional bounding box of the vehicle i, a second percentage between an offset amount of a second key point with respect to a lower left vertex of the two-dimensional bounding box of the vehicle i and a lower line length of the two-dimensional bounding box of the vehicle i, and a third percentage between an offset amount of a third key point with respect to a lower right vertex of the two-dimensional bounding box of the vehicle i and a right line length of the two-dimensional bounding box of the vehicle i. When the two-dimensional coordinates of the three key points on the two-dimensional surrounding frame of the vehicle i in the coordinate system of the vehicle image are calculated according to the relative positions of the three key points on the two-dimensional surrounding frame of the vehicle i, the parking space identification device calculates the two-dimensional coordinates of each of the first key point, the second key point and the third key point in the coordinate system of the vehicle image according to the first percentage, the second percentage, the third percentage and the coordinates of the four vertexes of the two-dimensional surrounding frame of the vehicle i in the coordinate system of the vehicle image. The first key point is a feature point on a left side line of a two-dimensional bounding box of the vehicle i, the second key point is a feature point on a lower side line of the two-dimensional bounding box of the vehicle i, and the third key point is a feature point on a right side line of the two-dimensional bounding box of the vehicle i.
Optionally, in an embodiment of the present invention, the first percentage, the second percentage, and the third percentage obtained through the keypoint identification model are relative positions of three keypoints on a two-dimensional bounding box of the vehicle in a coordinate system of the vehicle image.
In the embodiment of the present application, take determining the three key points on the two-dimensional bounding box of a vehicle through the second deep neural network as an example, whose input is the image within the two-dimensional bounding box of the vehicle. As shown in fig. 5, the values to be predicted by the second deep neural network are py1, px2 and py3, where py1 is the distance from point P1 to the lower left vertex of the two-dimensional bounding box normalized by the length of the left edge of the two-dimensional bounding box (taken as 1), px2 is the distance from point P2 to the lower left vertex of the two-dimensional bounding box normalized by the length of the lower edge (taken as 1), and py3 is the distance from point P3 to the lower right vertex of the two-dimensional bounding box normalized by the length of the right edge (taken as 1). These three values correspond to the first percentage, the second percentage and the third percentage described above.
Please refer to fig. 7, which illustrates a schematic diagram of normalized values of regression key points of a second deep neural network according to an embodiment of the present application. As shown in fig. 7, a sub-network for extracting features in the second deep neural network extracts features from the image in the two-dimensional bounding box of the vehicle, and the values predicted by the second deep neural network based on the extracted feature outputs are py1, px2 and py3, respectively, which represent normalized distances between the respective key points and the corresponding vertices of the two-dimensional bounding box.
Taking the second deep neural network as an example of a convolutional neural network, the detection process may refer to a convolution and prediction schematic diagram shown in fig. 8, feature diagram extraction is performed on an image in a two-dimensional surrounding frame of the vehicle, and convolution operation is performed on the extracted feature diagram for several times, so as to continuously reduce the size of the feature diagram, and finally, the 3 predicted values are directly output through a full connection layer.
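A minimal PyTorch sketch of such a regression head is given below (purely illustrative; the layer sizes, the input resolution and the choice of a sigmoid to keep the outputs in (0, 1) are assumptions, not the patent's specified architecture):

```python
import torch
import torch.nn as nn

class KeypointRegressor(nn.Module):
    """Predicts the three normalized values (py1, px2, py3) from the image
    cropped by a vehicle's two-dimensional bounding box."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, 3)  # fully connected layer outputs the 3 values

    def forward(self, x):
        f = self.features(x).flatten(1)
        return torch.sigmoid(self.fc(f))  # constrain predictions to (0, 1)

crop = torch.randn(1, 3, 128, 128)        # resized image inside the 2-D bounding box
py1, px2, py3 = KeypointRegressor()(crop).squeeze(0)
```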
After the three predicted values are determined, the two-dimensional coordinates of the three key points on the two-dimensional bounding box of the vehicle in the coordinate system of the vehicle image can be determined according to the coordinates of the four vertexes of the two-dimensional bounding box in the coordinate system of the vehicle image.
The values of py1, px2 and py3 are in the range of (0, 1). Taking the key point of P2 as an example, if px2 is equal to 0, the representative P2 coincides with the lower left vertex of the two-dimensional bounding box of the vehicle, and if px2 is equal to 1, the representative P2 coincides with the lower right vertex of the two-dimensional bounding box of the vehicle. Different from direct prediction of the pixel distances between the three key points and the four vertexes of the two-dimensional bounding box, the normalization result predicted in the scheme disclosed by the application can well cope with the scale change in different scenes, namely, for different vehicles at different distances, the parameter value ranges corresponding to the three key points are kept consistent.
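Given the bounding box corners in the coordinate system of the vehicle image, the conversion from (py1, px2, py3) to pixel coordinates follows directly (a sketch; the box is assumed axis-aligned with (x1, y1) the top-left and (x2, y2) the bottom-right corner, and the image y-axis pointing downward so the lower edge has the larger y value):

```python
def keypoints_from_normalized(box, py1, px2, py3):
    """box: (x1, y1, x2, y2) of the vehicle's two-dimensional bounding box in the image.
    Returns the pixel coordinates of key points P1, P2 and P3."""
    x1, y1, x2, y2 = box
    p1 = (x1, y2 - py1 * (y2 - y1))  # on the left edge, offset from the lower-left vertex
    p2 = (x1 + px2 * (x2 - x1), y2)  # on the lower edge, offset from the lower-left vertex
    p3 = (x2, y2 - py3 * (y2 - y1))  # on the right edge, offset from the lower-right vertex
    return p1, p2, p3

print(keypoints_from_normalized((100, 50, 300, 200), 0.2, 0.6, 0.35))
```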
And step 204, calculating coordinates of three vertexes of the bottom outer frame of the vehicle i in the vehicle body coordinate system according to the two-dimensional coordinates of the three key points on the two-dimensional surrounding frame of the vehicle i in the coordinate system of the vehicle image.
In this embodiment of the application, coordinates of three vertexes of the bottom bounding box corresponding to the three key points in the vehicle body coordinate system may be determined in combination with calibration data of an image capturing device that captures a vehicle image and two-dimensional coordinates of the three key points in the vehicle image coordinate system.
Optionally, the calibration data may be an internal reference matrix and an external reference matrix of the image acquisition device, for any vehicle i in two adjacent vehicles, the parking space identification device may obtain the internal reference matrix and the external reference matrix of the image acquisition device, and calculate coordinates of three vertexes of a bottom external frame of the vehicle i in the vehicle body coordinate system according to two-dimensional coordinates of three key points on the two-dimensional surrounding frame of the vehicle i in the coordinate system of the vehicle image, the internal reference matrix and the external reference matrix.
The coordinates in the vehicle body coordinate system may be two-dimensional or three-dimensional. For example, when the vehicle is on horizontal ground, the Z-axis coordinate (i.e., the coordinate along the axis perpendicular to the horizontal plane) of the bottom external frame of the vehicle in the vehicle body coordinate system is always 0; in this case, when determining the coordinates of the three vertices of the bottom external frame of the vehicle i in the vehicle body coordinate system, only the X-axis and Y-axis coordinates in the horizontal plane need to be considered. When the vehicle is on a slope or uneven ground, the Z-axis coordinate of the bottom external frame of the vehicle in the vehicle body coordinate system may be non-zero; in this case, the Z-axis coordinate needs to be considered in addition to the X-axis and Y-axis coordinates in the horizontal plane.
The internal parameter matrix and the external parameter matrix of the image acquisition equipment can be calibrated in advance and stored in the parking space identification equipment, and when the internal parameter matrix and the external parameter matrix of the image acquisition equipment are acquired, the parking space identification equipment can directly read the internal parameter matrix and the external parameter matrix from the local; or the internal parameter matrix and the external parameter matrix of the image acquisition device can be calibrated in advance and stored in the image acquisition device, and the parking space identification device can request the image acquisition device to acquire the internal parameter matrix and the external parameter matrix.
Refer to fig. 9, which illustrates a camera imaging schematic diagram according to an embodiment of the present application. The hatched surface in the figure represents the plane in which the coordinate system of the vehicle image is located, the x and y coordinate axes are coordinate axes in the coordinate system of the vehicle image, and the coordinates of the identified key point are coordinates (u, v) in the coordinate system of the vehicle image. The key point corresponds to a point in the body coordinate system, which is a point P in fig. 9 (i.e., the above-mentioned point P1, P2, or P3), and whose coordinates in the body coordinate system are (X, Y, Z). After the two-dimensional coordinates (u, v) of the three key points on the bottom external frame are identified through the second deep neural network, the coordinates (X, Y, Z) of the key points in the vehicle body coordinate system can be solved by using an imaging formula of the image acquisition equipment.
Wherein, the imaging formula is as follows:
$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} $$

where $s$ is a scale factor.
In the above imaging formula, M1 and M2 are the internal reference matrix and the external reference matrix of the image acquisition device, respectively, and can be obtained by calibrating the image acquisition device in advance. Since the bottom external frame is the bounding frame of the projection area of the vehicle on the ground, Z = 0 for all key points on the bottom external frame when the vehicle is on horizontal ground. Thus u, v, M1, M2 and Z in the above imaging formula are known quantities, and only s, X and Y are unknown; the imaging formula provides 3 equations, so the three unknowns s, X and Y have a unique solution.
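As a numerical sketch of this solution (assuming M1 is the 3x3 internal reference matrix, M2 the 3x4 external reference matrix, and the key point lies on the ground plane Z = 0; the function name is illustrative):

```python
import numpy as np

def keypoint_to_body_frame(u, v, M1, M2):
    """Solve s*[u, v, 1]^T = M1 @ M2 @ [X, Y, 0, 1]^T for (X, Y):
    the body-frame coordinates of a ground-plane key point seen at pixel (u, v)."""
    A = M1 @ M2                                   # 3x4 projection matrix
    B = A[:, [0, 1, 3]]                           # columns multiplying X, Y and the constant term
    sol = np.linalg.solve(B, np.array([u, v, 1.0]))
    X, Y = sol[0] / sol[2], sol[1] / sol[2]       # third component equals 1/s; normalize it away
    return X, Y
```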
And step 205, determining coordinates of the free parking space between two adjacent vehicles in the vehicle body coordinate system according to the coordinates of the three vertexes of the bottom external frame of the two adjacent vehicles in the vehicle body coordinate system.
The parking space identification equipment can calculate the coordinate of the fourth vertex of each bottom external frame of the two adjacent vehicles in the vehicle body coordinate system according to the coordinate of the three vertexes of each bottom external frame of the two adjacent vehicles in the vehicle body coordinate system; calculating the coordinates of the central points of the bottom external frames of the two adjacent vehicles in the vehicle body coordinate system according to the coordinates of the four vertexes of the bottom external frames of the two adjacent vehicles in the vehicle body coordinate system; determining the type of the parking space according to an included angle between the long edge direction of the parking space and the connecting line of the central points, wherein the long edge direction of the parking space is determined according to the long edge direction of the bottom external frame of at least one of the two adjacent vehicles, the connecting line of the central points is the connecting line between the central points of the bottom external frames of the two adjacent vehicles, and the type of the parking space comprises a vertical parking space, a parallel parking space or an oblique parking space; determining the number n of the parking spaces according to the type of the parking spaces, wherein the number n of the parking spaces is the number of the parking spaces between the two adjacent vehicles; when n is an integer greater than or equal to 2, determining the coordinates of n-1 equally divided points on the central point connecting line in the vehicle body coordinate system, wherein the n-1 equally divided points divide the central point connecting line into n equal parts; and determining the coordinates of the free parking spaces corresponding to the n-1 equally divided points in the vehicle body coordinate system according to the long side direction of the parking spaces and the coordinates of the n-1 equally divided points in the vehicle body coordinate system.
Taking any one of two adjacent vehicles as an example, after obtaining the coordinates of the three vertexes of the bottom external frame of the vehicle i in the vehicle body coordinate system, the parking space identification device may calculate the coordinates of the fourth vertex of the bottom external frame of the vehicle i in the vehicle body coordinate system according to the coordinates of the three vertexes of the bottom external frame of the vehicle i in the vehicle body coordinate system.
Fig. 10 is a schematic view illustrating a vehicle bottom external frame according to an embodiment of the present application. The bottom external frame is a rectangle whose four vertices are distributed as shown in fig. 10, where the coordinates of P1, P2 and P3 are known, the point C is the center point of the bottom external frame, and the coordinate of P4 is unknown. The coordinate of P1 in the vehicle body coordinate system is $(X_1, Y_1, Z_1)$, the coordinate of P2 is $(X_2, Y_2, Z_2)$, the coordinate of P3 is $(X_3, Y_3, Z_3)$, the coordinate of P4 is $(X_4, Y_4, Z_4)$, and the coordinate of the point C is $(X_C, Y_C, Z_C)$.
The center point C coordinate of the rectangle can be calculated by the P1 coordinate and the P3 coordinate:
$$ X_C = \frac{X_1 + X_3}{2}, \quad Y_C = \frac{Y_1 + Y_3}{2}, \quad Z_C = \frac{Z_1 + Z_3}{2} $$
likewise, the center point C coordinates may also be calculated by P2 and P4:
$$ X_C = \frac{X_2 + X_4}{2}, \quad Y_C = \frac{Y_2 + Y_4}{2}, \quad Z_C = \frac{Z_2 + Z_4}{2} $$
by combining the two formulas, the coordinates of the point P4 in the body coordinate system can be calculated based on the following formula.
$$ X_4 = X_1 + X_3 - X_2, \quad Y_4 = Y_1 + Y_3 - Y_2, \quad Z_4 = Z_1 + Z_3 - Z_2 $$
So far, the coordinates of the 4 vertexes of the vehicle bottom outer frame in the image in the vehicle body coordinate system are all successfully calculated.
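A short numpy sketch of these relations (function name assumed): the center follows from the diagonal from P1 to P3, and P4 follows from the rectangle identity P4 = P1 + P3 - P2.

```python
import numpy as np

def complete_bottom_box(P1, P2, P3):
    """P1, P2, P3: the three known vertices of the rectangular bottom external frame,
    as 3-vectors in the vehicle body coordinate system (P1 and P3 are diagonal).
    Returns the missing vertex P4 and the center point C."""
    P1, P2, P3 = (np.asarray(p, dtype=float) for p in (P1, P2, P3))
    C = (P1 + P3) / 2.0          # center of the rectangle from one diagonal
    P4 = 2.0 * C - P2            # i.e. P1 + P3 - P2, from the other diagonal
    return P4, C
```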
The long side direction of a parking space refers to the direction of the two longer sides of the parking space, and the center point of a parking space refers to the geometric center point of the parking space.
A vertical parking space is a parking space whose long side direction is perpendicular to the lane, a parallel parking space is a parking space whose long side direction is parallel to the lane, and an oblique parking space is a parking space whose long side direction forms an angle with the lane; under normal circumstances this angle lies within the interval (30 degrees, 60 degrees) or (120 degrees, 150 degrees).
In practical applications, the distance between the center points of two adjacent parking spaces differs for different parking space types. For example, when two adjacent parking spaces are vertical parking spaces, they share one long side, so the distance between their center points is the width of one parking space; when two adjacent parking spaces are parallel parking spaces, they share one short side, so the distance between their center points is the length of one parking space; and when two adjacent parking spaces are oblique parking spaces, the distance between their center points lies between the width and the length of one parking space, the exact value depending on the angle between the oblique parking space and the lane. Therefore, after the coordinates of the four vertices of the bottom external frames of the two adjacent vehicles in the vehicle body coordinate system are determined, it is necessary to determine, according to the parking space type, how many equal parts the line connecting the center points of the two bottom external frames can be divided into. When the number of resulting bisection points is greater than or equal to 1 (i.e., n is greater than or equal to 2), it can be determined that a free parking space exists between the two adjacent vehicles, and the position information of the free parking space can then be determined from the bisection points.
Optionally, determining the parking space type according to the included angle between the long side direction of the parking space and the center point connecting line includes: when the included angle is 90 degrees, determining that the parking space type is a vertical parking space; when the included angle is 0 degrees, determining that the parking space type is a parallel parking space; and when the included angle is within the interval (30 degrees, 60 degrees) or (120 degrees, 150 degrees), determining that the parking space type is an oblique parking space.
Optionally, determining the number n of parking spaces according to the parking space type includes: when the parking space type is a vertical parking space, n is the maximum integer less than or equal to d/w; when the parking space type is a parallel parking space, n is the maximum integer less than or equal to d/h; and when the parking space type is an oblique parking space, n is the maximum integer less than or equal to d/(w/sin α); where d is the length of the center point connecting line, w is the preset parking space width, h is the preset parking space length, and α is the included angle.
The values of w and h can be preset in the parking space recognition device. Specifically, w and h can be set by a developer or an administrator according to the width and length of a standard parking space, that is, w is the preset standard parking space width and h is the preset standard parking space length.
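A sketch of the classification and counting rules described above; the angular tolerance used to decide that the included angle is "close to" 0 or 90 degrees is an assumption of this sketch, not a value given in the text.

```python
import math

def classify_and_count(angle_deg, d, w, h, tol=10.0):
    """angle_deg: included angle between the space's long side direction and the line
    joining the two bottom external frame center points; d: length of that line;
    w, h: preset parking space width and length.
    Returns (space_type, n), where n - 1 is the number of free spaces when n >= 2."""
    if abs(angle_deg - 90.0) <= tol:                      # vertical space: bisection interval w
        return "vertical", int(d // w)
    if angle_deg <= tol:                                  # parallel space: bisection interval h
        return "parallel", int(d // h)
    if 30.0 < angle_deg < 60.0 or 120.0 < angle_deg < 150.0:
        t = w / math.sin(math.radians(angle_deg))         # oblique space: interval w / sin(alpha)
        return "oblique", int(d // t)
    return None, 0                                        # parking space type not recognized
```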
Fig. 11 is a flowchart of identifying a free parking space according to an embodiment of the present application. As shown in fig. 11, the free parking space recognition process is performed based on the input recognition results of the vehicle bottom external frames, and is mainly divided into two steps: identifying the type of the free parking space and identifying its position. The specific implementation is as follows:
step 1 (identification of type of idle parking space): determining the parking place category according to the included angle between the connecting line of the central points of the bottom external frames of two adjacent vehicles and the long edge direction of the bottom external frame;
step 2 (identification of the position of the idle parking space): and calculating the coordinates of the four vertexes of the free parking space by utilizing the coordinates of the four vertexes of the bottom external frames of the two adjacent vehicles in the vehicle body coordinate system.
The embodiments of the present application are described with respect to the above two main steps, respectively.
Step 1, identifying the type of the idle parking space.
The type of the idle parking space is mainly judged according to the included angle between the long edge direction of the external frame of the vehicle and the connecting line of the central points of the external frames at the bottoms of the two adjacent vehicles. Specifically, please refer to fig. 12, which shows a flowchart of identifying the type of the vacant parking space according to the embodiment of the present application. This is explained in detail below with reference to the flow chart shown in fig. 12:
(a) Calculating key information about the spatial positions of the adjacent vehicles, namely the edge directions and the center point coordinates of the bottom external frames of the two adjacent vehicles, using the result of the vehicle bottom external frame detection (i.e., the result obtained in step 204). Specifically, the bottom external frame of a vehicle is a rectangular frame, represented and described by the coordinates of its four vertices. For example, in fig. 10, P1, P2, P3 and P4 are the 4 vertices of the external frame; the edge directions are divided into a long side direction and a short side direction, each obtained by averaging the two corresponding edge directions. Taking fig. 10 as an example, the long side direction may be the average of the direction of line P1P4 (which may be represented by the angle between the line and the lane) and the direction of line P2P3, and the short side direction may be the average of the direction of line P1P2 and the direction of line P3P4.
(b) Calculating the weighted average of the long side directions of the bottom external frames of the two vehicles, using the recognition confidences of the bottom external frames of the two adjacent vehicles as weights, to obtain the long side direction of the possible free parking space.
The recognition confidence of the bottom bounding box of the vehicle may be a probability that a recognition result obtained by recognizing the image in the two-dimensional bounding box of the vehicle through the keypoint recognition model (i.e., a first percentage, a second percentage, and a third percentage corresponding to three keypoints on the two-dimensional bounding box of the vehicle, respectively) is an accurate result.
Specifically, in the above step 203 of the embodiment of the present application, when the second deep neural network identifies and outputs the values of py1, px2, and py3 corresponding to the three key points on the two-dimensional bounding box of the vehicle, the probabilities that the three values are accurate values, that is, the above identification confidence degrees, are also output at the same time. In the step (b), when the side line direction of the possible vacant parking space is calculated, the recognition confidence degrees of the numerical values of py1, px2 and py3 corresponding to the three key points on the bottom outer frames of the two adjacent vehicles are respectively used as weights, and the long side directions of the bottom outer frames of the two adjacent vehicles are weighted and averaged to obtain the long side directions of the possible vacant parking spaces.
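A minimal sketch of this confidence-weighted averaging, assuming each vehicle's long side direction is expressed as an angle (in degrees) and its recognition confidence as a scalar weight:

```python
def fused_long_side_direction(theta_a, conf_a, theta_b, conf_b):
    """Weighted average of the long side directions of the two adjacent vehicles'
    bottom external frames, using the recognition confidences as weights; the result
    is used as the long side direction of the possible free parking space."""
    total = conf_a + conf_b
    if total == 0:
        return (theta_a + theta_b) / 2.0   # fall back to a plain average
    return (conf_a * theta_a + conf_b * theta_b) / total
```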
In another possible implementation manner, the parking space recognition device may also determine the long side direction of the bottom external frames of two adjacent vehicles as the long side direction of the parking space.
(c) Calculating the included angle α between the long side direction of the possible free parking space and the line connecting the center points of the bottom external frames of the two adjacent vehicles, and determining the parking space type of the possible free parking space according to the size of the included angle α. Please refer to fig. 13, which illustrates a schematic diagram of parking space types according to an embodiment of the present application.
As shown in fig. 13, when the included angle α is close to 90 °, that is, the difference between the included angle α and 90 ° is within a predetermined range, it may be determined that the type of the vacant parking space between two adjacent vehicles is a vertical parking space.
When the included angle alpha is close to 0 degrees, namely the difference value between the included angle alpha and the included angle 0 degrees is within a preset range, the type of the vacant parking space between two adjacent vehicles can be determined to be the parallel parking space.
When the included angle alpha is within the interval (30 degrees, 60 degrees) or (120 degrees, 150 degrees), the type of the free parking space between two adjacent vehicles can be determined to be an oblique parking space.
Optionally, if the included angle α does not belong to any of the above situations, it may be considered that the operation for identifying the parking space type is failed, and the parking space type of the vacant parking space that may exist is output as empty.
And 2, identifying the position of the idle parking space.
Please refer to fig. 14, which shows a flowchart of identifying a vacant parking space position according to an embodiment of the present application, and as shown in fig. 14, the detailed steps of the vacant parking space position identification are explained as follows:
(a) calculating the equal division interval t of the central point connecting line according to the parking place types identified in the step 1;
where, for a vertical parking space, t = w; for a parallel parking space, t = h; and for an oblique parking space, t = w/sin α.
Optionally, if the parking space type is identified as empty, it may be determined that no empty parking space exists between the bottom external frames of the two adjacent vehicles.
(b) Dividing the length d of the line connecting the center points of the bottom external frames of the two adjacent vehicles by the bisection interval t to obtain n, where n is the maximum integer less than or equal to d/t. Whether a free parking space exists between the two adjacent vehicles is judged according to the following criteria:
when n is less than or equal to 1, no idle parking space exists between two adjacent vehicles;
when n is larger than 1, n-1 idle parking spaces exist between two adjacent vehicles, and the corresponding equally divided point is the central point of the idle parking spaces.
Please refer to fig. 15 and fig. 16, where fig. 15 shows a schematic diagram in which no free parking space exists between two adjacent vehicles, and fig. 16 shows a schematic diagram in which free parking spaces exist between two adjacent vehicles. In fig. 15, for a vertical parking space, n = 1 (the maximum integer less than or equal to d/w), so it is determined that no free parking space exists; in fig. 16, for a vertical parking space, n = 3, so it is determined that there are two free parking spaces.
(c) For the situation in which a free parking space exists, please refer to fig. 17, which shows a schematic diagram of determining the vertex coordinates according to an embodiment of the present application. As shown in fig. 17, starting from the center point P0 of the free parking space, the coordinates of the four vertices P1, P2, P3 and P4 of the free parking space are obtained by offsetting distances h/2 and w/2 along the long side direction and the short side direction of the free parking space, respectively.
Optionally, when determining the coordinates of the free parking spaces corresponding to the n-1 equally divided points in the vehicle body coordinate system according to the parking space long side direction and the coordinates of the n-1 equally divided points in the vehicle body coordinate system, the parking space identification device may move h/2 in the forward and reverse directions corresponding to the parking space long side direction from the coordinates of the equally divided point i in the vehicle body coordinate system for any equally divided point i in the n-1 equally divided points, and obtain the coordinates of the middle points of the two short sides of the free parking spaces corresponding to the equally divided point i in the vehicle body coordinate system; starting from the coordinates of the middle points of the two short sides of the free parking spaces corresponding to the equal division point i in the vehicle body coordinate system, respectively moving by w/2 along the positive and negative directions corresponding to the short side directions of the free parking spaces corresponding to the equal division point i, and obtaining the coordinates of the four vertexes of the free parking spaces corresponding to the equal division point i in the vehicle body coordinate system, wherein the short side directions of the free parking spaces corresponding to the equal division point i are perpendicular to the long side directions of the parking spaces. Or the parking space identification device can also move w/2 respectively in the front and back directions corresponding to the short side direction of the free parking space corresponding to the bisector point i from the coordinate of the bisector point i in the vehicle body coordinate system aiming at any bisector point i in n-1 bisectors, and obtain the coordinate of the middle point of the two long sides of the free parking space corresponding to the bisector point i in the vehicle body coordinate system; and starting from the coordinates of the middle points of the two long sides of the idle parking spaces corresponding to the equal division point i in the vehicle body coordinate system, respectively moving the middle points by h/2 along the positive and negative directions corresponding to the long side directions of the parking spaces, and obtaining the coordinates of the four vertexes of the idle parking spaces corresponding to the equal division point i in the vehicle body coordinate system.
Taking fig. 17 as an example, in practical application, the parking space recognition device may start from the central point P0, first shift h/2 along the up-down direction (i.e. the positive and negative directions of the long side direction of the parking space) respectively, reach the midpoint of the two short sides of the vacant parking space, shift w/2 along the left-right direction (i.e. the positive and negative directions of the short side direction of the parking space) respectively from the midpoint of the two short sides, reach the four vertexes of the vacant parking space, and obtain the coordinates of the four vertexes in the vehicle body coordinate system.
Or, the parking space identification device may also start from the central point P0, first shift w/2 along the left-right direction, respectively, to reach the midpoint of the two long edges of the vacant parking space, then shift h/2 along the up-down direction, respectively, from the midpoint of the two long edges, to reach the four vertexes of the vacant parking space, and obtain the coordinates of the four vertexes in the vehicle body coordinate system.
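A sketch of this vertex computation, assuming the long side direction of the parking space is given as a vector in the ground plane of the vehicle body coordinate system:

```python
import numpy as np

def free_space_vertices(center, long_dir, w, h):
    """center: (x, y) of a bisection point (the free space's center) in the body frame;
    long_dir: vector along the space's long side; w, h: preset space width and length.
    Returns the four vertices obtained by offsetting h/2 along the long side and
    w/2 along the perpendicular short side."""
    c = np.asarray(center, dtype=float)
    u = np.asarray(long_dir, dtype=float)
    u = u / np.linalg.norm(u)                # unit vector of the long side direction
    v = np.array([-u[1], u[0]])              # short side direction, perpendicular to it
    return [c + 0.5 * h * u + 0.5 * w * v,
            c + 0.5 * h * u - 0.5 * w * v,
            c - 0.5 * h * u - 0.5 * w * v,
            c - 0.5 * h * u + 0.5 * w * v]
```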
(d) Finally, outputting the coordinate recognition results of the four vertices and sending them to the vehicle control module, so that the vehicle control module can control the motion of the vehicle, for example for automatic parking. The output format of the coordinate recognition results of the four vertices may be an 8-dimensional column vector, where the elements of the vector are the X-axis and Y-axis coordinates of the four vertices (the Z-axis coordinates of the four vertices are all 0).
To sum up, in the method shown in the embodiment of the present application, the two-dimensional bounding boxes of two adjacent vehicles are identified in a two-dimensional vehicle image, the coordinates of three vertices of the bottom external frame of each of the two adjacent vehicles in the vehicle body coordinate system are determined according to the two-dimensional bounding boxes, and the free parking space between the two adjacent vehicles is determined according to these coordinates. It is therefore only necessary that two adjacent vehicles whose two-dimensional bounding boxes can be identified appear in the vehicle image in order to determine a free parking space that may exist between them; there is no need to identify parking space lines on the ground, so free parking spaces can be determined even when no parking space lines are marked on the ground, and the identification accuracy of free parking spaces is higher. In addition, with this method, identification of free parking spaces can be realized as long as a vehicle image containing at least two adjacent vehicles is acquired, so free parking spaces between distant vehicles can also be identified; the identification distance for free parking spaces is therefore long and the identification effect is better.
In the embodiment shown in fig. 2, steps 203 to 205 are only described by taking the at least one characteristic location point on the bottom borders of two adjacent vehicles as three vertices of the bottom borders of two adjacent vehicles that are visible in the vehicle image, and in practical applications, the at least one characteristic location point on the bottom borders of two adjacent vehicles is not limited to the three vertices of the bottom borders that are visible in the vehicle image.
For example, the at least one characteristic position point may include three vertexes of a bottom outline of the vehicle visible in the vehicle image and at least one other characteristic position point than the three vertexes, so as to improve the identification accuracy of the position information of the vacant parking space. For example, the at least one other feature position point other than the three vertices may include a center point of a license plate of the vehicle, a middle point of an edge of the bottom outline of the vehicle, a predetermined-ratio bisector of the edge of the bottom outline of the vehicle, or four corner points of the vehicle body, which are easily distinguishable position points.
Alternatively, the at least one characteristic position point may include only one or two position points. For example, the at least one characteristic position point may include one or two vertices of the bottom external frame of the vehicle visible in the vehicle image, or it may include one or two easily identifiable position points such as the license plate center point of the vehicle, the middle point of an edge of the bottom external frame, a predetermined-proportion division point of an edge of the bottom external frame, or one of the four corner points of the vehicle body. With one or two position points, whether a parking space exists between two adjacent vehicles can be determined roughly. For example, when the at least one characteristic position point includes only one vertex of the bottom external frame visible in the vehicle image, the parking space recognition device may calculate the distance d1, in the vehicle body coordinate system, between the visible bottom external frame vertices of the two adjacent vehicles, and determine whether a free parking space exists between the two adjacent vehicles according to the relationship between the distance d1 and a preset distance threshold d0; for example, when d1 is greater than or equal to d0, the parking space recognition device may determine that a free parking space exists between the two adjacent vehicles, where the value of d0 may be set to 3h. Alternatively, when the at least one characteristic position point includes two vertices of the bottom external frame visible in the vehicle image, the parking space recognition device may calculate the distance d2, in the vehicle body coordinate system, between the midpoints of the two visible bottom external frame vertices of each of the two adjacent vehicles, and determine whether a free parking space exists between the two adjacent vehicles according to the relationship between the distance d2 and the preset distance threshold d0.
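A sketch of the coarse check described above (Euclidean distance in the ground plane; d0 = 3h follows the example value given above):

```python
import math

def rough_gap_check(point_a, point_b, h, factor=3.0):
    """point_a, point_b: one characteristic position point per vehicle (e.g. a visible
    bottom external frame vertex, or the midpoint of two such vertices), as (x, y)
    coordinates in the vehicle body coordinate system.
    Returns True if the gap suggests that a free parking space may exist."""
    d = math.dist(point_a, point_b)
    return d >= factor * h                 # compare with the threshold d0 = factor * h
```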
In another possible implementation manner, the parking space recognition device may also recognize the parking space according to the spatial sensing data corresponding to the space including the vehicle, that is, the parking space recognition device may determine, according to the coordinates of each vehicle in the vehicle body coordinate system in the spatial sensing data, the coordinates of the bottom outer frame of each vehicle on the ground in the vehicle body coordinate system, and determine the free parking space between two adjacent vehicles according to the coordinates of the bottom outer frames of two adjacent vehicles on the ground in the vehicle body coordinate system.
Please refer to fig. 18, which illustrates a flowchart of a method for determining an empty space according to an exemplary embodiment of the present application. The method may be used in the space determining device 120 of the system shown in fig. 1. As shown in fig. 18, the method for determining the free parking space may include:
step 1801, spatial perception data including at least two vehicles is obtained.
In this embodiment of the application, the spatial perception data may be radar point cloud data or laser point cloud data. After receiving the radar point cloud data or the laser point cloud data sent by the data acquisition equipment, the parking space identification equipment can judge whether at least two vehicles exist in the corresponding space according to the radar point cloud data or the laser point cloud data, and if so, the space perception data is determined to be acquired.
For example, the spatial perception data such as radar point cloud data or laser point cloud data generally consists of spatial coordinate points obtained by detection of radar detection equipment or laser detection equipment, the spatial coordinate points form a contour of an object in a space, and the parking space identification equipment can determine whether the corresponding object is a vehicle according to the contour of each object formed by the spatial perception data.
Step 1802, obtaining position information of at least one characteristic position point on respective bottom outer frames of two adjacent vehicles of the at least two vehicles according to the spatial perception data.
In the embodiment of the present application, after the parking space identification device obtains the spatial perception data and determines the contour of each vehicle in the space corresponding to the spatial perception data, it determines the orthographic projection of each vehicle on the ground according to the contour of that vehicle, and determines the position information of at least one characteristic position point on the bottom external frame of each vehicle according to that orthographic projection, where the bottom external frame of a vehicle is a rectangular frame containing the orthographic projection of the contour of that vehicle on the ground.
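One possible way to obtain such a rectangular bottom external frame from a vehicle's points (an illustrative assumption, not a method prescribed by the text) is to project the points onto the ground plane and take their minimum-area bounding rectangle, for example with OpenCV:

```python
import numpy as np
import cv2

def bottom_box_from_points(points_xyz):
    """points_xyz: Nx3 array of one vehicle's points in the vehicle body coordinate system.
    Projects them onto the ground plane (drops Z) and returns the four corners of the
    minimum-area bounding rectangle as an array of (x, y) coordinates."""
    ground = np.asarray(points_xyz, dtype=np.float32)[:, :2]   # orthographic projection
    rect = cv2.minAreaRect(ground)                             # ((cx, cy), (w, h), angle)
    return cv2.boxPoints(rect)                                 # 4x2 corner coordinates
```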
And 1803, determining an empty parking space between the two adjacent vehicles according to the position information of the at least one characteristic position point on the bottom external frame of the two adjacent vehicles.
The execution process of step 1803 is similar to that of step 205 in the embodiment shown in fig. 2, and is not described here again.
In the embodiment of the present application, the data acquisition device can acquire spatial perception data around the vehicle and send it to the parking space determination device. After receiving the spatial perception data acquired by the data acquisition device, the parking space determination device identifies the position information of at least one characteristic position point on the bottom external frame of each of two adjacent vehicles in the spatial perception data, and determines the free parking space between the two vehicles according to this position information. As long as two adjacent vehicles whose bottom external frames can be identified exist in the spatial perception data, the free parking space between the two vehicles can be determined, which improves the identification effect for free parking spaces.
Fig. 19 is a schematic structural diagram of a parking space determining device 1900 according to an exemplary embodiment of the present application, where the parking space determining device 1900 may be implemented as the parking space determining device 120 in the system shown in fig. 1. As shown in fig. 19, the space determining apparatus 1900 may include: a processor 1901 and a memory 1903.
The processor 1901 may include one or more processing units, which may be a Central Processing Unit (CPU), a Network Processor (NP), or the like.
The memory 1903 may be used to store programmable instructions that may be called by the processor 1901. In addition, various kinds of service data or user data may be stored in the memory 1903. The programmable instructions may include an acquisition module, a bounding box determination module, a position determination module and a parking space determination module;
wherein the acquisition module is invoked by the processor 1901 to perform the functions associated with acquiring vehicle data information (i.e., vehicle images or spatial perception data) as in the embodiments shown in fig. 2 or fig. 18.
The bounding box determination module is invoked by the processor 1901 to perform the functions associated with determining a two-dimensional bounding box for a vehicle as in the embodiment shown in FIG. 2.
The position determination module is invoked by the processor 1901 to perform the functions associated with determining position information for at least one feature location point on the respective bottom peripheral frames of two adjacent vehicles as in the embodiment shown in fig. 2.
The space determination module is invoked by the processor 1901 to perform the function related to determining a free space between two adjacent vehicles in the embodiment shown in fig. 2 or fig. 18.
Optionally, the parking space determining device 1900 may further include a communication interface 1904, and the communication interface 1904 may include a network interface. The network interface is used for connecting the image acquisition device, and optionally, the network interface can also be connected with other devices. In particular, the network interface may comprise a wired network interface, such as an Ethernet interface or a fiber optic interface, or a wireless network interface, such as a wireless local area network interface or a cellular mobile network interface. The parking space determination device 1900 communicates with the image capture device or other devices via the communication interface 1904.
Alternatively, the processor 1901 may be connected to the memory 1903 and the communication interface 1904 by a bus.
Optionally, the parking space determining device 1900 may further include an output device 1905 and an input device 1907. An output device 1905 and an input device 1907 are coupled to the processor 1901. The output device 1905 may be a display for displaying information, a power amplifier device for playing sound, etc., and the output device 1905 may further include an output controller for providing output to the display, the power amplifier device, etc. The input device 1907 may be a device such as a mouse, keyboard, electronic stylus, or touch panel for user input of information, and the input device 1907 may also include an output controller for receiving and processing input from the mouse, keyboard, electronic stylus, or touch panel.
Fig. 20 is a schematic structural diagram of a model training apparatus 2000 according to an exemplary embodiment of the present application. As shown in fig. 20, the model training apparatus 2000 may include: a processor 2001 and a memory 2003.
The processor 2001 may include one or more processing units, which may be a Central Processing Unit (CPU) or a Network Processor (NP), among others.
The memory 2003 may be used to store programmable instructions that may be called by the processor 2001. In addition, various types of service data or user data may be stored in the memory 2003. The programmable instructions may include an acquisition module and a training module;
wherein the acquisition module is invoked by the processor 2001 to perform the functions described in relation to acquiring the second sample image, as illustrated in figure 6.
The training module is invoked by the processor 2001 to perform the functions related to training the keypoint recognition model as shown in fig. 6.
Optionally, the model training apparatus 2000 may further include a communication interface 2004, and the communication interface 2004 may include a network interface. The network interface may connect other devices. In particular, the network interface may comprise a wired network interface, such as an ethernet interface or a fiber optic interface, or the network interface may comprise a wireless network interface, such as a wireless local area network interface or a cellular mobile network interface.
Alternatively, the processor 2001 may be connected to the memory 2003 and the communication interface 2004 by a bus.
Optionally, the model training device 2000 may further comprise an output device 2005 and an input device 2007. An output device 2005 and an input device 2007 are connected to the processor 2001. The output device 2005 may be a display for displaying information, a power amplifier device for playing sound, and the like, and the output device 2005 may further include an output controller for providing output to the display screen, the power amplifier device, and the like. Input device 2007 may be a device such as a mouse, keyboard, electronic stylus, or touch panel for user input information, and input device 2007 may also include an output controller for receiving and processing input from a device such as a mouse, keyboard, electronic stylus, or touch panel.
Fig. 21 is a block diagram of an apparatus for determining an empty space according to an exemplary embodiment of the present application, where the apparatus for determining an empty space may be implemented as part or all of a space determining device, which may be the space determining device 120 in the embodiment shown in fig. 1. The device for determining the free parking space may include: an acquisition unit 2101, an enclosure determination unit 2102, a position determination unit 2103, and a parking space determination unit 2104;
the obtaining unit 2101 may be used to implement the function of obtaining vehicle data information (i.e. vehicle image or spatial perception data) in the method for determining a vacant parking space provided in the embodiment shown in fig. 2 or fig. 18.
The bounding box determining unit 2102 may be used to implement the function of determining a two-dimensional bounding box of a vehicle in the method for determining a free parking space provided in the embodiment shown in fig. 2.
The position determining unit 2103 may be used to implement the function of determining the position information of at least one characteristic position point on the bottom peripheral frame of each of two adjacent vehicles in the method for determining empty parking spaces provided in the embodiment shown in fig. 2.
The space determining unit 2104 may be used to implement the function of determining a free space between two adjacent vehicles in the method for determining a free space provided in the embodiment shown in fig. 2 or fig. 18.
Fig. 22 is a block diagram of a model training apparatus provided in an exemplary embodiment of the present application, which may be implemented as part or all of a model training device through a combination of hardware circuits or software hardware. The model training apparatus may include: an acquisition unit 2201 and a training unit 2202;
the acquiring unit 2201 may be used to implement the function of acquiring the second sample image in the model training method shown in fig. 6.
The training unit 2202 can be used to implement a function of training a keypoint identification model in the model training method shown in fig. 6.
It should be noted that the device for determining a free parking space provided by the above embodiment, when recognizing a free parking space, and the model training device, when training the model, are illustrated only by the division into the above functional units. In practical applications, the above functions may be allocated to different functional units as needed, that is, the internal structure of the device may be divided into different functional units to complete all or part of the functions described above. In addition, the device for determining a free parking space provided by the above embodiment and the embodiments of the method for determining a free parking space belong to the same concept, and the embodiments of the model training apparatus and of the model training method belong to the same concept; the specific implementation processes are detailed in the method embodiments and are not repeated here.
Embodiments of the present application also provide a computer program, which may include at least one programmable instruction, and when the computer program runs on a computer, the computer may execute the at least one programmable instruction to implement all or part of the steps of the method shown in fig. 2, fig. 6 or fig. 18.
The above example numbers of the present application are for description only and do not represent the merits of the examples.
It will be understood by those skilled in the art that all or part of the steps executed by the processor to implement the above embodiments may be implemented by hardware, or may be implemented by programmable instructions stored in a computer-readable storage medium, that is, the processor invokes the programmable instructions stored in the computer-readable storage medium, so that the processor executes the method for determining the free space as in the above embodiments shown in fig. 2 or fig. 18, or the processor executes the method for training the model as shown in fig. 6, where the above mentioned computer-readable storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description is only one specific embodiment of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution that a person skilled in the art can readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (30)

1. A method for determining an empty parking space, the method comprising:
acquiring a vehicle image containing at least two vehicles;
determining respective two-dimensional surrounding frames of two adjacent vehicles in the at least two vehicles according to the vehicle images, wherein the two-dimensional surrounding frames are circumscribed rectangular frames of the corresponding vehicles in the vehicle images;
determining position information of at least one characteristic position point on a bottom external frame of each two adjacent vehicles according to the two-dimensional surrounding frames of each two adjacent vehicles, wherein the bottom external frame is a circumscribed rectangular frame of an orthographic projection of the corresponding vehicle on the ground, and the characteristic position point is used for indicating a point on a preset position of the corresponding bottom external frame;
and determining the free parking space between the two adjacent vehicles according to the position information of at least one characteristic position point on the bottom external frame of the two adjacent vehicles.
2. The method of claim 1, wherein the at least one feature location point includes three vertices of a corresponding bottom bounding box, the three vertices being indicative of three vertices of a bottom bounding box visible in the vehicle image, the location information being indicative of coordinates in a body coordinate system;
wherein the determining the position information of at least one characteristic position point on the bottom external frame of each of the two adjacent vehicles according to the two-dimensional surrounding frames of each of the two adjacent vehicles comprises:
determining coordinates of three vertexes of the bottom external frame of each two adjacent vehicles in the vehicle body coordinate system according to the two-dimensional surrounding frames of each two adjacent vehicles;
the determining the free parking space between the two adjacent vehicles according to the position information of at least one characteristic position point on the bottom external frame of the two adjacent vehicles comprises:
and determining the coordinates of the free parking space between the two adjacent vehicles in the vehicle body coordinate system according to the coordinates of the three vertexes of the bottom external frames of the two adjacent vehicles in the vehicle body coordinate system.
3. The method of claim 2, wherein the determining coordinates of three vertices of the bottom bounding box of each of the two adjacent vehicles in the body coordinate system according to the two-dimensional bounding box of each of the two adjacent vehicles comprises:
determining relative positions of three key points on a two-dimensional bounding box of the vehicle i through a key point identification model aiming at any vehicle i in the two adjacent vehicles, wherein the three key points are three feature points corresponding to three vertexes of a bottom external frame of the vehicle i; the key point identification model is obtained in advance according to sample image training, and the sample image is an image with three key points marked on a two-dimensional surrounding frame in advance;
calculating two-dimensional coordinates of the three key points on the two-dimensional surrounding frame of the vehicle i in a coordinate system of the vehicle image according to the relative positions of the three key points on the two-dimensional surrounding frame of the vehicle i;
and calculating the coordinates of three vertexes of the bottom external frame of the vehicle i in the vehicle body coordinate system according to the two-dimensional coordinates of the three key points on the two-dimensional surrounding frame of the vehicle i in the coordinate system of the vehicle image.
4. The method of claim 3, wherein the relative positions of the three keypoints on the two-dimensional bounding box of vehicle i comprises: a first percentage between an offset amount of a first key point relative to a lower left vertex of the two-dimensional bounding box of the vehicle i and a left linear length of the two-dimensional bounding box of the vehicle i, a second percentage between an offset amount of a second key point relative to the lower left vertex of the two-dimensional bounding box of the vehicle i and a lower linear length of the two-dimensional bounding box of the vehicle i, and a third percentage between an offset amount of a third key point relative to the lower right vertex of the two-dimensional bounding box of the vehicle i and a right linear length of the two-dimensional bounding box of the vehicle i, wherein the first key point is a feature point on a left edge line of the two-dimensional bounding box of the vehicle i, the second key point is a feature point on a lower edge line of the two-dimensional bounding box of the vehicle i, and the third key point is a feature point on a right edge line of the two-dimensional bounding box of the vehicle i;
wherein the calculating the two-dimensional coordinates of the three key points on the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image according to the relative positions of the three key points on the two-dimensional bounding box of the vehicle i comprises:
calculating the two-dimensional coordinates of each of the first keypoint, the second keypoint, and the third keypoint in the coordinate system of the vehicle image according to the first percentage, the second percentage, the third percentage, and the coordinates of the four vertices of the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image.
5. The method of claim 3, wherein the calculating coordinates of the three vertices of the bottom bounding box of the vehicle i in the body coordinate system according to the two-dimensional coordinates of the three key points on the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image comprises:
acquiring an internal reference matrix and an external reference matrix of image acquisition equipment, wherein the image acquisition equipment is equipment for acquiring the vehicle image;
and calculating the coordinates of three vertexes of the bottom outer frame of the vehicle i in the vehicle body coordinate system according to the two-dimensional coordinates of the three key points on the two-dimensional surrounding frame of the vehicle i in the coordinate system of the vehicle image, the inner reference matrix and the outer reference matrix.
6. The method according to claim 4, wherein the calculating coordinates of three vertices of the bottom bounding box of the vehicle i in the body coordinate system according to two-dimensional coordinates of three key points on the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image comprises:
acquiring an internal reference matrix and an external reference matrix of image acquisition equipment, wherein the image acquisition equipment is equipment for acquiring the vehicle image;
and calculating the coordinates of three vertexes of the bottom outer frame of the vehicle i in the vehicle body coordinate system according to the two-dimensional coordinates of the three key points on the two-dimensional surrounding frame of the vehicle i in the coordinate system of the vehicle image, the inner reference matrix and the outer reference matrix.
7. The method according to any one of claims 2 to 6, wherein the determining coordinates of the free space between the two adjacent vehicles in the body coordinate system according to the coordinates of the three vertexes of the bottom peripheral frames of the two adjacent vehicles in the body coordinate system comprises:
calculating the coordinates of the fourth vertexes of the respective bottom external frames of the two adjacent vehicles in the vehicle body coordinate system according to the coordinates of the three vertexes of the respective bottom external frames of the two adjacent vehicles in the vehicle body coordinate system;
calculating coordinates of the central points of the bottom external frames of the two adjacent vehicles in the vehicle body coordinate system according to the coordinates of the four vertexes of the bottom external frames of the two adjacent vehicles in the vehicle body coordinate system;
determining the type of the parking space according to an included angle between the long edge direction of the parking space and the connecting line of the central points, wherein the long edge direction of the parking space is determined according to the long edge direction of the bottom external frame of at least one of the two adjacent vehicles, the connecting line of the central points is the connecting line between the central points of the bottom external frames of the two adjacent vehicles, and the type of the parking space comprises a vertical parking space, a parallel parking space or an oblique parking space;
determining the number n of parking spaces according to the parking space type, wherein the number n of the parking spaces is the number of the parking spaces between the two adjacent vehicles;
when n is an integer greater than or equal to 2, determining coordinates of n-1 equally-divided points on the central point connecting line in the vehicle body coordinate system, wherein the n-1 equally-divided points divide the central point connecting line into n equal parts;
and determining the coordinates of the free parking spaces corresponding to the n-1 equally divided points in the vehicle body coordinate system according to the long side direction of the parking spaces and the coordinates of the n-1 equally divided points in the vehicle body coordinate system.
8. The method according to claim 7, wherein the determining coordinates of the free parking spaces corresponding to the n-1 equally divided points in the vehicle body coordinate system according to the parking space long side direction and the coordinates of the n-1 equally divided points in the vehicle body coordinate system comprises:
for any equally divided point i of the n-1 equally divided points, moving by h/2 from the coordinates of the equally divided point i in the vehicle body coordinate system along the positive and negative directions corresponding to the parking space long side direction, to obtain the coordinates of the midpoints of the two short sides of the free parking space corresponding to the equally divided point i in the vehicle body coordinate system; and moving by w/2 from the coordinates of the midpoints of the two short sides of the free parking space corresponding to the equally divided point i in the vehicle body coordinate system along the positive and negative directions corresponding to the short side direction of the free parking space corresponding to the equally divided point i, to obtain the coordinates of the four vertexes of the free parking space corresponding to the equally divided point i in the vehicle body coordinate system; wherein h is a preset parking space length, w is a preset parking space width, and the short side direction of the free parking space corresponding to the equally divided point i is perpendicular to the parking space long side direction;
or,
for any equally divided point i of the n-1 equally divided points, moving by w/2 from the coordinates of the equally divided point i in the vehicle body coordinate system along the positive and negative directions corresponding to the short side direction of the free parking space corresponding to the equally divided point i, to obtain the coordinates of the midpoints of the two long sides of the free parking space corresponding to the equally divided point i in the vehicle body coordinate system; and moving by h/2 from the coordinates of the midpoints of the two long sides of the free parking space corresponding to the equally divided point i in the vehicle body coordinate system along the positive and negative directions corresponding to the parking space long side direction, to obtain the coordinates of the four vertexes of the free parking space corresponding to the equally divided point i in the vehicle body coordinate system; wherein h is a preset parking space length, w is a preset parking space width, and the short side direction of the free parking space corresponding to the equally divided point i is perpendicular to the parking space long side direction.
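As an illustration of the first alternative above, a minimal sketch (assumed names, not the patented implementation) that builds the four vertexes of one free parking space from an equally divided point, the parking space long side direction and the preset dimensions h and w:

```python
import numpy as np

def slot_corners(point, long_dir, h, w):
    # point: an equally divided point in the vehicle body coordinate system;
    # long_dir: parking space long side direction; h, w: preset length/width.
    u = long_dir / np.linalg.norm(long_dir)     # unit long side direction
    v = np.array([-u[1], u[0]])                 # perpendicular short side direction
    m1, m2 = point + u * h / 2.0, point - u * h / 2.0  # midpoints of the two short sides
    # Each short-side midpoint yields two vertexes, w/2 to either side.
    return np.array([m1 + v * w / 2.0, m1 - v * w / 2.0,
                     m2 - v * w / 2.0, m2 + v * w / 2.0])

# Example: a slot centred at (3.0, 1.5) whose long side points along +y.
corners = slot_corners(np.array([3.0, 1.5]), np.array([0.0, 1.0]), 5.3, 2.5)
```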
9. The method of claim 7, wherein determining the type of the parking space according to an included angle between a long side direction of the parking space and the connection line of the central points comprises:
when the included angle is 90 degrees, determining that the parking space type is a vertical parking space;
when the included angle is 0 degree, determining that the parking space type is a parallel parking space;
and when the included angle is in the interval of (30 degrees, 60 degrees) or (120 degrees, 150 degrees), determining that the parking space type is an oblique parking space.
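Read literally, the mapping from included angle to parking space type can be sketched as follows; the numeric tolerance is an assumption, since a measured angle is rarely exactly 0 or 90 degrees:

```python
def classify_slot(angle_deg, tol=1.0):
    # tol is an assumed tolerance; the claim states exact 0 and 90 degree cases.
    if abs(angle_deg - 90.0) <= tol:
        return "vertical"
    if angle_deg <= tol or angle_deg >= 180.0 - tol:
        return "parallel"
    if 30.0 < angle_deg < 60.0 or 120.0 < angle_deg < 150.0:
        return "oblique"
    return "unknown"
```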
10. The method of claim 8, wherein determining the type of the parking space according to an included angle between a long side direction of the parking space and the connection line of the central points comprises:
when the included angle is 90 degrees, determining that the parking space type is a vertical parking space;
when the included angle is 0 degree, determining that the parking space type is a parallel parking space;
and when the included angle is in the interval of (30 degrees, 60 degrees) or (120 degrees, 150 degrees), determining that the parking space type is an oblique parking space.
11. The method of claim 7, wherein the determining the number n of parking spaces according to the parking space type comprises:
when the parking space type is a vertical parking space, determining n as a maximum integer less than or equal to d/w;
when the parking space type is a parallel parking space, determining that n is the maximum integer less than or equal to d/h;
when the parking space type is an oblique parking space, determining that n is the maximum integer less than or equal to d/(w/sin alpha);
wherein d is the length of the central point connecting line, w is the preset parking space width, h is the preset parking space length, and alpha is the included angle.
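The counting rule of claim 11 is a floor of the center-line length over the per-slot spacing; a direct transcription under assumed names:

```python
import math

def slot_count(slot_type, d, w, h, alpha_deg):
    # d: length of the central point connecting line; w, h: preset parking
    # space width and length; alpha_deg: included angle for the oblique case.
    if slot_type == "vertical":
        return math.floor(d / w)
    if slot_type == "parallel":
        return math.floor(d / h)
    # Oblique: each slot occupies w / sin(alpha) along the center line.
    return math.floor(d / (w / math.sin(math.radians(alpha_deg))))
```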
12. The method of claim 8, wherein the determining the number n of parking spaces according to the parking space type comprises:
when the parking space type is a vertical parking space, determining n as a maximum integer less than or equal to d/w;
when the parking space type is a parallel parking space, determining that n is the maximum integer less than or equal to d/h;
when the parking space type is an oblique parking space, determining that n is the maximum integer less than or equal to d/(w/sin alpha);
wherein d is the length of the central point connecting line, w is the preset parking space width, h is the preset parking space length, and alpha is the included angle.
13. The method of claim 9, wherein the determining the number n of parking spaces according to the parking space type comprises:
when the parking space type is a vertical parking space, determining n as a maximum integer less than or equal to d/w;
when the parking space type is a parallel parking space, determining that n is the maximum integer less than or equal to d/h;
when the parking space type is an oblique parking space, determining that n is the maximum integer less than or equal to d/(w/sin alpha);
wherein d is the length of the central point connecting line, w is the preset parking space width, h is the preset parking space length, and alpha is the included angle.
14. The method of claim 10, wherein the determining the number n of parking spaces according to the parking space type comprises:
when the parking space type is a vertical parking space, determining n as a maximum integer less than or equal to d/w;
when the parking space type is a parallel parking space, determining that n is the maximum integer less than or equal to d/h;
when the parking space type is an oblique parking space, determining that n is the maximum integer less than or equal to d/(w/sin alpha);
wherein d is the length of the central point connecting line, w is the preset parking space width, h is the preset parking space length, and alpha is the included angle.
15. An apparatus for determining a free parking space, the apparatus comprising:
an acquisition unit configured to acquire a vehicle image including at least two vehicles;
a surrounding frame determining unit, configured to determine, according to the vehicle image, two-dimensional surrounding frames of two adjacent vehicles of the at least two vehicles, wherein the two-dimensional surrounding frames are circumscribed rectangular frames of the corresponding vehicles in the vehicle image;
a position determining unit, configured to determine position information of at least one characteristic position point on a bottom external frame of each of the two adjacent vehicles according to the two-dimensional surrounding frames of the two adjacent vehicles, wherein the bottom external frame is a circumscribed rectangular frame of the orthographic projection of the corresponding vehicle on the ground, and the characteristic position point is used for indicating a point at a preset position of the bottom external frame;
and a parking space determining unit, configured to determine the free parking space between the two adjacent vehicles according to the position information of the at least one characteristic position point on the bottom external frames of the two adjacent vehicles.
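Purely as an illustrative skeleton (the class and all names are assumptions, not components defined by the patent), the three units above can be composed as callables:

```python
class FreeSlotDetector:
    # Illustrative composition of the three claimed units; the callables
    # passed in are assumed stand-ins, not components named by the patent.
    def __init__(self, detect_frames, locate_feature_points, derive_slots):
        self.detect_frames = detect_frames                    # surrounding frame determining unit
        self.locate_feature_points = locate_feature_points    # position determining unit
        self.derive_slots = derive_slots                      # parking space determining unit

    def run(self, vehicle_image):
        frames = self.detect_frames(vehicle_image)                   # 2D surrounding frames
        points = self.locate_feature_points(vehicle_image, frames)   # bottom-frame feature points
        return self.derive_slots(points)                             # free parking spaces
```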
16. The apparatus of claim 15, wherein the at least one characteristic position point comprises three vertexes of the corresponding bottom external frame, the three vertexes are the three vertexes of the bottom external frame that are visible in the vehicle image, and the position information indicates coordinates in a vehicle body coordinate system;
the position determining unit is specifically configured to determine, according to the two-dimensional surrounding frames of the two adjacent vehicles, coordinates of three vertexes of the bottom external frame of the two adjacent vehicles in the vehicle body coordinate system;
the parking space determining unit is specifically configured to determine coordinates of an idle parking space between two adjacent vehicles in the vehicle body coordinate system according to coordinates of three vertexes of the bottom external frames of the two adjacent vehicles in the vehicle body coordinate system.
17. The apparatus of claim 16, wherein the position determining unit is specifically configured to:
for any vehicle i in the two adjacent vehicles, determining the relative positions of three key points on a two-dimensional bounding box of the vehicle i through a key point identification model; the three key points are three feature points corresponding to three vertexes of the bottom external frame of the vehicle i; the key point identification model is obtained by performing machine training in advance according to a sample image, and the sample image is an image with three key points marked on a two-dimensional surrounding frame in advance;
calculating two-dimensional coordinates of the three key points on the two-dimensional surrounding frame of the vehicle i in a coordinate system of the vehicle image according to the relative positions of the three key points on the two-dimensional surrounding frame of the vehicle i;
and calculating the coordinates of three vertexes of the bottom external frame of the vehicle i in the vehicle body coordinate system according to the two-dimensional coordinates of the three key points on the two-dimensional surrounding frame of the vehicle i in the coordinate system of the vehicle image.
18. The apparatus of claim 17, wherein the relative positions of the three key points on the two-dimensional bounding box of the vehicle i comprise: a first percentage between an offset amount of a first key point relative to a lower left vertex of the two-dimensional bounding box of the vehicle i and a left edge line length of the two-dimensional bounding box of the vehicle i, a second percentage between an offset amount of a second key point relative to the lower left vertex of the two-dimensional bounding box of the vehicle i and a lower edge line length of the two-dimensional bounding box of the vehicle i, and a third percentage between an offset amount of a third key point relative to the lower right vertex of the two-dimensional bounding box of the vehicle i and a right edge line length of the two-dimensional bounding box of the vehicle i, wherein the first key point is a feature point on the left edge line of the two-dimensional bounding box of the vehicle i, the second key point is a feature point on the lower edge line of the two-dimensional bounding box of the vehicle i, and the third key point is a feature point on the right edge line of the two-dimensional bounding box of the vehicle i;
the position determination unit, when calculating the two-dimensional coordinates of the three key points on the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image according to the relative positions of the three key points on the two-dimensional bounding box of the vehicle i, is specifically configured to,
calculating the two-dimensional coordinates of each of the first keypoint, the second keypoint, and the third keypoint in the coordinate system of the vehicle image according to the first percentage, the second percentage, the third percentage, and the coordinates of the four vertices of the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image.
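The percentage encoding of claim 18 can be inverted back to pixel coordinates as sketched below; the corner naming and the offset convention are assumptions:

```python
import numpy as np

def keypoints_from_percentages(corners, p1, p2, p3):
    # corners: pixel coordinates of the 2D bounding box as np arrays, keyed
    # 'tl', 'tr', 'bl', 'br' (an assumed layout).
    # p1: offset of key point 1 from the lower-left vertex along the left edge,
    #     as a fraction of the left edge length; p2 and p3 likewise for the
    #     lower and right edges.
    k1 = corners["bl"] + p1 * (corners["tl"] - corners["bl"])  # on the left edge line
    k2 = corners["bl"] + p2 * (corners["br"] - corners["bl"])  # on the lower edge line
    k3 = corners["br"] + p3 * (corners["tr"] - corners["br"])  # on the right edge line
    return k1, k2, k3
```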
19. The apparatus according to claim 17, wherein the position determining unit, when calculating the coordinates of the three vertexes of the bottom external frame of the vehicle i in the vehicle body coordinate system from the two-dimensional coordinates of the three key points on the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image, is specifically configured to:
acquiring an internal reference matrix and an external reference matrix of image acquisition equipment, wherein the image acquisition equipment is equipment for acquiring the vehicle image;
and calculating the coordinates of the three vertexes of the bottom external frame of the vehicle i in the vehicle body coordinate system according to the two-dimensional coordinates of the three key points on the two-dimensional surrounding frame of the vehicle i in the coordinate system of the vehicle image, the internal reference matrix and the external reference matrix.
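Claim 19 (and its counterpart claim 20 below) only states that the internal and external reference matrices are used; one standard way to carry out that computation (not necessarily the patented one) is a ground-plane homography, since the bottom external frame vertexes lie on the ground:

```python
import numpy as np

def pixel_to_body_ground(uv, K, R, t):
    # K: 3x3 internal reference matrix; R (3x3), t (3,): external reference
    # mapping body coordinates to camera coordinates; uv: pixel (u, v).
    # Returns the (x, y) body-frame coordinates of the point on the ground z = 0.
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))          # maps (x, y, 1) to the image
    xy1 = np.linalg.inv(H) @ np.array([uv[0], uv[1], 1.0])
    return xy1[:2] / xy1[2]
```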
20. The apparatus according to claim 18, wherein the position determining unit, when calculating the coordinates of the three vertexes of the bottom external frame of the vehicle i in the vehicle body coordinate system from the two-dimensional coordinates of the three key points on the two-dimensional bounding box of the vehicle i in the coordinate system of the vehicle image, is specifically configured to:
acquiring an internal reference matrix and an external reference matrix of image acquisition equipment, wherein the image acquisition equipment is equipment for acquiring the vehicle image;
and calculating the coordinates of the three vertexes of the bottom external frame of the vehicle i in the vehicle body coordinate system according to the two-dimensional coordinates of the three key points on the two-dimensional surrounding frame of the vehicle i in the coordinate system of the vehicle image, the internal reference matrix and the external reference matrix.
21. The apparatus according to any one of claims 16 to 20, wherein the parking space determining unit is specifically configured to:
calculating the coordinates of the fourth vertexes of the respective bottom external frames of the two adjacent vehicles in the vehicle body coordinate system according to the coordinates of the three vertexes of the respective bottom external frames of the two adjacent vehicles in the vehicle body coordinate system;
calculating coordinates of the central points of the bottom external frames of the two adjacent vehicles in the vehicle body coordinate system according to the coordinates of the four vertexes of the bottom external frames of the two adjacent vehicles in the vehicle body coordinate system;
determining the type of the parking space according to an included angle between the long edge direction of the parking space and the connecting line of the central points, wherein the long edge direction of the parking space is determined according to the long edge direction of the bottom external frame of at least one of the two adjacent vehicles, the connecting line of the central points is the connecting line between the central points of the bottom external frames of the two adjacent vehicles, and the type of the parking space comprises a vertical parking space, a parallel parking space or an oblique parking space;
determining the number n of parking spaces according to the parking space type, wherein the number n of the parking spaces is the number of the parking spaces between the two adjacent vehicles;
when n is an integer greater than or equal to 2, determining coordinates of n-1 equally-divided points on the central point connecting line in the vehicle body coordinate system, wherein the n-1 equally-divided points divide the central point connecting line into n equal parts;
and determining the coordinates of the free parking spaces corresponding to the n-1 equally divided points in the vehicle body coordinate system according to the long side direction of the parking spaces and the coordinates of the n-1 equally divided points in the vehicle body coordinate system.
22. The apparatus according to claim 21, wherein, when determining the coordinates of the free parking spaces corresponding to the n-1 equally divided points in the vehicle body coordinate system according to the parking space long side direction and the coordinates of the n-1 equally divided points in the vehicle body coordinate system, the parking space determining unit is specifically configured to:
for any equally divided point i of the n-1 equally divided points, moving by h/2 from the coordinates of the equally divided point i in the vehicle body coordinate system along the positive and negative directions corresponding to the parking space long side direction, to obtain the coordinates of the midpoints of the two short sides of the free parking space corresponding to the equally divided point i in the vehicle body coordinate system; and moving by w/2 from the coordinates of the midpoints of the two short sides of the free parking space corresponding to the equally divided point i in the vehicle body coordinate system along the positive and negative directions corresponding to the short side direction of the free parking space corresponding to the equally divided point i, to obtain the coordinates of the four vertexes of the free parking space corresponding to the equally divided point i in the vehicle body coordinate system; wherein h is a preset parking space length, w is a preset parking space width, and the short side direction of the free parking space corresponding to the equally divided point i is perpendicular to the parking space long side direction;
or,
for any equally divided point i of the n-1 equally divided points, moving by w/2 from the coordinates of the equally divided point i in the vehicle body coordinate system along the positive and negative directions corresponding to the short side direction of the free parking space corresponding to the equally divided point i, to obtain the coordinates of the midpoints of the two long sides of the free parking space corresponding to the equally divided point i in the vehicle body coordinate system; and moving by h/2 from the coordinates of the midpoints of the two long sides of the free parking space corresponding to the equally divided point i in the vehicle body coordinate system along the positive and negative directions corresponding to the parking space long side direction, to obtain the coordinates of the four vertexes of the free parking space corresponding to the equally divided point i in the vehicle body coordinate system; wherein h is a preset parking space length, w is a preset parking space width, and the short side direction of the free parking space corresponding to the equally divided point i is perpendicular to the parking space long side direction.
23. The apparatus according to claim 21, wherein the parking space determining unit, when determining the type of the parking space according to an included angle between a long side direction of the parking space and a connection line of the center points, is specifically configured to,
when the included angle is 90 degrees, determining that the parking space type is a vertical parking space;
when the included angle is 0 degree, determining that the parking space type is a parallel parking space;
and when the included angle is in the interval of (30 degrees, 60 degrees) or (120 degrees, 150 degrees), determining that the parking space type is an oblique parking space.
24. The apparatus according to claim 22, wherein the parking space determining unit, when determining the type of the parking space according to an included angle between a long side direction of the parking space and the connection line of the center points, is specifically configured to,
when the included angle is 90 degrees, determining that the parking space type is a vertical parking space;
when the included angle is 0 degree, determining that the parking space type is a parallel parking space;
and when the included angle is in the interval of (30 degrees, 60 degrees) or (120 degrees, 150 degrees), determining that the parking space type is an oblique parking space.
25. The apparatus according to claim 21, wherein the parking space determination unit, when determining the number of parking spaces n according to the parking space type, is specifically configured to,
when the parking space type is a vertical parking space, determining n as a maximum integer less than or equal to d/w;
when the parking space type is a parallel parking space, determining that n is the maximum integer less than or equal to d/h;
when the parking space type is an oblique parking space, determining that n is the maximum integer less than or equal to d/(w/sin alpha);
wherein d is the length of the central point connecting line, w is the preset parking space width, h is the preset parking space length, and alpha is the included angle.
26. The apparatus according to claim 22, wherein the parking space determination unit, when determining the number of parking spaces n according to the parking space type, is specifically configured to,
when the parking space type is a vertical parking space, determining n as a maximum integer less than or equal to d/w;
when the parking space type is a parallel parking space, determining that n is the maximum integer less than or equal to d/h;
when the parking space type is an oblique parking space, determining that n is the maximum integer less than or equal to d/(w/sin alpha);
wherein d is the length of the central point connecting line, w is the preset parking space width, h is the preset parking space length, and alpha is the included angle.
27. The apparatus according to claim 23, wherein the parking space determination unit, when determining the number of parking spaces n according to the parking space type, is specifically configured to,
when the parking space type is a vertical parking space, determining n as a maximum integer less than or equal to d/w;
when the parking space type is a parallel parking space, determining that n is the maximum integer less than or equal to d/h;
when the parking space type is an oblique parking space, determining that n is the maximum integer less than or equal to d/(w/sin alpha);
wherein d is the length of the central point connecting line, w is the preset parking space width, h is the preset parking space length, and alpha is the included angle.
28. The apparatus according to claim 24, wherein the parking space determination unit, when determining the number of parking spaces n according to the parking space type, is specifically configured to,
when the parking space type is a vertical parking space, determining n as a maximum integer less than or equal to d/w;
when the parking space type is a parallel parking space, determining that n is the maximum integer less than or equal to d/h;
when the parking space type is an oblique parking space, determining that n is the maximum integer less than or equal to d/(w/sin alpha);
wherein d is the length of the central point connecting line, w is the preset parking space width, h is the preset parking space length, and alpha is the included angle.
29. A parking space determination device, characterized in that the parking space determination device includes: a processor and a memory for storing programmable instructions;
the processor is configured to invoke the programmable instructions stored in the memory to perform the method for determining a free parking space according to any one of claims 1 to 14.
30. A computer-readable storage medium storing programmable instructions, wherein, when the programmable instructions are executed by a computer, the computer is caused to implement the method for determining a free parking space according to any one of claims 1 to 14.
CN201711331766.8A 2017-12-13 2017-12-13 Method, device and equipment for determining idle parking space Active CN109918977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711331766.8A CN109918977B (en) 2017-12-13 2017-12-13 Method, device and equipment for determining idle parking space

Publications (2)

Publication Number Publication Date
CN109918977A CN109918977A (en) 2019-06-21
CN109918977B true CN109918977B (en) 2021-01-05

Family

ID=66959049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711331766.8A Active CN109918977B (en) 2017-12-13 2017-12-13 Method, device and equipment for determining idle parking space

Country Status (1)

Country Link
CN (1) CN109918977B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11790773B2 (en) * 2019-02-25 2023-10-17 Quantifly Llc Vehicle parking data collection system and method

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276322B (en) * 2019-06-26 2022-01-07 湖北亿咖通科技有限公司 Image processing method and device combined with vehicle machine idle resources
CN112417926B (en) * 2019-08-22 2024-02-27 广州汽车集团股份有限公司 Parking space identification method and device, computer equipment and readable storage medium
CN110852151B (en) * 2019-09-26 2024-02-20 深圳市金溢科技股份有限公司 Method and device for detecting shielding of berths in roads
CN110706509B (en) * 2019-10-12 2021-06-18 东软睿驰汽车技术(沈阳)有限公司 Parking space and direction angle detection method, device, equipment and medium thereof
CN110796063B (en) * 2019-10-24 2022-09-09 百度在线网络技术(北京)有限公司 Method, device, equipment, storage medium and vehicle for detecting parking space
CN110969655B (en) * 2019-10-24 2023-08-18 百度在线网络技术(北京)有限公司 Method, device, equipment, storage medium and vehicle for detecting parking space
US11328607B2 (en) * 2020-03-20 2022-05-10 Aptiv Technologies Limited System and method of detecting vacant parking spots
CN112863234B (en) * 2021-01-05 2022-07-01 广州小鹏自动驾驶科技有限公司 Parking space display method and device, electronic equipment and storage medium
CN113920778A (en) * 2021-12-15 2022-01-11 深圳佑驾创新科技有限公司 Image acquisition method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056968A (en) * 2016-07-29 2016-10-26 北京华航无线电测量研究所 Parking space detection method based on optical image
CN106504580A * 2016-12-07 2017-03-15 深圳市捷顺科技实业股份有限公司 Parking space detection method and device
CN107364442A * 2017-06-23 2017-11-21 深圳市盛路物联通讯技术有限公司 Automatic parking method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9070202B2 (en) * 2013-03-14 2015-06-30 Nec Laboratories America, Inc. Moving object localization in 3D using a single camera
EP3468469A4 (en) * 2016-06-13 2020-02-19 Xevo Inc. Method and system for providing behavior of vehicle operator using virtuous cycle

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Automated Parking Space Detection Using Convolutional Neural Networks; Julien Nyambal et al.; 2017 Pattern Recognition Association of South Africa and Robotics and Mechatronics; 2017-12-01; pp. 1-6 *
Vacant parking space detection based on multi-feature image recognition; Yang Yingjie et al.; Journal of Liaoning University (Natural Science Edition); 2015-01-31; Vol. 42, No. 1; pp. 45-51 *
Parking space detection based on a mini convolutional neural network; An Xuxiao et al.; Journal of Computer Applications; 2017-11-27; pp. 1-5 *

Also Published As

Publication number Publication date
CN109918977A (en) 2019-06-21

Similar Documents

Publication Publication Date Title
CN109918977B (en) Method, device and equipment for determining idle parking space
CN112793564B (en) Autonomous parking auxiliary system based on panoramic aerial view and deep learning
CN113657224B (en) Method, device and equipment for determining object state in vehicle-road coordination
CN110568447A (en) Visual positioning method, device and computer readable medium
US10451403B2 (en) Structure-based camera pose estimation system
CN111383279A (en) External parameter calibration method and device and electronic equipment
CN112097732A (en) Binocular camera-based three-dimensional distance measurement method, system, equipment and readable storage medium
CN110705433A (en) Bridge deformation monitoring method, device and equipment based on visual perception
CN114219855A (en) Point cloud normal vector estimation method and device, computer equipment and storage medium
CN111243003A (en) Vehicle-mounted binocular camera and method and device for detecting road height limiting rod
CN114611635B (en) Object identification method and device, storage medium and electronic device
CN114119992A (en) Multi-mode three-dimensional target detection method and device based on image and point cloud fusion
CN116386373A (en) Vehicle positioning method and device, storage medium and electronic equipment
CN116958218A (en) Point cloud and image registration method and equipment based on calibration plate corner alignment
CN114140608B (en) Photovoltaic panel marking method and device, electronic equipment and storage medium
CN115982824A (en) Construction site worker space management method and device, electronic equipment and storage medium
CN112633143B (en) Image processing system, method, head-mounted device, processing device, and storage medium
CN114638947A (en) Data labeling method and device, electronic equipment and storage medium
CN117152240A (en) Object detection method, device, equipment and storage medium based on monocular camera
CN114708230A (en) Vehicle frame quality detection method, device, equipment and medium based on image analysis
CN113643358B (en) External parameter calibration method, device, storage medium and system of camera
JP4546155B2 (en) Image processing method, image processing apparatus, and image processing program
CN118397588B (en) Camera scene analysis method, system, equipment and medium for intelligent driving automobile
CN108416305A (en) Position and orientation estimation method, device and the terminal of continuous type lane segmentation object
CN115174662B (en) Supervision data display method and display system based on augmented reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant