CN110969655B - Method, device, equipment, storage medium and vehicle for detecting parking space - Google Patents


Info

Publication number
CN110969655B
Authority
CN
China
Prior art keywords
parking space
target parking
point
center point
corner
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911019857.7A
Other languages
Chinese (zh)
Other versions
CN110969655A
Inventor
潘杰
邓逸安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201911019857.7A priority Critical patent/CN110969655B/en
Publication of CN110969655A publication Critical patent/CN110969655A/en
Application granted granted Critical
Publication of CN110969655B publication Critical patent/CN110969655B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure provide a method, an apparatus, a device, a storage medium, and a vehicle for detecting a parking space, relating to the technical field of autonomous parking. The method includes obtaining an input image presenting one or more parking spaces, wherein the one or more parking spaces include a target parking space to be detected. The method further includes detecting, in the input image, a center point of the target parking space and offsets of the center point relative to the corner points of the target parking space, and then determining the positions of the corner points of the target parking space based on the position of the center point and the offsets. By detecting the center point of a parking space in an image together with the offsets of the center point relative to each corner point, embodiments of the disclosure use the structural information of the space to detect the corner points more accurately, with good robustness. In addition, some embodiments of the disclosure complete the detection through a neural network model, which improves detection speed and saves detection time.

Description

Method, device, equipment, storage medium and vehicle for detecting parking space
Technical Field
Embodiments of the present disclosure relate generally to the field of autopilot, and more particularly to the field of autonomous parking technology.
Background
Automatic driving, also known as unmanned driving, is a technology that realizes unmanned vehicles through a computer system. An autonomous vehicle relies on the cooperation of artificial intelligence, visual computing, radar, monitoring devices, satellite positioning systems, and the like, to enable a computer to operate the vehicle automatically and safely without human manipulation. Automatic driving can be divided into the following stages according to the level of automation: assisted driving, semi-automatic driving, highly automated driving, and fully automated driving.
Autonomous parking is an important function in automatic driving, and means that a vehicle is parked into a space automatically, without manual operation or control. In an autonomous parking scenario, an autonomous vehicle needs to complete a series of processes such as automatic cruising, searching for an empty space, and reversing into the space, so that the whole parking process is completed autonomously in a parking lot. The empty-space search and reversing processes need to acquire, by means of perception technology, the key visual information on which the decision and control module relies. The accuracy of the perception results directly influences the parking performance. If the perception deviates, the vehicle cannot be accurately parked in the middle of the parking space, and if other vehicles are parked on both sides of the space, a collision may even occur.
Disclosure of Invention
According to example embodiments of the present disclosure, a method, apparatus, device, storage medium, and vehicle for detecting a parking space are provided.
In a first aspect of the present disclosure, a method for detecting a parking space is provided. The method comprises: obtaining an input image presenting one or more parking spaces, wherein the one or more parking spaces comprise a target parking space to be detected; detecting, based on the input image, a center point of the target parking space and offsets of the center point relative to the corner points of the target parking space; and determining the positions of the corner points of the target parking space based on the position of the center point and the offsets.
In a second aspect of the present disclosure, an apparatus for detecting a parking space is provided. The apparatus comprises: an image acquisition module configured to acquire an input image presenting one or more parking spaces, wherein the one or more parking spaces comprise a target parking space to be detected; a center point detection module configured to detect, based on the input image, a center point of the target parking space and offsets of the center point relative to the corner points of the target parking space; and a corner point determination module configured to determine the positions of the corner points of the target parking space based on the position of the center point and the offsets.
In a third aspect of the present disclosure, an electronic device is provided that includes one or more processors and a storage device for storing one or more programs. The one or more programs, when executed by the one or more processors, cause the electronic device to implement methods or processes in accordance with embodiments of the present disclosure.
In a fourth aspect of the present disclosure, there is provided a computer readable medium having stored thereon a computer program which when executed by a processor implements a method or process according to an embodiment of the present disclosure.
In a fifth aspect of the present disclosure, a vehicle is provided that includes an electronic device according to an embodiment of the present disclosure.
It should be understood that what is described in this summary is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The above and other features, advantages and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, wherein like or similar reference numerals denote like or similar elements, in which:
FIGS. 1A-1B illustrate an example environment of an autonomous parking scenario of an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a method for detecting a parking spot according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a process for detecting corner points of a parking space in an image, according to an embodiment of the present disclosure;
FIG. 4 illustrates a schematic diagram of an architecture of an example autonomous parking system, according to an embodiment of the present disclosure;
FIG. 5 illustrates a flow chart of a method for detecting an empty parking spot according to an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of an architecture of a neural network model, according to an embodiment of the present disclosure;
FIG. 7 shows a block diagram of an apparatus for detecting a parking spot according to an embodiment of the present disclosure;
FIG. 8 illustrates a flow chart of another method for detecting a parking spot in accordance with an embodiment of the present disclosure;
FIG. 9 shows a schematic diagram of another process for detecting corner points of a parking space in an image, in accordance with an embodiment of the present disclosure;
fig. 10 shows a schematic diagram for correcting corner points using a car park line according to an embodiment of the disclosure;
FIG. 11 illustrates a schematic diagram of an architecture of another neural network model, according to an embodiment of the present disclosure;
FIG. 12 illustrates a block diagram of another apparatus for detecting a parking spot in accordance with an embodiment of the present disclosure; and
Fig. 13 illustrates a block diagram of an electronic device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided to give a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
In describing embodiments of the present disclosure, the term "comprising" and its like should be taken to be open-ended, i.e., including, but not limited to. The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". Other explicit and implicit definitions are also possible below. The term "parking space" means a parking space in which a vehicle can be parked, which is typically characterized by lines of various colors.
In order to realize autonomous parking, an unmanned vehicle first needs to accurately locate a parking space, which requires detecting the surrounding parking spaces. Conventional parking space detection methods directly detect the four corner points of each space by means of image recognition and the like, and then group the corner points with the assistance of a rectangular frame (for example, a rectangular frame containing a space, which may also cover part of an adjacent space) or a clustering algorithm (for example, assigning the four corner points to the same class). However, the detection results of these conventional methods are not accurate enough (for example, some corner points may be missed), and the corner grouping is unstable because of the instability of the clustering algorithm: the four corner points of the same space may not be assigned to the same group, so the overall detection of the space is incomplete. Some autonomous parking scenarios require that, after parking is completed, the difference between the widths of the remaining gaps on the two sides of the vehicle not exceed a certain distance, so the parking space perception information must be output accurately and reliably. Conventional detection methods therefore cannot achieve accurate and stable parking space detection and cannot meet the requirements of high-precision autonomous parking. In addition, conventional post-processing such as clustering is usually time-consuming, which greatly slows down parking space detection.
Embodiments of the present disclosure provide a new scheme for detecting parking spaces, which relates to visual perception of parking spaces in an autonomous parking scenario. By detecting the center point of a parking space in an image together with the offsets of the center point relative to the corner points, embodiments of the disclosure use the structural information of the space to detect the corner points more accurately and robustly, thereby providing better visual information for autonomous parking. Accurate detection of the parking space is the foundation of autonomous parking and ensures its safety. In addition, some embodiments of the present disclosure complete parking space detection through a neural network model, which improves detection speed and saves detection time. Some example embodiments of the present disclosure are described in detail below with reference to FIGS. 1-13.
A parking space corner point (corner point for short) is an important feature of a parking space and can represent its position. In the general case, the side lines of a parking space form two groups of parallel straight lines, so once the positions of the four corner points are obtained, essentially complete position information of the space is obtained, because the lines connecting the corner points represent the parking space lines. The corner points are therefore inherent attributes of a parking space and apply to most parking scenarios. The types of parking spaces generally include, but are not limited to, horizontal parking spaces, diagonal parking spaces, vertical parking spaces, and the like.
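As a small geometric illustration of the structural relationship described above (a sketch under the assumption that the four corner points are already known in image coordinates; the function name is hypothetical), the center point of a space can be computed as the intersection of its two diagonals:

```python
def diagonal_intersection(p1, p2, p3, p4):
    """Intersection of segments p1-p3 and p2-p4 (the two diagonals).

    Corners are assumed to be given in order (e.g. clockwise), as
    (x, y) tuples in image coordinates.
    """
    x1, y1 = p1
    x3, y3 = p3  # first diagonal: p1 -> p3
    x2, y2 = p2
    x4, y4 = p4  # second diagonal: p2 -> p4
    # Solve p1 + t*(p3 - p1) = p2 + s*(p4 - p2) for t.
    denom = (x3 - x1) * (y4 - y2) - (y3 - y1) * (x4 - x2)
    if denom == 0:
        raise ValueError("diagonals are parallel")
    t = ((x2 - x1) * (y4 - y2) - (y2 - y1) * (x4 - x2)) / denom
    return (x1 + t * (x3 - x1), y1 + t * (y3 - y1))
```

For a rectangular space this intersection coincides with the mean of the four corners, which is why the center point carries the structural information of the whole space.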
FIG. 1A illustrates an example environment 100 for finding an empty parking space in an autonomous parking scenario in accordance with an embodiment of the present disclosure. As shown in fig. 1A, in the environment 100, a vehicle 110 is traveling on a road 120 within a parking lot, which may be an outdoor parking lot or an indoor parking lot (e.g., an underground parking lot). In some embodiments, vehicle 110 may be a vehicle equipped with certain automatic driving capabilities (e.g., autonomous parking capability), where the automatic driving capabilities may include, but are not limited to, assisted driving, semi-automatic driving, highly automated driving, or fully automated driving capabilities. As the vehicle 110 travels, in order to find an empty space in which to park, the vehicle 110 may capture real-time images of the surrounding environment through an image capture device 115 fixed or mounted on it, and detect empty spaces in the external environment through image processing and similar technologies. In some embodiments, image capture device 115 may be one or more wide-angle or ultra-wide-angle cameras capable of capturing the scene within 360 degrees of the surroundings. Alternatively, the image capture device 115 may employ a rotatable structure so as to scan real-time scenes in multiple directions outside the vehicle.
With continued reference to fig. 1A, vehicle 110 is in the empty-space search phase of the autonomous parking scenario and cruises automatically to find an empty parking space. In the example environment 100 of fig. 1A, a wall 140 is on the right side of the vehicle 110 and a parking area 130 containing a plurality of parking spaces is on the left, each space bearing a printed number such as A001, A002, A003, A004. Vehicle 131 is currently parked in space A001 and vehicle 132 in space A003, while no vehicles are parked in spaces A002 and A004, i.e., they are empty spaces.
In the space search phase, the vehicle 110 may detect all spaces in the captured image, or only the empty ones. When the vehicle 110 cruises past space A002, the image acquired by the image capture device 115 allows detecting that space A002 is empty as well as its specific position. In some embodiments of the present disclosure, the position of a space can be represented by its four inner corner points; for example, the lines connecting corner points 121, 122, 123, 124 are the position of space A002. In some embodiments, vehicle 110 may detect only the space directly facing the camera, which in scenario 100 of fig. 1A is space A002. In other embodiments, vehicle 110 may also detect the positions of all spaces, or of all empty spaces, within a predetermined distance; for example, the positions of spaces A002 and A004 can be detected simultaneously.
After vehicle 110 detects that space A002 is empty and determines its specific position, vehicle 110 may switch from the empty-space search phase to the parking phase. FIG. 1B illustrates an example environment 150 of reversing into a space in an autonomous parking scenario in accordance with an embodiment of the present disclosure. The control program of the vehicle 110 may control the vehicle 110 to park automatically into space A002 according to the detected position of the space; arrow 155 shows the trajectory of the vehicle 110 parking into space A002. In addition, because a more accurate position is needed during the parking phase, embodiments of the present disclosure can further detect, or re-detect, the specific position of space A002 at this stage; that is, the positions of corner points 121, 122, 123, 124 may be detected again in the scene 150 of fig. 1B. Because embodiments of the present disclosure detect the position of a space more accurately and stably, the vehicle 110 can park into the space accurately and efficiently, providing a strong guarantee for automatic driving while ensuring the safety of autonomous parking.
It should be appreciated that vehicle 110 may include other sensors or detection devices for automatic driving in addition to image capture device 115; for example, vehicle 110 may also include lidar, a satellite positioning system, inertial measurement devices, and the like. Lidar is a radar device that detects the position and/or speed of a target by emitting a laser beam. It operates by emitting a detection signal (laser beam) toward the target, comparing the received signal (target echo) reflected from the target with the emitted signal, and, after appropriate processing, obtaining relevant information about the target, such as one or more of target distance, azimuth, altitude, speed, attitude, and even shape. The satellite positioning system is not limited to the Global Positioning System (GPS); the European Galileo system, the Chinese BeiDou system, and others may also be used in conjunction with embodiments of the present disclosure. Moreover, it should be understood that the environments 100 and 150 illustrated in FIGS. 1A and 1B are merely example environments of embodiments of the present disclosure and are not intended to limit the scope of the present disclosure.
Fig. 2 illustrates a flow chart of a method 200 for detecting a parking spot according to an embodiment of the present disclosure. For clarity of illustration, the method 200 of embodiments of the present disclosure is described below with reference to the environment 100 of fig. 1A. It should be appreciated that method 200 may be implemented at vehicle 110, in a remote server or cloud, or partially locally at vehicle 110 and partially in a remote server.
At block 202, an input image is obtained that presents one or more parking spaces, wherein the one or more parking spaces include a target parking space to be detected. For example, referring to the example environment 100 of fig. 1A, during the empty-space search phase of automatic parking, the vehicle 110 may capture images of the surrounding environment in real time through the image capture device 115, and then detect the position of one or more of the spaces in the captured images. In some embodiments, vehicle 110 may detect only the position of an empty space. In some embodiments, the target space to be detected may be determined based on a spatial relationship between the vehicle and the one or more spaces in the captured image. For example, vehicle 110 may detect only the position of the space directly facing the camera (e.g., space A002), making the detection more targeted. Alternatively, the vehicle 110 may detect spaces on both sides of the road simultaneously, or detect the positions of spaces or empty spaces within a certain distance, or detect the positions of all spaces and empty spaces in the captured image.
At block 204, a center point of the target parking space and offsets of the center point relative to the corner points of the target parking space are detected in the input image, wherein the center point may be the intersection of the diagonals of the parking space. For example, embodiments of the present disclosure may detect the center point and its offsets to the four corner points through a neural network model, which may be trained with training images in which the corner points of each parking space have been labeled. Of course, other machine learning models may also be used in conjunction with embodiments of the present disclosure.
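The training targets implied by this setup can be sketched as follows; this is an illustrative assumption about how a center/offset representation could be derived from labeled corner points (using the mean of the corners as a proxy for the center), not the patent's exact training scheme:

```python
def make_targets(corners):
    """Build center/offset targets from four labeled corner points.

    `corners` is a list of four (x, y) tuples in image coordinates.
    Returns the space center (mean of the corners, used here as a
    stand-in for the diagonal intersection) and the per-corner
    (dx, dy) offsets from that center.
    """
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    offsets = [(x - cx, y - cy) for x, y in corners]
    return (cx, cy), offsets
```

Because each corner's offset is tied to the same center, the four corners of a space are grouped by construction, with no clustering step needed.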
The method of the embodiment of the disclosure does not directly detect the corner points of each parking space; instead, it uses the structural information of the space: it first jointly detects the center point of the space and the offsets of the center point relative to each corner point (e.g., the horizontal-axis and vertical-axis offsets in the image coordinate system), and then determines the position of each corner from the offsets. The four corner points of a space are thus output together, no corner point can be missed, and the detection precision is higher. In certain embodiments of the present disclosure, an empty-space determination may also be made, i.e., only the positions of empty spaces are detected, and not those of occupied spaces.
At block 206, the positions of the corner points of the target parking space are determined based on the position of the center point and the offsets. For example, after the vehicle 110 determines the center point of space A002 and the offsets of the center point relative to the four corner points of space A002, the positions of the four corner points may be calculated from the center point position and the offsets. In this way, the coordinates of corner points 121, 122, 123, 124 of space A002 in the input image are obtained, and the positions of corner points 121, 122, 123, 124 can then be converted, through coordinate transformation, into world coordinates such as vehicle coordinates, so that the vehicle 110 learns the specific position of the detected space A002.
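The corner computation at block 206 reduces to a few lines; the function name and the (dx, dy) offset convention below are illustrative assumptions:

```python
def corners_from_center(center, offsets):
    """Recover the four corner points of a parking space from its
    detected center point and the per-corner (dx, dy) offsets.

    `center` is (cx, cy) and `offsets` is a list of four (dx, dy)
    pairs, one per corner, all in image (pixel) coordinates.
    """
    cx, cy = center
    return [(cx + dx, cy + dy) for dx, dy in offsets]
```

Since all four offsets are predicted for the same center point, the corners come out already grouped per space; a subsequent image-to-vehicle coordinate transform (not shown) would map them into world coordinates.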
Therefore, by detecting the center point of a parking space in an image together with the offsets of the center point relative to each corner point, embodiments of the present disclosure use the structural information of the space to detect the corner points more accurately, with better robustness. In addition, some embodiments of the present disclosure complete parking space detection through a neural network model, which improves detection speed and saves detection time. Because embodiments of the present disclosure detect the center point and the four corner points jointly, the method does not miss any corner point of a space.
Fig. 3 shows a schematic diagram of a process 300 for detecting the corner points of a parking space in an image according to an embodiment of the present disclosure; fig. 3 gives one specific example of detection using the method 200 of fig. 2. As shown in fig. 3, after the image 310 captured in real time by the vehicle is obtained, center point detection and corner offset detection, that is, detection of the offsets of the center point relative to the respective corner points, are performed on the image 310 at block 320. For example, embodiments of the present disclosure may input image 310 into a pre-trained neural network model, which determines the parking space center point 335 in the image and the offsets of center point 335 to the corner points, as shown in image 330. Next, based on the detected center point 335 and these offsets, parking space corner detection and calculation is performed at block 340, thereby determining the four corner points 351, 352, 353, and 354 of the target parking space, as shown in image 350.
As shown in fig. 3, unlike the conventional method that directly detects the four corner points of a space from image 310, the embodiment of the present disclosure determines the corner positions by detecting the center point and the corner offsets (e.g., image 330), and can detect the positions of the four corner points more accurately by using the structural information of the space. In addition, because the center point and the corner offsets are detected jointly, embodiments of the present disclosure cannot miss one or more corner points of a space, so robustness is high. Moreover, because the four corner points of each space are already grouped together through the center point, no additional rectangular frame assistance or clustering is needed to group them.
With continued reference to FIG. 3, optionally after the center point 335 is detected, an area around the center point may be selected as the central area 365 by Gaussian smoothing or the like, as shown in image 360. A determination is then made at block 370 as to whether the detected space is empty based on the image characteristics of the central area 365: if no vehicle appears in the central area, the space is typically empty. In some embodiments, the neural network model may output the probability that the target space is empty based on the central region 365, and emptiness is then decided by comparison with a prior probability threshold. Some embodiments of the present disclosure thus add an empty/non-empty classification task to the neural network model, so that the model determines whether the space is empty, i.e., parkable, while detecting its corner points, improving the efficiency of parking space detection.
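The empty-space decision described above can be sketched as follows, assuming the model outputs a per-pixel "empty" probability map; the Gaussian weighting and the threshold value here are illustrative assumptions, not the patent's exact scheme:

```python
import math

def is_empty(prob_map, center, radius=5, threshold=0.5):
    """Decide whether a detected space is empty by aggregating
    per-pixel "empty" probabilities over a Gaussian-weighted region
    around the center point.

    `prob_map` is a 2-D list of probabilities (a stand-in for a
    network output); `center` is the (cx, cy) pixel of the detected
    center point.
    """
    cx, cy = center
    sigma = radius / 2.0
    num = den = 0.0
    # Weight each pixel in the central area by its distance to the center.
    for y in range(max(0, cy - radius), min(len(prob_map), cy + radius + 1)):
        for x in range(max(0, cx - radius), min(len(prob_map[0]), cx + radius + 1)):
            w = math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
            num += w * prob_map[y][x]
            den += w
    return (num / den) > threshold
```

In practice the threshold would be a prior chosen on validation data; the point of the sketch is that occupancy is decided from the same central region that anchors the corner detection.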
The conventional method generally judges parking space occupancy with a color mean-square-error threshold: it computes the gray values of a background image and a measured image and compares their mean square error against a threshold. However, this method is easily affected by ambient light and the color depth of the space markings, has low robustness, and does not simultaneously compute the specific position of the space. In contrast, embodiments of the present disclosure determine occupancy from the central area of the space via a neural network model and can simultaneously output the center point (from which the corner points, and hence the specific position of the space, are determined), thereby improving both the precision and the efficiency of parking space detection.
Fig. 4 shows a schematic diagram of the architecture of an example autonomous parking system 400, which is a system capable of parking a vehicle autonomously, in accordance with an embodiment of the present disclosure. Unlike simple automatic parking, a vehicle equipped with autonomous parking can realize functions such as remote summoning, automatic queuing, automatically finding a space, and automatic parking, greatly improving the travel experience of passengers. In general, automatic driving may be graded into the following six levels: L0, no automation, where the driver drives the vehicle entirely, with no active safety configuration; L1, driver assistance, where the vehicle has certain functions that assist the driver in a specific lateral or longitudinal movement task (but not a complex combined task such as overtaking), while the driver retains most control of the vehicle; L2, advanced driver assistance, where the vehicle can assist the driver in movement tasks combining lateral and longitudinal control (the vehicle can autonomously perform certain complex tasks), but the driver must monitor the vehicle in real time; L3, automatic driving in specific scenarios, where, with the user's consent, the automatic driving system can fully take over driving and the user can correct errors at any time; L4, highly automated driving, where all operations while driving are performed by the automatic driving system, and within its operating scenario the vehicle behaves predictably without requiring user intervention; and L5, fully automated driving, where, whether in a specific scenario or not, the vehicle can reach its destination through automatic driving without user operation. The autonomous parking function generally requires L4-level automatic driving support and is an entry point for automatic driving technology to reach the L4 level.
As shown in fig. 4, the autonomous parking system 400 includes an image capture device 410, a machine learning model 420, an autonomous parking control system 430, and an execution module 440. The image capture device 410, which may be one or more cameras, may be configured to capture images of the surrounding environment in order to identify parking spaces or empty spaces and their positions. An example of the machine learning model 420 is a neural network model, such as a convolutional neural network model. The machine learning model 420 is trained on training data 425 and can determine the position of a parking space or empty space in an input image acquired by the image capture device 410. The empty space position is then transmitted to the autonomous parking control system 430, which may control the vehicle to park autonomously according to that position; the specific vehicle operations, such as controlling steering, throttle, and braking, are carried out by the execution module 440.
The machine learning model 420 is a model built with machine learning techniques, which enable a machine to learn regularities from large amounts of data, much as a human does, thereby producing a model that can accomplish specific tasks. Artificial neural networks are a typical machine learning technique: they model the human brain and allow computers to learn from large amounts of data using various machine learning algorithms. Common artificial neural networks include convolutional neural networks (CNNs), recurrent neural networks (RNNs), and the like.
In some embodiments, the autonomous parking system 400 may be deployed on a vehicle to implement the vehicle's autonomous parking functions. Alternatively, one or more components of the autonomous parking system 400 may be deployed elsewhere; for example, the training data 425 may reside on a server, with the machine learning model 420 deployed on the vehicle after its training is completed.
Fig. 5 illustrates a schematic diagram of a method 500 of detecting an empty parking space according to an embodiment of the present disclosure, showing the switch from the empty-space-finding stage of autonomous parking to the warehousing stage. At block 502, during the empty-space-finding stage of autonomous parking, empty spaces are detected in the captured images, i.e., empty spaces around the vehicle are detected in real time. At block 504, it is determined whether an empty space has been detected. If it is determined at block 504 that no empty space has been detected, the process returns to block 502 to continue cruising and detecting. If, however, an empty space is detected at block 504, then at block 506 the center point position and corner offsets of the empty space are detected by the method of the embodiments of the present disclosure, and the corner positions of the empty space are then determined. Of course, the corner points may also be detected synchronously with the empty space itself. At block 508, the corner positions of the detected empty space are converted from image coordinates to world coordinates. For example, the image coordinates are converted into three-dimensional coordinates by a back-projection transformation, and the position information of the empty space is output to a downstream module. At block 510, the vehicle is controlled to enter the warehousing stage of autonomous parking, and the automated parking process begins. In some embodiments, the probability that a detected space is empty may be determined in order to decide whether the space is indeed empty. In addition, after entering the warehousing stage, the position of the empty space can be detected again and more accurately for the subsequent reverse-parking process.
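The image-to-world conversion at block 508 can be illustrated with a minimal sketch, under the assumption (not specified in the disclosure) that a 3 x 3 planar homography `H` mapping image pixels to ground-plane coordinates has already been calibrated for the camera:

```python
import numpy as np

def image_to_world(points, H):
    """Map image pixel coordinates to ground-plane world coordinates using a
    calibrated 3x3 homography H. This is one common realization of the
    back-projection transformation mentioned at block 508; the disclosure
    does not prescribe this particular method."""
    pts = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    w = (H @ pts.T).T
    return w[:, :2] / w[:, 2:3]   # normalize the homogeneous coordinate
```

With the identity homography the points pass through unchanged, which is a quick sanity check when wiring up a calibration.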
Fig. 6 shows a schematic diagram of the architecture of a neural network model 600 according to an embodiment of the present disclosure. The neural network model 600 may be, for example, a CNN model, which is a feed-forward neural network with a deep structure that includes convolution computations and is very widely used in computer vision, particularly image processing. From a computer's perspective, an image is a two-dimensional or three-dimensional matrix, and a CNN extracts features from such an array and recognizes the image by performing operations such as convolution and pooling. A CNN is typically composed of an input layer, convolution layers, activation functions, pooling layers, and fully connected layers. It should be appreciated that while a CNN is used as one example of a machine learning model in some embodiments of the present disclosure, other machine learning models may also be used with embodiments of the present disclosure to achieve parking space detection.
Referring to fig. 6, the neural network model 600 may include an input layer 610 (which may be a 672 x 320 image), a convolution layer 620, a pooling layer 630, a convolution layer 640, a pooling layer 650, a fully connected layer 660, and an output layer 670 (which may be an 84 x 40 x 9 feature map). It should be appreciated that the neural network model 600 may also include more convolution layers and pooling layers.
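As a rough illustration of the relationship between the input layer 610 and the output layer 670: reducing 672 x 320 to an 84 x 40 grid implies a total spatial downsampling factor of 8 (672/84 = 320/40 = 8). The factor of 8 is an inference from the stated sizes, not something the disclosure spells out; the shape arithmetic can be sketched as:

```python
def output_grid_shape(in_w, in_h, downsample=8, channels=9):
    # Each output cell summarizes a downsample x downsample patch of the
    # input, so a 672 x 320 input maps to an 84 x 40 grid of attribute
    # vectors with `channels` values per cell.
    assert in_w % downsample == 0 and in_h % downsample == 0
    return (in_w // downsample, in_h // downsample, channels)

print(output_grid_shape(672, 320))  # (84, 40, 9)
```

Each cell of that grid carries one attribute set, so detecting a space reduces to picking cells out of an 84 x 40 map rather than scanning every input pixel.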
The convolution layers (e.g., convolution layers 620 and 640) are made up of a number of convolution units whose parameters are optimized by a back-propagation algorithm; the input image is downsampled and features are extracted by the convolution operations. The purpose of a convolution operation is to extract different features of the input: a first convolution layer may only extract low-level features such as edges, lines, and corners, while networks with more layers can iteratively extract more complex features from these. The pooling layers (e.g., pooling layers 630 and 650) are another component of the CNN and downsample the preceding layer. Their effect is to reduce the size (length, width, and number of channels) of the preceding layer, thereby reducing the amount of computation, memory usage, and number of parameters, while also providing a degree of scale and spatial invariance and reducing the likelihood of overfitting. The fully connected layer acts as a classifier in the CNN: if the convolution, pooling, and activation layers map the raw data into a hidden-layer feature space, the fully connected layer maps the learned "distributed feature representation" into the sample label space.
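The downsampling a pooling layer performs can be sketched as a 2 x 2 max pooling, which is a common choice, though the disclosure does not fix the pooling type or window size:

```python
import numpy as np

def max_pool2x2(x):
    """2 x 2 max pooling with stride 2: halves each spatial dimension of a
    2-D feature map by keeping the maximum of each 2 x 2 block (one common
    pooling variant; illustrative only)."""
    h, w = x.shape
    x = x[:h - h % 2, :w - w % 2]                    # drop odd remainder rows/cols
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

Keeping the block maximum is what gives the layer its small translation invariance: shifting a strong activation by one pixel inside its 2 x 2 block leaves the pooled output unchanged.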
In some embodiments, the neural network model 600 may be trained on a large number of training images, each labeled manually or otherwise with the positions of the four corner points of each parking space, such as the positions of the inner corner points of the space. For each training image, the parking space center point and the offsets of the center point relative to the corner points can be determined from the corner positions, and the neural network model 600 is then trained using the training images together with these center points and corner offsets.
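How the center point and offsets might be derived from the four labeled corners can be sketched as follows; the exact convention (e.g. whether the center is the corner mean, and whether offsets run from center to corner or the reverse) is an assumption here, not fixed by the disclosure:

```python
def make_training_label(corners):
    """Derive the parking space center point and per-corner offsets from four
    labeled corner points [(x1, y1), ..., (x4, y4)].  Hypothetical
    convention: center = mean of the corners, offset = corner - center."""
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    offsets = [(x - cx, y - cy) for x, y in corners]
    return (cx, cy), offsets
```

Under this convention a corner is recovered at inference time by adding its offset back onto the predicted center, so the label derivation and the decoding step are exact inverses.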
After training of the neural network model 600 is completed, the input image may be convolved and downsampled by the neural network model 600 to obtain an output image, and a set of attributes is then determined for each pixel in the output image using the neural network model 600; one example of such an attribute set is the entry 671 in the feature map. As shown in fig. 6, the entry 671 includes the empty space probability P, the position of the center point (X, Y), and the offsets (X1, Y1, X2, Y2, X3, Y3, X4, Y4) of the center point relative to the four corner points. In some embodiments, if only one parking space position is to be detected in the image, the single pixel with the highest empty space probability in the output image can be taken directly as the center point of the target parking space. For example, the neural network model can output an output image 350 marked with empty space corner points based on the input image 310, from which the position of the empty space can be determined. Alternatively, multiple parking spaces in the input image may be detected simultaneously: one or more pixels whose empty space probability in the output image exceeds a probability threshold are determined, those pixels are taken as the center points of one or more target parking spaces, and the position of each space is then determined from its attribute set (e.g., the entry 671).
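Decoding an entry like 671 into a space position can be sketched as below. The packing of the last axis as (P, X, Y, X1, Y1, ..., X4, Y4) and the additive offset convention are assumptions; the disclosure enumerates the values but not their layout:

```python
import numpy as np

def decode_best_space(feature_map):
    """feature_map: H x W x 11 array whose last axis packs the values listed
    for entry 671: (P, X, Y, X1, Y1, X2, Y2, X3, Y3, X4, Y4).  Returns the
    probability, center point, and four corner points of the cell with the
    highest empty space probability (the single-space case in the text)."""
    p = feature_map[..., 0]
    i, j = np.unravel_index(np.argmax(p), p.shape)   # cell with max probability
    entry = feature_map[i, j]
    cx, cy = float(entry[1]), float(entry[2])
    corners = [(cx + float(entry[3 + 2 * k]), cy + float(entry[4 + 2 * k]))
               for k in range(4)]                    # corner = center + offset
    return float(entry[0]), (cx, cy), corners
```

The multi-space variant described in the text replaces the argmax with a threshold on `p` and decodes each surviving cell the same way.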
Fig. 7 shows a block diagram of an apparatus 700 for detecting a parking spot according to an embodiment of the present disclosure. As shown in fig. 7, the apparatus 700 includes an image obtaining module 710, a center point detecting module 720, and a corner point determining module 730. The image acquisition module 710 is configured to acquire an input image presenting one or more parking spaces, wherein the one or more parking spaces include a target parking space to be detected. The center point detection module 720 is configured to detect a center point of the target parking space and an offset of the center point with respect to a corner point of the target parking space in the input image. The corner determining module 730 is configured to determine a location of a corner of the target parking space based on the location of the center point and the offset.
In some embodiments, the center point detection module 720 may include: a central area determining module configured to determine a central area of the target parking space based on the center point; and an empty space probability determining module configured to determine, based on the central area, a probability that the target parking space is an empty space.
In some embodiments, the apparatus 700 may further comprise: a probability judging module configured to determine whether the probability that the target parking space is an empty space is greater than a preset threshold; a coordinate conversion module configured to convert the corner positions of the target parking space from image coordinates to world coordinates in response to determining that the probability is greater than the preset threshold; and a stage switching module configured to output the world coordinates of the corner points of the target parking space so that the vehicle switches from the empty-space-finding stage to the warehousing stage of autonomous parking.
In some embodiments, the image acquisition module 710 may include: an image capturing module configured to obtain the input image through an image capture device of the vehicle; and a target parking space determining module configured to determine, among the one or more parking spaces, the target parking space to be detected based on a spatial relationship between the vehicle and the one or more parking spaces.
In some embodiments, the central point detection module 720 may be included in a neural network model, and the apparatus 700 may further include: the training data acquisition module is configured to acquire training images marked with four corner points of each parking space; and a model training module configured to train the neural network model using the training image.
In some embodiments, the center point detection module 720 may include: an output image obtaining module configured to convolve and downsample the input image using the neural network model to obtain an output image; and an attribute set determining module configured to determine, using the neural network model, an attribute set for each pixel in the output image, the attribute set including an empty space probability, a center point position, and offsets of the center point relative to the four corner points.
In some embodiments, the center point detection module 720 may further include: a first determining module configured to determine the pixel with the highest empty space probability in the output image as the center point of the target parking space.
In some embodiments, the center point detection module 720 may further include: a second determining module configured to determine one or more pixels whose empty space probability in the output image is greater than a probability threshold; and a third determining module configured to determine the one or more pixels as the center points of one or more target parking spaces, respectively.
It should be appreciated that the image acquisition module 710, the center point detection module 720, and the corner point determination module 730 shown in fig. 7 may be included in one or more electronic devices, all or a portion of which may be further included in a vehicle. Moreover, it should be understood that the modules illustrated in fig. 7 may perform steps or actions in a method or process with reference to embodiments of the present disclosure.
According to some embodiments of the present disclosure, after a target parking space and its position are detected in the empty-space-finding stage of autonomous parking (the position may be represented by the four corner points of the space), the vehicle may be controlled to enter the warehousing stage of autonomous parking, i.e., to park into the target parking space. Because the warehousing stage places higher demands on the positional accuracy of the parking space, embodiments of the present disclosure also provide another method 800 for detecting the parking space more accurately.
Fig. 8 illustrates a flow chart of another method 800 for detecting a parking space according to an embodiment of the present disclosure. For clarity of illustration, the method 800 is described below with reference to the environment 150 of fig. 1B. It should be appreciated that the space detection method 800 may be implemented at the vehicle 110, in a remote server or cloud, or partially locally at the vehicle 110 and partially in a remote server.
At block 802, an input image presenting a target parking space to be detected is obtained. For example, referring to the environment 150 of fig. 1B, since the vehicle 110 has learned during the empty-space-finding stage that parking space A002 is empty, and has initially determined the position of empty space A002 (characterized by the corner points 121, 122, 123, 124), the vehicle 110 enters the warehousing stage of autonomous parking. The vehicle 110 may capture images of the empty space A002 in real time through the image capture device 115 and then re-detect the exact position of the empty space A002 in the captured images. In some embodiments, the vehicle 110 may detect the position of the empty space A002 multiple times during parking and continually make adjustments and corrections.
At block 804, the corner points and parking space lines of the target parking space are detected in the input image. For example, embodiments of the present disclosure may simultaneously detect a set of four corner points and two parking space lines of the target space in the image through a neural network model, which may be trained on a large number of training images already labeled with the corner points and space lines of each parking space. Of course, other machine learning models may also be used with embodiments of the present disclosure. In some embodiments, a parking space line may be generated by straight-line fitting of a plurality of points; for example, a plurality of points on the two long sides of the target parking space may be detected, and a straight line may then be fitted to each of the two long sides.
Since the image acquired by the camera may be distorted, the detected corner points may jitter, leading to inaccurate detection results, so the accuracy of corner detection alone may not be high enough. A parking space is generally rectangular or a parallelogram, and its space lines are generally straight; if enough points are detected on a space line, the straight line generated by line fitting or similar means is relatively stable and strongly tolerant of detection disturbances at individual points. Therefore, embodiments of the present disclosure detect the parking space corner points and space lines simultaneously, and use the relatively stable space lines to correct the corner positions.
At block 806, the positions of the detected corner points are corrected based on the detected parking space lines. For example, after the vehicle 110 detects the corner points and the space-line point sets of parking space A002, the corner points and point sets may be converted from image coordinates to world coordinates, and the point sets fitted to straight lines in the world coordinate system. The straight lines of the space lines in the world coordinate system can then be used to correct the corner positions, yielding more accurate corners. Next, the corrected positions of the corner points 121, 122, 123, 124 may be transmitted to the autonomous parking control system, so that the vehicle 110 performs the reverse-parking process according to the corner positions of parking space A002. In some embodiments, the correction may consist of adjusting the corner points onto the straight lines of the corresponding space lines, thereby improving the accuracy of the lateral position of the parking space.
Therefore, according to embodiments of the present disclosure, by detecting the corner points and space lines of the target parking space in the image simultaneously and constraining the corners with the relatively stable space lines, the precision of corner detection can be improved and corner jitter reduced, so that the vehicle can be parked closer to the middle of the space when entering it. In addition, according to some embodiments of the present disclosure, the neural network approach can also increase the processing speed of parking space detection.
If space lines alone were used to detect the parking space, the longitudinal accuracy might be insufficient, and line-fitting errors might also occur. In addition, the space line at the entrance of some parking spaces may be incomplete, possibly containing the space number, which can affect the accuracy of line detection. Compared with using space lines alone, the method 800 of the embodiments of the present disclosure considers the space lines and corner points together, using the lines to further refine the corners, and can therefore improve the precision of parking space detection.
Fig. 9 shows a schematic diagram of another process 900 for detecting the corner points of a parking space in an image according to an embodiment of the disclosure. As shown in fig. 9, in the warehousing stage of autonomous parking, after an image 910 captured in real time by the vehicle is obtained, the space lines and corner points of the target parking space are detected simultaneously from the image 910 at block 920. For example, embodiments of the present disclosure may input the image 910 into a pre-trained CNN model. As shown in image 930, embodiments of the present disclosure can simultaneously detect the four corner points 931, 932, 933, 934 and the two long space lines 935 and 936 of the target parking space. In some embodiments, the two long space lines 935 and 936 may be obtained directly from the neural network model. Alternatively, multiple point sets may be obtained from the neural network model, and the two long space lines 935 and 936 then generated by straight-line fitting of the point sets in the world coordinate system. Furthermore, although two long space lines of the target space are detected in the example of fig. 9, all four space lines of the space may also be detected. Referring to fig. 9, image 930 shows that the detected position of corner point 933 is not accurate enough, as it does not lie at the actual space-line corner.
With continued reference to fig. 9, after the space lines and corner points of the target space are detected, the four corner points 931, 932, 933, 934 are corrected in the world coordinate system using space lines 935 and 936 at block 940, resulting in image 950. As shown in image 950, the position of corner point 933 is corrected onto space line 935. In this way, more accurate corner points 931, 932, 933, 934 can be obtained. Once these corner points are obtained, the quadrilateral formed by connecting them may be regarded as the position of the inner edge of the detected target parking space. Because the accuracy of space detection directly influences the effect of autonomous parking, the space detection method according to the embodiments of the present disclosure can improve the parking result, so that the vehicle is parked as close to the middle of the space as possible, ensuring safety, improving the passenger experience, and making it easier for passengers to exit the vehicle.
Fig. 10 shows a schematic illustration 1000 of correcting corner points using space lines, according to an embodiment of the disclosure. In the first stage, indicated by arrow 1010, two point sets (e.g., point set 1011) and the four corner points (e.g., corner point 1012) of the target parking space are detected by the neural network model. In some embodiments, the point sets on the inner edges of the long space lines of the target space may be detected, where the target space generally includes two long space lines and two short space lines. Because the vehicle camera views the space obliquely, the space in the image appears larger near the camera and smaller far away, and the detection of the far corner points may be inaccurate, so correcting the corner positions is necessary.
Next, the detected space-line point sets and corner points are converted from image coordinates to world coordinates, and in the second stage, indicated by arrow 1020, a straight line can be fitted to each detected point set in the world coordinate system; point set 1011 is fitted to straight line 1021. Straight-line fitting means finding a straight line that passes as close as possible to all the points; it is the simplest case of fitting a function of one variable. Since a single line generally cannot pass exactly through every point, a criterion is needed to determine the line, and the least squares method is one such criterion: each point the line misses contributes an error (the distance from the point to the line), and the least squares principle chooses the line that minimizes the sum of the squared errors over all points. The least squares method is a mathematical optimization technique that finds the best functional match to the data by minimizing this sum of squared errors, and it is a common approach to straight-line fitting.
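One way to realize a least squares line fit that minimizes point-to-line distances, as described above, is a total-least-squares fit via the principal axis of the point set; this is one possible realization, not necessarily the disclosure's exact algorithm, and it handles near-vertical space lines that a plain y-on-x regression would not:

```python
import numpy as np

def fit_line(points):
    """Total-least-squares line fit: returns a point on the line (the
    centroid) and a unit direction vector.  Minimizes the sum of squared
    perpendicular distances from the points to the line."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    # The direction is the principal axis of the centered point cloud,
    # obtained here as the first right singular vector of the SVD.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]
```

The sign of the returned direction is arbitrary, so downstream code should treat the line as undirected.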
After fitting the straight lines of the space lines, in the third stage, indicated by arrow 1030, the corner points (e.g., corner point 1012) may be corrected in the world coordinate system using the straight lines of the space lines (e.g., straight line 1021). In some embodiments, each detected corner point may be projected onto the closer of the two generated straight lines in the world coordinate system, the projection of each corner onto that line is taken as a new corner point, and the position of the target parking space is determined from the new corner points. For example, corner point 1012 is projected onto the closer straight line 1021 to give projection point 1032, which can serve as a new corner point replacing the original corner point 1012; after all corners have been corrected, the accurate position of the space is re-determined from the corrected corners.
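The projection step at arrow 1030 reduces to elementary vector geometry; a sketch, representing a fitted line as a point on the line plus a unit direction vector (the representation is illustrative):

```python
import numpy as np

def project_corner(corner, line_point, line_dir):
    """Project a detected corner onto a fitted parking space line, given as a
    point on the line plus a unit direction vector.  The projection point is
    taken as the corrected (new) corner."""
    corner = np.asarray(corner, float)
    line_point = np.asarray(line_point, float)
    line_dir = np.asarray(line_dir, float)
    t = (corner - line_point) @ line_dir     # signed offset along the line
    return line_point + t * line_dir
```

Choosing the closer of the two fitted lines for each corner, as the text describes, amounts to projecting onto both and keeping the projection with the smaller corner-to-projection distance.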
Fig. 11 shows a schematic diagram of the architecture of another neural network model 1100 according to embodiments of the present disclosure, which may be a convolutional neural network model and may include one or more convolution and pooling processes. The neural network model 1100 shown in fig. 11 differs from the neural network model 600 of fig. 6 in that, in addition to the corner feature map, it also outputs a space-line feature map. Furthermore, while the neural network model 1100 of fig. 11 performs corner detection via the center point and corner offsets, the corner points of the space in the image may also be detected by other existing or future corner detection methods.
As shown in fig. 11, the neural network model 1100 includes an input layer 1110 (which may be a 672 x 320 image), a convolution layer 1120, a pooling layer 1130, a convolution layer 1140, a pooling layer 1150, a fully connected layer 1160, and an output layer (which may include an 84 x 40 x 9 feature map 1170 and an 84 x 40 x 6 feature map 1180).
In some embodiments, the neural network model 1100 may be trained on a large number of training images of two types: first-type training images labeled with the four corner points of each parking space, and second-type training images labeled with the two long space lines of each parking space, where the two types may be drawn from different or the same original images. The neural network model 1100 is then jointly trained using a plurality of training images of each type. For example, in a batch training process with a batch of ten training images, five images of the first type and five of the second type may be used. Training of the neural network model 1100 is accomplished through iterative training on a large amount of training data.
After training of the neural network model 1100 is completed, the input image may be convolved and downsampled by the neural network model 1100 to obtain an output image, and a set of attributes is then determined for each pixel in the output image using the neural network model 1100; one example of the attribute sets includes the feature map 1170 (i.e., a first attribute set of the pixel) and the feature map 1180 (i.e., a second attribute set of the pixel).
One example of the feature map 1170 is the entry 1171, which includes the empty space probability P, the position of the center point (X, Y), and the offsets (X1, Y1, X2, Y2, X3, Y3, X4, Y4) of the center point relative to the four corner points. In some embodiments, the pixel with the highest empty space probability in the output image may be determined as the center point of the target parking space. For example, the neural network model 1100 can output an output image 930 labeled with empty space corner points based on the input image 910.
One example of the feature map 1180 is the entry 1181, which includes the probability P_l that the pixel lies on the left parking space line together with a position (X1, Y1), and the probability P_r that the pixel lies on the right parking space line together with a position (X2, Y2). The left space line is then determined from the pixels of the output image whose probability of lying on the left line exceeds a first probability threshold, and the right space line from the pixels whose probability of lying on the right line exceeds a second probability threshold. In this way, the point sets on the left and right space lines can be determined, and the straight lines of the left and right space lines can then be obtained by line fitting of these point sets in the world coordinate system. Next, by correcting the corner points of the empty space with the two straight lines in the world coordinate system (for example, projecting the points onto the lines), the corner positions can be corrected, yielding more accurate space corners.
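Thresholding entries like 1181 into left and right point sets can be sketched as below, assuming the last axis packs (P_l, X1, Y1, P_r, X2, Y2) in that order (the ordering is not fixed by the disclosure):

```python
import numpy as np

def extract_line_points(feature_map, threshold=0.5):
    """feature_map: H x W x 6 array of (P_l, X1, Y1, P_r, X2, Y2) entries.
    Returns the point sets for the left and right parking space lines,
    keeping only pixels whose line probability exceeds the threshold."""
    flat = feature_map.reshape(-1, 6)
    left = flat[flat[:, 0] > threshold][:, 1:3]      # left-line points
    right = flat[flat[:, 3] > threshold][:, 4:6]     # right-line points
    return left, right
```

The two returned point sets are what the line-fitting step (e.g., the fit at arrow 1020 of fig. 10) would consume after conversion to world coordinates.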
Therefore, embodiments of the present disclosure correct the space corner points using the relatively stable space lines; this joint detection method can improve corner detection precision and reduce corner jitter. In addition, the neural-network-based method of some embodiments of the present disclosure offers faster processing.
Fig. 12 shows a block diagram of another apparatus 1200 for detecting a parking space according to an embodiment of the present disclosure. As shown in fig. 12, the apparatus 1200 includes an image obtaining module 1210, a corner and space line detection module 1220, and a corner correction module 1230. The image obtaining module 1210 is configured to obtain an input image presenting the target parking space to be detected. The corner and space line detection module 1220 is configured to detect the corner points and space lines of the target parking space based on the input image. The corner correction module 1230 is configured to correct the positions of the detected corner points based on the detected space lines.
In some embodiments, the corner and space line detection module 1220 may include: a point set detection module configured to detect point sets on the inner edges of the long space lines of the target parking space, where the target space includes two long space lines and two short space lines; and a straight line fitting module configured to generate two straight lines by fitting, in the world coordinate system, a line to the point set detected on each long space line.
In some embodiments, the corner correction module 1230 may include: a projection module configured to project each detected corner point onto the closer of the two generated straight lines in the world coordinate system; a projection point determining module configured to determine the projection of each corner onto the closer straight line as a new corner point in the world coordinate system; and a position determining module configured to determine the position of the target parking space based on the new corner points.
In some embodiments, the image acquisition module 1210 may include: an image capturing module configured to, in accordance with a determination in the empty space search stage of autonomous parking that the target parking space is an empty space, enter the warehousing stage of autonomous parking and capture the input image via an image acquisition device of the vehicle.
In some embodiments, the corner and parking space line detection module 1220 may be included in a neural network model, and the apparatus 1200 may further include: a training image acquisition module configured to obtain a first training image marked with the four corner points of each parking space and a second training image marked with the two long parking space lines of each parking space; and a joint training module configured to jointly train the neural network model using the first training image and the second training image.
In some embodiments, the corner and parking space line detection module 1220 may include: an output image obtaining module configured to convolve and downsample the input image using the neural network model to obtain an output image; an attribute set determination module configured to determine an attribute set for each pixel point in the output image using the neural network model; and a corner and parking space line determining module configured to determine the corner points and parking space lines of the target parking space based on the attribute set of each pixel point.
In some embodiments, the attribute set determination module may include: a first attribute set determining module configured to determine a first attribute set for each pixel point in the output image using the neural network model, wherein the first attribute set includes an empty space probability, a center point position, and offsets of the center point relative to the four corner points; and a second attribute set determining module configured to determine a second attribute set for each pixel point in the output image using the neural network model, wherein the second attribute set includes a probability that the pixel point is located on the first long parking space line and a probability that the pixel point is located on the second long parking space line.
In some embodiments, the corner and parking space line determining module may include: a pixel point determining module configured to determine the pixel point with the maximum empty space probability in the output image; and a corner point position determining module configured to determine the positions of the corner points of the target parking space based on the first attribute set of that pixel point.
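The decoding step described here (select the pixel with the maximum empty space probability, then read the corner positions from the attribute vector bound to that same pixel) might be sketched as follows; the attribute layout `(prob, cx, cy, offsets)` and all values are illustrative assumptions, not the patent's actual feature map format.

```python
def decode_parking_space(attribute_map):
    """attribute_map: dict mapping (row, col) -> (prob, cx, cy, [(dx, dy)] * 4)."""
    # The pixel with the maximum empty-space probability gives the center point.
    best_pixel = max(attribute_map, key=lambda p: attribute_map[p][0])
    prob, cx, cy, offsets = attribute_map[best_pixel]
    # Each corner is the center shifted by its predicted offset, so no
    # separate corner clustering step is needed.
    corners = [(cx + dx, cy + dy) for dx, dy in offsets]
    return prob, (cx, cy), corners

# Two hypothetical output pixels; the first is the confident center.
attrs = {
    (4, 7): (0.92, 10.0, 20.0, [(-2.0, -5.0), (2.0, -5.0), (2.0, 5.0), (-2.0, 5.0)]),
    (4, 8): (0.31, 11.0, 20.0, [(-2.0, -5.0), (2.0, -5.0), (2.0, 5.0), (-2.0, 5.0)]),
}
prob, center, corners = decode_parking_space(attrs)
```

Because the probability and the four offsets live in one vector, a space above the confidence threshold immediately yields all four corners, which is the "no corner clustering" advantage the description points out.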
In some embodiments, the corner and parking space line determining module may include: a first long parking space line determining module configured to determine the first long parking space line based on pixel points in the output image whose probability of being located on the first long parking space line is greater than a first probability threshold; and a second long parking space line determining module configured to determine the second long parking space line based on pixel points in the output image whose probability of being located on the second long parking space line is greater than a second probability threshold.
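The per-line thresholding described above could look like the following sketch; the probability map representation and the threshold value are assumptions for illustration only.

```python
def line_pixels(prob_map, threshold):
    """Return pixel coordinates whose line-membership probability exceeds threshold."""
    return [pixel for pixel, p in prob_map.items() if p > threshold]

# Hypothetical probabilities of lying on the first long parking space line.
first_line_probs = {(0, 0): 0.9, (0, 1): 0.85, (1, 5): 0.2, (2, 2): 0.7}
pixels = line_pixels(first_line_probs, 0.6)
```

The surviving pixel set would then feed the straight line fitting step described earlier for each of the two long parking space lines.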
In addition, in some embodiments of the present disclosure, the various parking space sensing functions described above (such as corner point detection, empty space judgment, and parking space line detection) may be fused into one lightweight neural network model, so that the relevant sensing functions for the parking space can be completed in real time, without additionally designing an obstacle detection model (which generally takes longer in conventional approaches) to assist the empty space judgment. Moreover, as can be seen from the model structure, when the empty space probability detected at a certain position in the image is greater than a threshold value, accurate position information of the four corner points of the corresponding parking space can be conveniently obtained (in the feature map, this information is bound in the same vector). This removes the redundant operation of first detecting corner points and then clustering them, and ensures that the information for each parking space is complete. Relative to a rectangular-frame representation, the corner coordinate representation can accurately describe the position and orientation of the parking space. Finally, with parking space line detection as an additional sensing module, its stability and accuracy greatly improve the positional accuracy of the vehicle during the warehousing process of autonomous parking, further ensuring that the vehicle does not deviate while parking.
It should be appreciated that the parking space detection method of embodiments of the present disclosure may be implemented locally at the vehicle, in a remote server or cloud, or partially at the vehicle and partially in the remote server.
Fig. 13 shows a schematic block diagram of an example device 1300 that may be used to implement embodiments of the present disclosure. It should be appreciated that the device 1300 may be used to implement the apparatuses 700 and 1200 for detecting a parking space described in this disclosure. As shown, the device 1300 includes a Central Processing Unit (CPU) 1301 that can perform various suitable actions and processes in accordance with computer program instructions stored in a Read Only Memory (ROM) 1302 or loaded from a storage unit 1308 into a Random Access Memory (RAM) 1303. The RAM 1303 can also store various programs and data required for the operation of the device 1300. The CPU 1301, ROM 1302, and RAM 1303 are connected to each other through a bus 1304. An input/output (I/O) interface 1305 is also connected to the bus 1304.
Various components in device 1300 are connected to I/O interface 1305, including: an input unit 1306 such as a keyboard, a mouse, or the like; an output unit 1307 such as various types of displays, speakers, and the like; storage unit 1308, such as a magnetic disk, optical disk, etc.; and a communication unit 1309 such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit 1309 allows the device 1300 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
Processing unit 1301 performs the various methods and processes described above, such as methods 200, 500, and 800. For example, in some embodiments, methods 200, 500, and 800 may be implemented as computer software programs tangibly embodied on a machine-readable medium, such as storage unit 1308. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1300 via the ROM 1302 and/or the communication unit 1309. When the computer program is loaded into RAM 1303 and executed by CPU 1301, one or more actions or steps of methods 200, 500, and 800 described above may be performed. Alternatively, in other embodiments, CPU 1301 may be configured to perform the various methods of embodiments of the present disclosure in any other suitable manner (e.g., by means of firmware).
It should be appreciated that a vehicle according to an embodiment of the present disclosure may include the device 1300 shown in Fig. 13.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a System on a Chip (SOC), a Complex Programmable Logic Device (CPLD), and so forth.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Moreover, although the acts or steps are depicted in a particular order, this should not be understood as requiring that such acts or steps be performed in the particular order shown or in sequential order, or that all illustrated acts or steps be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although embodiments of the disclosure have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (17)

1. A method for detecting a parking spot, comprising:
obtaining an input image representing one or more parking spaces, wherein the one or more parking spaces comprise target parking spaces to be detected;
detecting a center point of the target parking space and offset of the center point relative to a corner point of the target parking space in the input image; and
determining the position of the corner point of the target parking space based on the position of the center point and the offset;
the detecting the center point of the target parking space and the offset of the center point relative to the corner point of the target parking space comprises:
determining a central area of the target parking space based on the central point; and
and determining, based on the central area, the probability that the target parking space is an empty parking space, wherein the probability that the target parking space is an empty parking space is determined by a neural network model based on the central area.
2. The method of claim 1, further comprising:
determining whether the probability of the target parking space being an empty parking space is greater than a preset threshold value;
in accordance with a determination that the probability is greater than the preset threshold, converting the position of the corner point of the target parking space from image coordinates to world coordinates; and
outputting the world coordinates of the corner point of the target parking space, so that the vehicle is switched from an empty space search stage to a warehousing stage of autonomous parking.
3. The method of claim 1, wherein obtaining an input image presenting one or more spaces comprises:
capturing the input image by an image acquisition device of a vehicle; and
and determining a target parking space to be detected in the one or more parking spaces based on the spatial relationship between the vehicle and the one or more parking spaces.
4. The method of claim 1, the center point and the offset being determined by the neural network model based on the input image, the method further comprising:
obtaining training images marked with four corner points of each parking space; and
the neural network model is trained using the training images.
5. The method of claim 4, wherein detecting a center point of the target parking space and an offset of the center point relative to a corner point of the target parking space comprises:
convolving and downsampling the input image using the neural network model to obtain an output image; and
determining an attribute set of each pixel point in the output image using the neural network model, wherein the attribute set includes an empty space probability, the position of the center point, and the offsets of the center point relative to the four corner points.
6. The method of claim 5, wherein detecting a center point of the target parking space and an offset of the center point relative to a corner point of the target parking space further comprises:
determining the pixel point with the maximum empty parking space probability in the output image as the center point of the target parking space.
7. The method of claim 5, wherein detecting a center point of the target parking space and an offset of the center point relative to a corner point of the target parking space further comprises:
determining one or more pixel points in the output image whose empty parking space probability is greater than a probability threshold; and
determining the one or more pixel points as one or more center points of one or more target parking spaces, respectively.
8. An apparatus for detecting a parking spot, comprising:
an image acquisition module configured to acquire an input image presenting one or more parking spaces including a target parking space to be detected;
a center point detection module configured to detect, in the input image, a center point of the target parking space and an offset of the center point relative to a corner point of the target parking space; and
a corner point determining module configured to determine the position of the corner point of the target parking space based on the position of the center point and the offset;
wherein the center point detection module comprises:
a central area determining module configured to determine a central area of the target parking space based on the central point; and
and an empty space probability determination module configured to determine, based on the central area, the probability that the target parking space is an empty parking space, the probability that the target parking space is an empty parking space being determined by a neural network model based on the central area.
9. The apparatus of claim 8, further comprising:
the probability judging module is configured to determine whether the probability of the target parking space being an empty parking space is larger than a preset threshold value;
a coordinate conversion module configured to convert the position of the corner point of the target parking space from image coordinates to world coordinates in accordance with a determination that the probability is greater than the predetermined threshold; and
and the stage switching module is configured to output the world coordinates of the corner points of the target parking space so that the vehicle is switched from a vacant parking space searching stage to a warehousing stage of autonomous parking.
10. The apparatus of claim 8, wherein the image acquisition module comprises:
an image capturing module configured to capture the input image by an image capturing device of a vehicle; and
and a target parking space determining module configured to determine a target parking space to be detected in the one or more parking spaces based on the spatial relationship between the vehicle and the one or more parking spaces.
11. The apparatus of claim 8, the central point detection module included in the neural network model, the apparatus further comprising:
the training data acquisition module is configured to acquire training images marked with four corner points of each parking space; and
a model training module configured to train the neural network model using the training images.
12. The apparatus of claim 11, wherein the center point detection module comprises:
an output image obtaining module configured to convolve and downsample the input image using the neural network model to obtain an output image; and
an attribute set determining module configured to determine an attribute set of each pixel point in the output image using the neural network model, the attribute set including an empty space probability, the position of a center point, and the offsets of the center point relative to the four corner points.
13. The apparatus of claim 12, wherein the center point detection module further comprises:
a first determining module configured to determine the pixel point with the maximum empty parking space probability in the output image as the center point of the target parking space.
14. The apparatus of claim 12, wherein the center point detection module further comprises:
a second determining module configured to determine one or more pixel points in the output image whose empty parking space probability is greater than a probability threshold; and
a third determining module configured to determine the one or more pixel points as one or more center points of one or more target parking spaces, respectively.
15. An electronic device, the electronic device comprising:
one or more processors; and
storage means for storing one or more programs that when executed by the one or more processors cause the electronic device to implement the method of any of claims 1-7.
16. A computer readable storage medium having stored thereon a computer program which when executed by a processor implements the method according to any of claims 1-7.
17. A vehicle comprising the electronic device according to claim 15.
CN201911019857.7A 2019-10-24 2019-10-24 Method, device, equipment, storage medium and vehicle for detecting parking space Active CN110969655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911019857.7A CN110969655B (en) 2019-10-24 2019-10-24 Method, device, equipment, storage medium and vehicle for detecting parking space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911019857.7A CN110969655B (en) 2019-10-24 2019-10-24 Method, device, equipment, storage medium and vehicle for detecting parking space

Publications (2)

Publication Number Publication Date
CN110969655A CN110969655A (en) 2020-04-07
CN110969655B true CN110969655B (en) 2023-08-18

Family

ID=70029859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911019857.7A Active CN110969655B (en) 2019-10-24 2019-10-24 Method, device, equipment, storage medium and vehicle for detecting parking space

Country Status (1)

Country Link
CN (1) CN110969655B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111597993B (en) * 2020-05-15 2023-09-05 北京百度网讯科技有限公司 Data processing method and device
CN112016389A (en) * 2020-07-14 2020-12-01 深圳市裕展精密科技有限公司 Control apparatus and method for vehicle
CN112329601B (en) * 2020-11-02 2024-05-07 东软睿驰汽车技术(沈阳)有限公司 Parking space detection method and device based on multitasking network
CN112308867B (en) * 2020-11-10 2022-07-22 上海商汤智能科技有限公司 Tooth image processing method and device, electronic equipment and storage medium
CN112562391B (en) * 2020-11-30 2022-10-14 广州小鹏自动驾驶科技有限公司 Parking space updating method and device
CN112560689B (en) * 2020-12-17 2024-04-19 广州小鹏自动驾驶科技有限公司 Parking space detection method and device, electronic equipment and storage medium
CN112927552B (en) * 2021-01-20 2022-03-11 广州小鹏自动驾驶科技有限公司 Parking space detection method and device
CN115131762A (en) * 2021-03-18 2022-09-30 广州汽车集团股份有限公司 Vehicle parking method, system and computer readable storage medium
CN113096436B (en) * 2021-03-25 2022-12-23 建信金融科技有限责任公司 Indoor parking method and device
WO2022222036A1 (en) * 2021-04-20 2022-10-27 深圳市大疆创新科技有限公司 Method and apparatus for determining parking space
CN113408514A (en) * 2021-06-16 2021-09-17 超级视线科技有限公司 Method and device for detecting roadside parking lot berth based on deep learning
WO2022266854A1 (en) * 2021-06-22 2022-12-29 华为技术有限公司 Parking space detection method and device
CN113822179B (en) * 2021-09-06 2024-05-21 北京车和家信息技术有限公司 Method and device for detecting position of car stopper, electronic equipment and medium
CN115223132B (en) * 2021-11-10 2023-10-27 广州汽车集团股份有限公司 Empty space recognition method and system and computer readable storage medium
CN113901961B (en) * 2021-12-02 2022-03-25 禾多科技(北京)有限公司 Parking space detection method, device, equipment and storage medium
CN114708571A (en) * 2022-03-07 2022-07-05 深圳市德驰微视技术有限公司 Parking space marking method and device for automatic parking based on domain controller platform
CN114882733B (en) * 2022-03-15 2023-12-01 深圳市德驰微视技术有限公司 Parking space acquisition method based on domain controller, electronic equipment and storage medium
CN114821540B (en) * 2022-05-27 2023-03-24 禾多科技(北京)有限公司 Parking space detection method and device, electronic equipment and computer readable medium
CN115206130B (en) * 2022-07-12 2023-07-18 合众新能源汽车股份有限公司 Parking space detection method, system, terminal and storage medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310199A (en) * 2013-06-17 2013-09-18 武汉大学 Vehicle model identification method based on high-resolution remote sensing data
CN105975941A (en) * 2016-05-31 2016-09-28 电子科技大学 Multidirectional vehicle model detection recognition system based on deep learning
CN108090455A (en) * 2017-12-27 2018-05-29 北京纵目安驰智能科技有限公司 Parking stall line vertex localization method, system, terminal and medium based on cascade mechanism
CN108564814A (en) * 2018-06-06 2018-09-21 清华大学苏州汽车研究院(吴江) A kind of parking position detection method and device based on image
CN108910560A (en) * 2018-09-29 2018-11-30 浙江明度智控科技有限公司 A kind of industry loading vehicles positioning device and method
CN109685000A (en) * 2018-12-21 2019-04-26 广州小鹏汽车科技有限公司 A kind of method for detecting parking stalls and device of view-based access control model
CN208856543U (en) * 2018-09-29 2019-05-14 浙江明度智控科技有限公司 A kind of industry loading vehicles positioning device
CN109859260A (en) * 2017-11-30 2019-06-07 华为技术有限公司 Determine the method, apparatus and computer readable storage medium of parking stall position
CN109918977A (en) * 2017-12-13 2019-06-21 华为技术有限公司 Determine the method, device and equipment of free time parking stall
CN109927715A (en) * 2019-02-19 2019-06-25 惠州市德赛西威智能交通技术研究院有限公司 Vertical method of parking
CN109949365A (en) * 2019-03-01 2019-06-28 武汉光庭科技有限公司 Vehicle designated position parking method and system based on road surface characteristic point
CN110246183A (en) * 2019-06-24 2019-09-17 百度在线网络技术(北京)有限公司 Ground contact point detection method, device and storage medium
CN110276293A (en) * 2019-06-20 2019-09-24 百度在线网络技术(北京)有限公司 Method for detecting lane lines, device, electronic equipment and storage medium
CN110348297A (en) * 2019-05-31 2019-10-18 纵目科技(上海)股份有限公司 A kind of detection method, system, terminal and the storage medium of parking systems for identification

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE202004009022U1 (en) * 2004-06-07 2004-09-09 Müller, Wolfgang T. Elevator shaft for self-driving cabins
JP6083747B2 (en) * 2012-10-24 2017-02-22 国立研究開発法人産業技術総合研究所 Position and orientation detection system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Intelligent parking space detection method based on deep learning; Xu Lexian et al.; Chinese Journal of Lasers (中国激光); pp. 230-241 *

Also Published As

Publication number Publication date
CN110969655A (en) 2020-04-07

Similar Documents

Publication Publication Date Title
CN110969655B (en) Method, device, equipment, storage medium and vehicle for detecting parking space
CN110796063B (en) Method, device, equipment, storage medium and vehicle for detecting parking space
CN110588653B (en) Control system, control method and controller for autonomous vehicle
EP3732657B1 (en) Vehicle localization
CN110531753B (en) Control system, control method and controller for autonomous vehicle
US11682129B2 (en) Electronic device, system and method for determining a semantic grid of an environment of a vehicle
Furgale et al. Toward automated driving in cities using close-to-market sensors: An overview of the v-charge project
Heng et al. Autonomous visual mapping and exploration with a micro aerial vehicle
US20220245952A1 (en) Parking spot detection method and parking spot detection system
Shim et al. An autonomous driving system for unknown environments using a unified map
CN112212872B (en) End-to-end automatic driving method and system based on laser radar and navigation map
US11120280B2 (en) Geometry-aware instance segmentation in stereo image capture processes
CN111238494A (en) Carrier, carrier positioning system and carrier positioning method
CN113561963B (en) Parking method and device and vehicle
WO2020150904A1 (en) Neural network based obstacle detection for mobile platforms, and associated systems and methods
CN112378397B (en) Unmanned aerial vehicle target tracking method and device and unmanned aerial vehicle
CN112379681A (en) Unmanned aerial vehicle obstacle avoidance flight method and device and unmanned aerial vehicle
US20210272289A1 (en) Sky determination in environment detection for mobile platforms, and associated systems and methods
CN112380933B (en) Unmanned aerial vehicle target recognition method and device and unmanned aerial vehicle
Mutz et al. Following the leader using a tracking system based on pre-trained deep neural networks
CN116403186A (en) Automatic driving three-dimensional target detection method based on FPN Swin Transformer and Pointernet++
Song et al. Real-time localization measure and perception detection using multi-sensor fusion for Automated Guided Vehicles
CN117554989A (en) Visual fusion laser radar SLAM positioning navigation method and unmanned aerial vehicle system thereof
CN113673462A (en) Logistics AGV positioning method based on lane line
US20230252638A1 (en) Systems and methods for panoptic segmentation of images for autonomous driving

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant