WO2021184616A1 - Parking space detection method, device, equipment and storage medium - Google Patents

Parking space detection method, device, equipment and storage medium

Info

Publication number
WO2021184616A1
WO2021184616A1 (PCT/CN2020/102516, CN2020102516W)
Authority
WO
WIPO (PCT)
Prior art keywords
parking space
space detection
center point
probability
point
Prior art date
Application number
PCT/CN2020/102516
Other languages
English (en)
French (fr)
Inventor
吕晋
刘威
胡骏
Original Assignee
东软睿驰汽车技术(沈阳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 东软睿驰汽车技术(沈阳)有限公司
Priority to JP2022555182A (JP7400118B2)
Priority to DE112020006935.4T (DE112020006935T5)
Priority to US17/911,406 (US20230102253A1)
Publication of WO2021184616A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/586 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/06 Automatic manoeuvring for parking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00 Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40 Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403 Image sensing, e.g. optical camera
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Definitions

  • This application relates to the field of image processing technology, and in particular to a parking space detection method, device, equipment and storage medium.
  • Automatic parking is an important part of autonomous driving technology.
  • The realization of automatic parking relies on accurate parking space information.
  • Existing parking space detection technology usually splits parking space detection into multiple subtasks. For example, the intersections of lines in the image are detected first, and two adjacent intersections are treated as an intersection pair; next, the region formed by the intersection pair is used to obtain parking space information, including the parking space type and the parking space angle; finally, the parking space information is used to estimate the other two points of the parking space, and the complete information of the parking space is given.
  • In the existing technology, parking space detection is carried out in multiple stages, so detection is relatively slow, and multiple types of parking spaces cannot be detected at the same time.
  • In view of this, the present application provides a parking space detection method, device, equipment, and storage medium, which simplify parking space detection, increase detection speed, and can detect multiple types of parking spaces at the same time.
  • In a first aspect, this application provides a parking space detection method, including: obtaining a top view image of a scene; learning the top view image to obtain one center point probability map and four inner corner point probability maps; and obtaining a parking space detection result of the top view image according to the center point probability map and the four inner corner point probability maps.
  • The center point probability map includes the predicted probability of each pixel in the top view image being a center point; each inner corner point probability map includes the predicted probability of each pixel in the top view image being an inner corner point, and different inner corner point probability maps correspond to different types of inner corner points of a parking space.
  • Optionally, obtaining the parking space detection result of the top view image according to the center point probability map and the four inner corner point probability maps specifically includes: determining the number and position information of center points according to the center point probability map; determining, from the four inner corner point probability maps, the inner corner points related to each center point; and using the determined number and position information of the center points, the position information of the inner corner points related to the center points, and the association between center points and inner corner points to obtain the parking space detection result of the top view image.
  • Optionally, determining the number and position information of the center points according to the center point probability map specifically includes: for each pixel in the center point probability map, judging whether its predicted probability of being a center point exceeds a first preset probability threshold, and if so, determining the pixel as a center point and incrementing the center point count by one; and converting, according to the transformation between the image coordinate system and the scene coordinate system, the coordinates of the pixels determined as center points from the image coordinate system to the scene coordinate system to obtain the position information of the center points in the scene coordinate system.
  • Optionally, determining the center point according to the center point probability map specifically includes: determining the center point according to the center point probability map in combination with a maximum value suppression algorithm.
  • Optionally, obtaining the parking space detection result of the top view image specifically includes: obtaining the number and positions of all parking spaces in the scene; or obtaining the number and positions of all empty parking spaces in the scene; or obtaining the number and positions of the empty parking spaces in the scene that meet a preset requirement.
  • Optionally, obtaining the top view image of the scene specifically includes: obtaining preliminary images of the scene with a camera device installed on the vehicle; stitching the preliminary images of the scene into a panoramic image; and converting the panoramic image into the top view image.
  • Optionally, the above method further includes: sending the parking space detection result to an automatic parking assist module of the vehicle, so that the automatic parking assist module plans a parking path and performs parking control according to the parking space detection result.
  • In a second aspect, this application provides a parking space detection device, including:
  • a top view image acquisition module, configured to obtain a top view image of a scene;
  • a probability map acquisition module, configured to learn the top view image to obtain one center point probability map and four inner corner point probability maps;
  • the center point probability map includes the predicted probability of each pixel in the top view image being a center point;
  • each inner corner point probability map includes the predicted probability of each pixel in the top view image being an inner corner point, and different inner corner point probability maps correspond to different types of inner corner points of a parking space;
  • a parking space detection result acquisition module, configured to obtain a parking space detection result of the top view image according to the center point probability map and the four inner corner point probability maps.
  • Optionally, the parking space detection result acquisition module specifically includes:
  • a first determining unit, configured to determine the number and position information of center points according to the center point probability map;
  • a second determining unit, configured to determine, from the four inner corner point probability maps, the inner corner points related to each center point;
  • a detection result acquisition unit, configured to obtain the parking space detection result of the top view image by using the determined number and position information of the center points, the position information of the inner corner points related to the center points, and the association between center points and inner corner points.
  • Optionally, the first determining unit is specifically configured to: for each pixel in the center point probability map, judge whether its predicted probability of being a center point exceeds a first preset probability threshold, and if so, determine the pixel as a center point and increment the center point count by one; and convert, according to the transformation between the image coordinate system and the scene coordinate system, the coordinates of the pixels determined as center points from the image coordinate system to the scene coordinate system to obtain the position information of the center points in the scene coordinate system.
  • Optionally, the first determining unit is specifically configured to: determine the center point according to the center point probability map in combination with a maximum value suppression algorithm.
  • Optionally, the parking space detection result acquisition module is specifically configured to: obtain the number and positions of all parking spaces in the scene; or obtain the number and positions of all empty parking spaces in the scene; or obtain the number and positions of the empty parking spaces in the scene that meet a preset requirement.
  • Optionally, the top view image acquisition module specifically includes: a preliminary image acquisition unit, configured to obtain preliminary images of the scene with a camera device installed on the vehicle; an image stitching unit, configured to stitch the preliminary images of the scene into a panoramic image; and an image conversion unit, configured to convert the panoramic image into the top view image.
  • The parking space detection result can be applied in the field of automatic parking: by providing the detection result quickly and accurately, the automatic parking function of the vehicle can offer users safer and more reliable automatic parking services. Therefore, optionally, the aforementioned parking space detection device further includes:
  • a sending module, configured to send the parking space detection result to an automatic parking assist module of the vehicle, so that the automatic parking assist module plans a parking path and performs parking control according to the parking space detection result.
  • In a third aspect, the present application provides a device, including a processor and a memory:
  • the memory is configured to store a computer program;
  • the processor is configured to execute, according to the computer program, the parking space detection method provided in the first aspect.
  • In a fourth aspect, the present application provides a computer-readable storage medium for storing a computer program, where the computer program is used to execute the parking space detection method provided in the first aspect.
  • Compared with the prior art, the parking space detection method, device, equipment, and storage medium provided in this application have the following beneficial effects.
  • In the technical solution, the top view image of the scene is learned to obtain five probability maps.
  • The five probability maps specifically include one center point probability map and four inner corner point probability maps. The center point probability map reflects the predicted probability of each pixel in the top view image of the scene being the center point of a parking space, each inner corner point probability map reflects the predicted probability of each pixel in the top view image being a certain inner corner point of a parking space, and the inner corner points and the center point of each parking space have a geometric (positional) relationship. On this basis, the maps output by the model are used to obtain the parking space detection result.
  • By learning the top view image of the scene, this application realizes rapid, single-stage detection of parking spaces.
  • In addition, this application is not limited by the type of parking space, so multiple types of parking spaces can be detected at the same time, which improves the efficiency of parking space detection.
  • FIG. 1 is a flowchart of a parking space detection method provided by an embodiment of the application
  • FIG. 2 is a schematic diagram of using a stacked network model provided by an embodiment of the application
  • FIG. 3 is a flowchart of another parking space detection method provided by an embodiment of the application.
  • FIG. 4 is a schematic structural diagram of a parking space detection device provided by an embodiment of the application.
  • As described above, because current parking space detection schemes perform detection in multiple stages, they are complex and slow, and they cannot detect different types of parking spaces at the same time.
  • In view of these problems, the inventors, after research, provide a parking space detection method, device, equipment, and storage medium.
  • This application obtains one center point probability map and four inner corner point probability maps by learning the top view image of the scene, and the five maps are used to obtain the parking space detection result corresponding to the top view image.
  • This application simplifies the parking space detection process and thereby increases the speed of parking space detection.
  • In addition, the present application uses the center point probability map and the inner corner point probability maps to detect parking spaces in the scene, without being limited by the type of parking space, so multiple types of parking spaces can be detected at the same time and detection efficiency is improved.
  • FIG. 1 is a flowchart of a parking space detection method provided by an embodiment of the application. As shown in Figure 1, the method includes:
  • Step 101: Obtain a top view image of the scene.
  • In this embodiment, the scene where parking space detection needs to be performed may be a parking lot, a road with marked parking spaces, or the area near a store.
  • The top view image is used because the ground markings related to parking spaces can be identified more accurately from a top view, and the markings are less likely to be distorted or deformed in a top view image. Therefore, using the top view image is very beneficial for improving the accuracy and precision of parking space detection.
  • The top view image of the scene can be obtained in a variety of ways.
  • As one example, a drone takes aerial photographs above the scene to obtain the top view image.
  • As another example, a camera device installed on the vehicle is used to obtain preliminary images of the scene. It is understandable that a camera installed on the vehicle cannot capture a top-down view that is completely perpendicular to the ground.
  • In this example, the preliminary images of the scene can be stitched into a panoramic image. There are many ways to realize panoramic stitching, and the stitching method is not limited here.
  • Finally, the panoramic image is converted into the top view image.
  • Step 102: Learn the top view image to obtain one center point probability map and four inner corner point probability maps.
  • In one possible implementation, a stacked network model is used to learn the top view image to obtain the center point probability map and the four inner corner point probability maps.
  • In this implementation, the stacked network model is trained in advance, and the model is used to output the five maps that serve as an important basis for parking space detection, namely one center point probability map (also called the center point map) and four inner corner point probability maps (also called the inner corner point maps).
  • Each parking space can be embodied as a rectangular outline or a parallelogram outline. For ease of understanding, the following first explains the meaning of the inner corner point and the center point.
  • For each parking space, there are four inner corner points of different types, which for ease of distinction are called the first inner corner point, the second inner corner point, the third inner corner point, and the fourth inner corner point. As an example, the first inner corner point and the second inner corner point are the two inner corner points at the front of the parking space, and the third inner corner point and the fourth inner corner point are the two inner corner points at the rear of the parking space.
  • The line connecting the first inner corner point and the second inner corner point is parallel to the line connecting the third inner corner point and the fourth inner corner point, and the two lines are equal in length; the line connecting the first inner corner point and the third inner corner point is parallel to the line connecting the second inner corner point and the fourth inner corner point, and these two lines are also equal in length. The line connecting the first inner corner point and the second inner corner point is shorter than the line connecting the first inner corner point and the third inner corner point.
  • Each parking space also has a geometric center point. For example, the first inner corner point and the fourth inner corner point lie on opposite corners of the parking space, and the second inner corner point and the third inner corner point lie on opposite corners of the parking space. The intersection of the line between the first inner corner point and the fourth inner corner point and the line between the second inner corner point and the third inner corner point is the center point of the parking space.
  • The structure of the stacked network model is mainly designed in a bottom-up manner.
  • FIG. 2 is a schematic diagram of using a stacked network model according to an embodiment of the application.
  • As shown in FIG. 2, the input of the stacked network model is the top view image of the scene, and the output of the stacked network model is the five maps.
  • The processing of the top view image by the stacked network model is essentially a process of extracting image features.
  • The center point probability map output by the stacked network model includes the predicted probability of each pixel in the top view image being a center point.
  • That is, the pixel value of each pixel in the center point probability map is the predicted probability that the corresponding pixel in the original top view image is a center point.
  • For example, if the pixel value at coordinates (x1, y1) in the center point probability map is 0.85, the predicted probability that the pixel at (x1, y1) in the top view image is the center point of a parking space is 0.85. The larger the pixel value at a position in the center point probability map, the higher the probability that the corresponding pixel in the top view image lies at the center of a parking space, and vice versa.
  • Each inner corner point probability map output by the stacked network model includes the predicted probability of each pixel in the top view image being an inner corner point. That is, the pixel value of each pixel in an inner corner point probability map is the predicted probability that the corresponding pixel in the original top view image is that type of inner corner point.
  • In this embodiment, the first inner corner point, the second inner corner point, the third inner corner point, and the fourth inner corner point each correspond to a different one of the four inner corner point probability maps.
  • For example, if the pixel value at coordinates (x2, y2) in the inner corner point probability map corresponding to the first inner corner point is 0.01, the predicted probability that the pixel at (x2, y2) in the top view image is the first inner corner point of a parking space is 0.01. The larger the pixel value at a position in that map, the higher the probability that the corresponding pixel lies at the first inner corner point of a parking space, and vice versa.
  • Step 103: Obtain the parking space detection result of the top view image according to the center point probability map and the four inner corner point probability maps.
  • It can be understood that the center point probability map can be used to first determine the points with a higher predicted probability of being the center point of a parking space.
  • Similarly, each inner corner point probability map can be used to first determine the points with a higher predicted probability of being a certain type of inner corner point of a parking space.
  • Since the four inner corner points of a parking space have a geometric relationship with the center point of that parking space, the center points and the inner corner points of the parking spaces they belong to can be finally determined based on this geometric relationship and the high-probability points determined above, so that the parking space detection result of the top view image is obtained.
  • In a specific implementation, each parking space has only one center point, so once the number of center points is determined, the number of parking spaces is obtained accordingly. The position information of a center point can be expressed as position information in the top view image or as position information in the scene. After the position information of the center points is determined, the position information of the four inner corner points related to each center point can be determined according to the inherent positional relationship between the center point and the inner corner points of the same parking space.
  • The parking space detection result can include a variety of content, for example: the number and positions of all parking spaces in the scene; or the number and positions of all empty parking spaces in the scene; or the number and positions of the empty parking spaces in the scene that meet a preset requirement.
  • The position of a parking space can specifically be expressed as [L_center, L_inner1, L_inner2, L_inner3, L_inner4].
  • In this expression, L_center, L_inner1, L_inner2, L_inner3, and L_inner4 denote, in turn, the position of the center point of the parking space, the position of the first inner corner point, the position of the second inner corner point, the position of the third inner corner point, and the position of the fourth inner corner point.
  • In practical applications, the parking space detection result can be fed back according to a preset requirement.
  • As one example, if the preset requirement is to provide the number and positions of horizontal, vertical, and slanted empty parking spaces, the parking space detection result includes the number and positions of the empty parking spaces that meet this requirement.
  • As another example, if the preset requirement is to provide the position of the empty parking space closest to the entrance/exit of the parking lot, the parking space detection result includes the position of the empty parking space that meets this preset requirement.
  • The above is the parking space detection method provided by the embodiments of this application.
  • In this method, feature extraction is performed on the top view image to obtain one center point probability map and four inner corner point probability maps.
  • The five maps are then used to obtain the parking space detection result for the top view image.
  • This method can detect multiple types of parking spaces at the same time.
  • In addition, this method directly obtains multiple probability maps by learning the top view image, which is convenient and fast and improves the efficiency of parking space detection.
  • FIG. 3 is a flowchart of another parking space detection method provided by an embodiment of the application.
  • The method includes:
  • Step 301: Obtain a top view image of the scene.
  • Step 302: Learn the top view image to obtain one center point probability map and four inner corner point probability maps.
  • Step 303: Determine the number and position information of the center points according to the center point probability map.
  • In a specific implementation, for each pixel in the center point probability map, it is judged whether its predicted probability of being a center point exceeds a first preset probability threshold; if so, the pixel is determined as a center point and the center point count is incremented by one; if not, the remaining pixels continue to be traversed. The first preset probability threshold can be set according to actual needs, for example, to 0.7 or 0.75.
  • Alternatively, the center points can also be determined from the center point probability map in combination with a maximum value suppression algorithm.
  • The maximum value suppression algorithm is a relatively mature algorithm in this field, so it is not described in detail here.
  • Then, according to the transformation between the image coordinate system and the scene coordinate system, the coordinates of the pixels determined as center points are converted from the image coordinate system to the scene coordinate system to obtain the position information of the center points in the scene coordinate system.
  • As an example, if the transformation matrix from the image coordinate system to the scene coordinate system is E, the coordinates P_A of a center point in the image coordinate system are multiplied by the transformation matrix E to obtain the position information P_B of the center point in the actual scene.
  • Step 304: Determine, from the four inner corner point probability maps, the inner corner points related to each center point.
  • Step 305: Use the determined number and position information of the center points, the position information of the inner corner points related to the center points, and the association between center points and inner corner points to obtain the parking space detection result of the top view image.
  • Through the above steps 301 to 305, the parking space detection result of the top view image is obtained.
  • The parking space detection result can be applied in the field of automatic parking: by providing the detection result quickly and accurately, the automatic parking function of the vehicle can offer users safer and more reliable automatic parking services. Therefore, after the parking space detection result is obtained, the method provided in this embodiment may further include:
  • Step 306: Send the parking space detection result to the automatic parking assist module of the vehicle, so that the automatic parking assist module plans a parking path and performs parking control according to the parking space detection result.
  • When the parking space detection result contains an empty parking space, or an empty parking space that meets the preset requirement, it provides the automatic parking assist module with a valid target parking position. The automatic parking assist module can use this valid parking position to plan a smoother parking path, and perform parking control along that path to park the vehicle smoothly into the empty parking space or into the empty parking space that meets the preset requirement.
  • Parking control may specifically include controlling the gear position, the wheel speed, and so on of the vehicle.
  • The specific implementation of parking control is not limited here.
  • Based on the parking space detection method provided by the foregoing embodiments, the present application also provides a parking space detection device.
  • The device is described below in conjunction with the embodiments.
  • FIG. 4 is a schematic structural diagram of a parking space detection device provided by an embodiment of the application.
  • As shown in FIG. 4, the parking space detection device includes:
  • a top view image acquisition module 401, configured to obtain a top view image of the scene;
  • a probability map acquisition module 402, configured to learn the top view image to obtain one center point probability map and four inner corner point probability maps output by the stacked network model;
  • the center point probability map includes the predicted probability of each pixel in the top view image being a center point;
  • each inner corner point probability map includes the predicted probability of each pixel in the top view image being an inner corner point, and different inner corner point probability maps correspond to different types of inner corner points of a parking space;
  • a parking space detection result acquisition module 403, configured to obtain a parking space detection result of the top view image according to the center point probability map and the four inner corner point probability maps.
  • The five probability maps include one center point probability map and four inner corner point probability maps. The center point probability map reflects the predicted probability of each pixel in the top view image of the scene being the center point of a parking space, each inner corner point probability map reflects the predicted probability of each pixel being a certain inner corner point of a parking space, and the inner corner points and the center point of each parking space have a geometric (positional) relationship. On this basis, the five probability maps are used to obtain the parking space detection result. Compared with the prior art, the present application realizes rapid, single-stage detection of parking spaces by learning the top view image of the scene. In addition, the device's parking space detection is not limited by the type of parking space, so it can detect multiple types of parking spaces at the same time, which improves the efficiency of parking space detection.
  • Optionally, the parking space detection result acquisition module 403 specifically includes:
  • a first determining unit, configured to determine the number and position information of center points according to the center point probability map;
  • a second determining unit, configured to determine, from the four inner corner point probability maps, the inner corner points related to each center point;
  • a detection result acquisition unit, configured to obtain the parking space detection result of the top view image by using the determined number and position information of the center points, the position information of the inner corner points related to the center points, and the association between center points and inner corner points.
  • Optionally, the first determining unit is specifically configured to: for each pixel in the center point probability map, judge whether its predicted probability of being a center point exceeds a first preset probability threshold, and if so, determine the pixel as a center point and increment the center point count by one; and convert, according to the transformation between the image coordinate system and the scene coordinate system, the coordinates of the pixels determined as center points from the image coordinate system to the scene coordinate system to obtain the position information of the center points in the scene coordinate system.
  • Optionally, the first determining unit is specifically configured to: determine the center points according to the center point probability map in combination with a maximum value suppression algorithm.
  • Optionally, the parking space detection result acquisition module 403 is specifically configured to: obtain the number and positions of all parking spaces in the scene; or obtain the number and positions of all empty parking spaces in the scene; or obtain the number and positions of the empty parking spaces in the scene that meet a preset requirement.
  • Optionally, the top view image acquisition module 401 specifically includes:
  • a preliminary image acquisition unit, configured to obtain preliminary images of the scene with a camera device installed on the vehicle;
  • an image stitching unit, configured to stitch the preliminary images of the scene into a panoramic image;
  • an image conversion unit, configured to convert the panoramic image into the top view image.
  • The parking space detection result can be applied in the field of automatic parking: by providing the detection result quickly and accurately, the automatic parking function of the vehicle can offer users safer and more reliable automatic parking services. Therefore, optionally, the aforementioned parking space detection device further includes:
  • a sending module 404, configured to send the parking space detection result to an automatic parking assist module of the vehicle, so that the automatic parking assist module plans a parking path and performs parking control according to the parking space detection result.
  • Based on the parking space detection method and parking space detection device provided by the foregoing embodiments, the present application also provides a device for realizing parking space detection, including a processor and a memory:
  • the memory is configured to store a computer program;
  • the processor is configured to execute, according to the computer program stored in the memory, some or all of the steps of the parking space detection method provided by the method embodiments.
  • In addition, the present application further provides a computer-readable storage medium.
  • The computer-readable storage medium is used to store a computer program, and the computer program is used to execute some or all of the steps of the parking space detection method provided by the foregoing method embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

A parking space detection method, device, equipment, and storage medium. The method learns a top view image of a scene to obtain five probability maps, which specifically include one center point probability map and four inner corner point probability maps. The center point probability map reflects the predicted probability of each pixel in the top view image of the scene being the center point of a parking space, each inner corner point probability map reflects the predicted probability of each pixel in the top view image being a certain inner corner point of a parking space, and the inner corner points and the center point of each parking space have a geometric relationship. On this basis, the maps output by the model are used to obtain a parking space detection result. By learning the top view image of the scene, the method realizes rapid, single-stage detection of parking spaces. In addition, the method is not limited by the type of parking space, so multiple types of parking spaces can be detected at the same time, which improves parking space detection efficiency.

Description

Parking space detection method, device, equipment and storage medium
This application claims priority to Chinese Patent Application No. 202010200852.0, filed with the China National Intellectual Property Administration on March 20, 2020 and entitled "Parking space detection method, device, equipment and storage medium", which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
This application relates to the field of image processing technology, and in particular to a parking space detection method, device, equipment, and storage medium.
BACKGROUND
Automatic parking is an important part of autonomous driving technology. The realization of automatic parking relies on accurate parking space information. Existing parking space detection technology usually splits parking space detection into multiple subtasks. For example, the intersections of lines in the image are detected first, and two adjacent intersections are treated as an intersection pair; next, the region formed by the intersection pair is used to obtain parking space information, including the parking space type and the parking space angle; finally, the parking space information is used to estimate the other two points of the parking space, and the complete information of the parking space is given.
In the existing technology, parking space detection is carried out in multiple stages, so the detection speed is relatively slow, and multiple types of parking spaces cannot be detected at the same time.
SUMMARY
In view of the above problems, this application provides a parking space detection method, device, equipment, and storage medium, which simplify parking space detection, increase detection speed, and can detect multiple types of parking spaces at the same time.
The embodiments of this application disclose the following technical solutions.
In a first aspect, this application provides a parking space detection method, including:
obtaining a top view image of a scene;
learning the top view image to obtain one center point probability map and four inner corner point probability maps, where the center point probability map includes the predicted probability of each pixel in the top view image being a center point, each inner corner point probability map includes the predicted probability of each pixel in the top view image being an inner corner point, and different inner corner point probability maps correspond to different types of inner corner points of a parking space; and
obtaining a parking space detection result of the top view image according to the center point probability map and the four inner corner point probability maps.
Optionally, obtaining the parking space detection result of the top view image according to the center point probability map and the four inner corner point probability maps specifically includes:
determining the number and position information of center points according to the center point probability map;
determining, from the four inner corner point probability maps, the inner corner points related to each center point; and
using the determined number and position information of the center points, the position information of the inner corner points related to the center points, and the association between center points and inner corner points to obtain the parking space detection result of the top view image.
Optionally, determining the number and position information of the center points according to the center point probability map specifically includes:
for each pixel in the center point probability map, judging whether its predicted probability of being a center point exceeds a first preset probability threshold, and if so, determining the pixel as a center point and incrementing the center point count by one; and
converting, according to the transformation between the image coordinate system and the scene coordinate system, the coordinates of the pixels determined as center points from the image coordinate system to the scene coordinate system to obtain the position information of the center points in the scene coordinate system.
Optionally, determining the center point according to the center point probability map specifically includes:
determining the center point according to the center point probability map in combination with a maximum value suppression algorithm.
Optionally, obtaining the parking space detection result of the top view image specifically includes:
obtaining the number and positions of all parking spaces in the scene; or
obtaining the number and positions of all empty parking spaces in the scene; or
obtaining the number and positions of the empty parking spaces in the scene that meet a preset requirement.
Optionally, obtaining the top view image of the scene specifically includes:
obtaining preliminary images of the scene with a camera device installed on the vehicle;
stitching the preliminary images of the scene into a panoramic image; and
converting the panoramic image into the top view image.
Optionally, the above method further includes:
sending the parking space detection result to an automatic parking assist module of the vehicle, so that the automatic parking assist module plans a parking path and performs parking control according to the parking space detection result.
In a second aspect, this application provides a parking space detection device, including:
a top view image acquisition module, configured to obtain a top view image of a scene;
a probability map acquisition module, configured to learn the top view image to obtain one center point probability map and four inner corner point probability maps, where the center point probability map includes the predicted probability of each pixel in the top view image being a center point, each inner corner point probability map includes the predicted probability of each pixel in the top view image being an inner corner point, and different inner corner point probability maps correspond to different types of inner corner points of a parking space; and
a parking space detection result acquisition module, configured to obtain a parking space detection result of the top view image according to the center point probability map and the four inner corner point probability maps.
Optionally, the parking space detection result acquisition module specifically includes:
a first determining unit, configured to determine the number and position information of center points according to the center point probability map;
a second determining unit, configured to determine, from the four inner corner point probability maps, the inner corner points related to each center point; and
a detection result acquisition unit, configured to obtain the parking space detection result of the top view image by using the determined number and position information of the center points, the position information of the inner corner points related to the center points, and the association between center points and inner corner points.
Optionally, the first determining unit is specifically configured to:
for each pixel in the center point probability map, judge whether its predicted probability of being a center point exceeds a first preset probability threshold, and if so, determine the pixel as a center point and increment the center point count by one; and
convert, according to the transformation between the image coordinate system and the scene coordinate system, the coordinates of the pixels determined as center points from the image coordinate system to the scene coordinate system to obtain the position information of the center points in the scene coordinate system.
Optionally, the first determining unit is specifically configured to:
determine the center point according to the center point probability map in combination with a maximum value suppression algorithm.
Optionally, the parking space detection result acquisition module is specifically configured to:
obtain the number and positions of all parking spaces in the scene; or
obtain the number and positions of all empty parking spaces in the scene; or
obtain the number and positions of the empty parking spaces in the scene that meet a preset requirement.
Optionally, the top view image acquisition module specifically includes:
a preliminary image acquisition unit, configured to obtain preliminary images of the scene with a camera device installed on the vehicle;
an image stitching unit, configured to stitch the preliminary images of the scene into a panoramic image; and
an image conversion unit, configured to convert the panoramic image into the top view image.
The parking space detection result can be applied in the field of automatic parking: by providing the detection result quickly and accurately, the automatic parking function of the vehicle can offer users safer and more reliable automatic parking services. Therefore, optionally, the aforementioned parking space detection device further includes:
a sending module, configured to send the parking space detection result to an automatic parking assist module of the vehicle, so that the automatic parking assist module plans a parking path and performs parking control according to the parking space detection result.
In a third aspect, this application provides a device, including a processor and a memory, where:
the memory is configured to store a computer program; and
the processor is configured to execute, according to the computer program, the parking space detection method provided in the first aspect.
In a fourth aspect, this application provides a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, and the computer program is used to execute the parking space detection method provided in the first aspect.
Compared with the prior art, this application has the following beneficial effects.
In the technical solution of the parking space detection method, device, equipment, and storage medium provided in this application, the top view image of the scene is learned to obtain five probability maps, which specifically include one center point probability map and four inner corner point probability maps. The center point probability map reflects the predicted probability of each pixel in the top view image of the scene being the center point of a parking space, each inner corner point probability map reflects the predicted probability of each pixel in the top view image being a certain inner corner point of a parking space, and the inner corner points and the center point of each parking space have a geometric (positional) relationship. On this basis, the maps output by the model are used to obtain the parking space detection result. By learning the top view image of the scene, this application realizes rapid, single-stage detection of parking spaces. In addition, this application is not limited by the type of parking space, so multiple types of parking spaces can be detected at the same time, which improves the efficiency of parking space detection.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings described below show only some embodiments of this application, and a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of a parking space detection method provided by an embodiment of this application;
FIG. 2 is a schematic diagram of using a stacked network model provided by an embodiment of this application;
FIG. 3 is a flowchart of another parking space detection method provided by an embodiment of this application;
FIG. 4 is a schematic structural diagram of a parking space detection device provided by an embodiment of this application.
DETAILED DESCRIPTION OF EMBODIMENTS
As described above, because current parking space detection schemes perform detection in multiple stages, the detection methods are complex and slow, and different types of parking spaces cannot be detected at the same time. In view of these problems, the inventors, after research, provide a parking space detection method, device, equipment, and storage medium. This application obtains one center point probability map and four inner corner point probability maps by learning a top view image of the scene, and uses these five maps to obtain the parking space detection result corresponding to the top view image. This application simplifies the parking space detection process and thereby increases the detection speed. In addition, this application detects parking spaces in the scene using the center point probability map and the inner corner point probability maps, without being limited by the type of parking space, so multiple types of parking spaces can be detected at the same time and detection efficiency is improved.
To enable a person skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Method Embodiments
Referring to FIG. 1, FIG. 1 is a flowchart of a parking space detection method provided by an embodiment of this application. As shown in FIG. 1, the method includes:
Step 101: Obtain a top view image of the scene.
In this embodiment, the scene where parking space detection needs to be performed may be a parking lot, a road with marked parking spaces, or the area near a store. To perform parking space detection, a top view image of the scene first needs to be obtained. The top view image is used because the ground markings related to parking spaces can be identified more accurately from a top view, and the markings are less likely to be distorted or deformed in a top view image. Therefore, using the top view image is very beneficial for improving the accuracy and precision of parking space detection.
The top view image of the scene can be obtained in a variety of ways.
As one example, a drone takes aerial photographs above the scene to obtain the top view image.
As another example, a camera device installed on the vehicle is used to obtain preliminary images of the scene. It is understandable that a camera installed on the vehicle cannot capture a top-down view that is completely perpendicular to the ground. To obtain a top view image in this example, the preliminary images of the scene can be stitched into a panoramic image. There are many ways to realize panoramic stitching, and the stitching method is not limited here. Finally, the panoramic image is converted into the top view image.
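As an illustrative aid only, and not part of the application's disclosure, the following minimal sketch shows one possible way the panorama-to-top-view conversion could be implemented, assuming OpenCV and a 3x3 ground-plane homography H obtained from offline camera calibration; the function name and parameters are hypothetical.

```python
# Hypothetical sketch: warp a stitched panorama onto a ground-aligned top view.
# H is assumed to come from offline calibration; it is not specified by the application.
import cv2
import numpy as np

def panorama_to_top_view(panorama: np.ndarray,
                         H: np.ndarray,
                         out_size: tuple = (512, 512)) -> np.ndarray:
    """Project the panoramic image onto the ground plane to obtain a top view image."""
    return cv2.warpPerspective(panorama, H, out_size)

# Usage (assumed inputs): top_view = panorama_to_top_view(panorama, H_ground_plane)
```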
Step 102: Learn the top view image to obtain one center point probability map and four inner corner point probability maps.
In one possible implementation, a stacked network model is used to learn the top view image to obtain one center point probability map and four inner corner point probability maps. In this implementation, the stacked network model is trained before the method of this embodiment is executed, and the model is used to output the five maps that serve as an important basis for parking space detection, namely one center point probability map (also called the center point map) and four inner corner point probability maps (also called the inner corner point maps).
Each parking space can be represented as a rectangular outline or a parallelogram outline. For ease of understanding, the meanings of the inner corner points and the center point are explained first.
Each parking space has four inner corner points of different types, which for ease of distinction are called the first inner corner point, the second inner corner point, the third inner corner point, and the fourth inner corner point. As an example, the first inner corner point and the second inner corner point are the two inner corner points at the front of the parking space, and the third inner corner point and the fourth inner corner point are the two inner corner points at the rear of the parking space. The line connecting the first inner corner point and the second inner corner point is parallel to the line connecting the third inner corner point and the fourth inner corner point, and the two lines are equal in length; the line connecting the first inner corner point and the third inner corner point is parallel to the line connecting the second inner corner point and the fourth inner corner point, and these two lines are also equal in length. The line connecting the first inner corner point and the second inner corner point is shorter than the line connecting the first inner corner point and the third inner corner point.
Each parking space also has a geometric center point. For example, the first inner corner point and the fourth inner corner point lie on opposite corners of the parking space, and the second inner corner point and the third inner corner point lie on opposite corners of the parking space. The intersection of the line between the first inner corner point and the fourth inner corner point and the line between the second inner corner point and the third inner corner point is the center point of the parking space.
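The geometric relationship just described can be made concrete with a short sketch (an illustration only, assuming the parallelogram outline above): because the diagonals of a parallelogram bisect each other, the center point is simply the shared midpoint of the corner1-corner4 and corner2-corner3 diagonals.

```python
# Illustration of the center/inner-corner relationship described above (assumed ideal
# parallelogram outline); averaging the two diagonal midpoints tolerates small noise.
import numpy as np

def center_from_corners(c1, c2, c3, c4) -> np.ndarray:
    """Return the parking-space center as the intersection of its two diagonals."""
    c1, c2, c3, c4 = (np.asarray(p, dtype=float) for p in (c1, c2, c3, c4))
    mid_14 = (c1 + c4) / 2.0   # midpoint of the first-to-fourth corner diagonal
    mid_23 = (c2 + c3) / 2.0   # midpoint of the second-to-third corner diagonal
    return (mid_14 + mid_23) / 2.0

# Example: center_from_corners((0, 0), (2, 0), (0, 5), (2, 5)) -> array([1. , 2.5])
```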
The structure of the stacked network model is mainly designed in a bottom-up manner. Referring to FIG. 2, FIG. 2 is a schematic diagram of using a stacked network model provided by an embodiment of this application. As shown in FIG. 2, the input of the stacked network model is the top view image of the scene, and the output of the stacked network model is the five maps. The processing of the top view image by the stacked network model is essentially a process of extracting image features.
The center point probability map output by the stacked network model includes the predicted probability of each pixel in the top view image being a center point. That is, the pixel value of each pixel in the center point probability map is the predicted probability that the corresponding pixel in the original top view image is a center point. For example, if the pixel value at coordinates (x1, y1) in the center point probability map is 0.85, the predicted probability that the pixel at (x1, y1) in the top view image is the center point of a parking space is 0.85. The larger the pixel value at a position in the center point probability map, the higher the probability that the corresponding pixel in the top view image lies at the center of a parking space; conversely, the smaller the pixel value, the lower that probability.
Each inner corner point probability map output by the stacked network model includes the predicted probability of each pixel in the top view image being an inner corner point. That is, the pixel value of each pixel in an inner corner point probability map is the predicted probability that the corresponding pixel in the original top view image is that type of inner corner point. In this embodiment, the first inner corner point, the second inner corner point, the third inner corner point, and the fourth inner corner point each correspond to a different one of the four inner corner point probability maps. For example, if the pixel value at coordinates (x2, y2) in the inner corner point probability map corresponding to the first inner corner point is 0.01, the predicted probability that the pixel at (x2, y2) in the top view image is the first inner corner point of a parking space is 0.01. In the inner corner point probability map corresponding to the first inner corner point, the larger the pixel value at a position, the higher the probability that the corresponding pixel in the top view image lies at the first inner corner point of a parking space; conversely, the smaller the pixel value, the lower that probability.
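As a purely illustrative sketch, the snippet below shows a network head that emits one center point map and four inner corner point maps as a five-channel output. The application only names a "stacked network model" designed bottom-up; the tiny convolutional backbone used here is an assumption, not the architecture of the embodiment.

```python
# Assumed, minimal stand-in for a model that outputs the five probability maps.
import torch
import torch.nn as nn

class FiveMapHead(nn.Module):
    def __init__(self, in_channels: int = 3, features: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(             # placeholder feature extractor
            nn.Conv2d(in_channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(features, 5, kernel_size=1)  # 1 center map + 4 corner maps

    def forward(self, top_view: torch.Tensor) -> torch.Tensor:
        # Output shape (N, 5, H, W); each channel holds per-pixel probabilities in [0, 1].
        return torch.sigmoid(self.head(self.backbone(top_view)))

# maps = FiveMapHead()(torch.rand(1, 3, 256, 256))
# maps[:, 0] is the center point map, maps[:, 1:5] are the four inner corner point maps.
```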
The above implementation, in which a stacked network model is used to learn the top view image of the scene, is only an example. In practical applications, models of other structures may also be used to learn the top view image and obtain the five maps. The specific implementation form of this step is not limited here.
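The application states only that the model is trained in advance and does not describe how the training targets for the five maps are encoded. One common choice for keypoint probability maps, assumed here purely for illustration, is to place a small Gaussian around each annotated center or corner point:

```python
# Assumed target encoding (not taken from the application): Gaussian heatmaps around
# annotated points, one target map per output channel (center and each corner type).
import numpy as np

def gaussian_heatmap(shape, points, sigma: float = 2.0) -> np.ndarray:
    """Build a (H, W) target map from annotated (x, y) points."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    target = np.zeros(shape, dtype=np.float32)
    for x, y in points:
        g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
        target = np.maximum(target, g)   # keep the strongest response at each pixel
    return target

# e.g. center_target = gaussian_heatmap((256, 256), [(120, 80), (40, 200)])
```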
Step 103: Obtain the parking space detection result of the top view image according to the center point probability map and the four inner corner point probability maps.
It can be understood that the center point probability map can be used to first determine the points with a higher predicted probability of being the center point of a parking space. Similarly, each inner corner point probability map can be used to first determine the points with a higher predicted probability of being a certain type of inner corner point of a parking space. Since the four inner corner points of a parking space in fact have a geometric relationship with the center point of that parking space, the center points and the inner corner points of the parking spaces they belong to can be finally determined based on this geometric relationship and the high-probability candidate points determined above, so that the parking space detection result of the top view image is obtained.
In a specific implementation, each parking space has only one center point, so once the number of center points is determined, the number of parking spaces is obtained accordingly. The position information of a center point can be expressed as position information in the top view image or as position information in the scene. After the position information of the center points is determined, the position information of the four inner corner points related to each center point can be determined according to the inherent positional relationship between the center point and the inner corner points of the same parking space.
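A sketch of one possible association step is given below; the nearest-candidate rule and the diagonal-midpoint check are assumptions used for illustration, not the exact procedure of the embodiment.

```python
# Assumed association rule: for each detected center, take the nearest candidate of each
# corner type, then keep the space only if the diagonal midpoints agree with the center.
import numpy as np

def associate(centers, corner_candidates, max_offset: float = 60.0, tol: float = 8.0):
    """centers: list of (x, y); corner_candidates: four lists of (x, y), one per corner type."""
    spaces = []
    for c in (np.asarray(p, dtype=float) for p in centers):
        picked = []
        for candidates in corner_candidates:
            if not candidates:
                break
            cand = np.asarray(candidates, dtype=float)
            dist = np.linalg.norm(cand - c, axis=1)
            if dist.min() > max_offset:
                break
            picked.append(cand[dist.argmin()])
        if len(picked) == 4:
            mid_14 = (picked[0] + picked[3]) / 2.0   # diagonal: first to fourth corner
            mid_23 = (picked[1] + picked[2]) / 2.0   # diagonal: second to third corner
            if np.linalg.norm(mid_14 - c) < tol and np.linalg.norm(mid_23 - c) < tol:
                spaces.append({"center": c, "corners": picked})
    return spaces
```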
The parking space detection result can include a variety of content, for example: the number and positions of all parking spaces in the scene; or the number and positions of all empty parking spaces in the scene; or the number and positions of the empty parking spaces in the scene that meet a preset requirement.
The position of a parking space can specifically be expressed as follows:
[L_center, L_inner1, L_inner2, L_inner3, L_inner4]
In the above expression, L_center, L_inner1, L_inner2, L_inner3, and L_inner4 denote, in turn, the position of the center point of the parking space, the position of the first inner corner point, the position of the second inner corner point, the position of the third inner corner point, and the position of the fourth inner corner point.
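Purely as an illustration, such a record could be carried in code as a small data structure; the field names below are hypothetical and do not come from the application.

```python
# Hypothetical container for the [L_center, L_inner1, L_inner2, L_inner3, L_inner4] record.
from dataclasses import dataclass
from typing import Tuple

Point = Tuple[float, float]

@dataclass
class ParkingSpace:
    l_center: Point        # position of the center point
    l_inner1: Point        # position of the first inner corner point
    l_inner2: Point        # position of the second inner corner point
    l_inner3: Point        # position of the third inner corner point
    l_inner4: Point        # position of the fourth inner corner point
    empty: bool = True     # whether the space is currently unoccupied (assumed field)

# space = ParkingSpace((1.0, 2.5), (0.0, 0.0), (2.0, 0.0), (0.0, 5.0), (2.0, 5.0))
```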
In practical applications, the parking space detection result can be fed back according to a preset requirement.
As one example, if the preset requirement is to provide the number and positions of horizontal, vertical, and slanted empty parking spaces, the parking space detection result includes the number and positions of the empty parking spaces that meet this requirement.
As another example, if the preset requirement is to provide the position of the empty parking space closest to the entrance/exit of the parking lot, the parking space detection result includes the position of the empty parking space that meets this preset requirement.
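A small sketch of filtering the result under such a preset requirement is shown below; it assumes the ParkingSpace record sketched earlier and an entrance position known from a map of the scene.

```python
# Assumed filtering step for the "closest empty space to the entrance/exit" requirement.
import math

def closest_empty_space(spaces, entrance):
    """spaces: iterable of ParkingSpace-like objects with .l_center and .empty fields."""
    empty = [s for s in spaces if s.empty]
    if not empty:
        return None
    return min(empty, key=lambda s: math.dist(s.l_center, entrance))

# best = closest_empty_space(detected_spaces, entrance=(0.0, 0.0))
```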
The above is the parking space detection method provided by the embodiments of this application. In this method, feature extraction is performed on the top view image to obtain one center point probability map and four inner corner point probability maps, and the five maps are then used to obtain the parking space detection result for the top view image. This method can detect multiple types of parking spaces at the same time. In addition, this method directly obtains multiple probability maps by learning the top view image, which is convenient and fast and improves the efficiency of parking space detection.
An embodiment of this application further provides another parking space detection method, which is described below with reference to the accompanying drawings. Referring to FIG. 3, FIG. 3 is a flowchart of another parking space detection method provided by an embodiment of this application. The method includes:
Step 301: Obtain a top view image of the scene.
Step 302: Learn the top view image to obtain one center point probability map and four inner corner point probability maps.
Step 303: Determine the number and position information of the center points according to the center point probability map.
In a specific implementation, this step may include:
For each pixel in the center point probability map, judge whether its predicted probability of being a center point exceeds a first preset probability threshold; if so, determine the pixel as a center point and increment the center point count by one; if not, continue traversing the other pixels. The first preset probability threshold can be set according to actual needs, for example, to 0.7 or 0.75.
Alternatively, the center points can also be determined from the center point probability map in combination with a maximum value suppression algorithm. The maximum value suppression algorithm is a relatively mature algorithm in this field, so it is not described in detail here.
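The sketch below combines the probability threshold with local-maximum selection. Interpreting "maximum value suppression" as keeping only local peaks (non-maximum suppression) is an assumption; the window size and threshold are illustrative.

```python
# Assumed peak picking: a pixel is a center point if it exceeds the threshold and is the
# local maximum of its neighborhood in the center point probability map.
import numpy as np
from scipy.ndimage import maximum_filter

def pick_center_points(center_map: np.ndarray, threshold: float = 0.7, window: int = 5):
    """Return (count, [(x, y), ...]) of center points found in the probability map."""
    is_peak = center_map == maximum_filter(center_map, size=window)
    ys, xs = np.nonzero(is_peak & (center_map > threshold))
    points = list(zip(xs.tolist(), ys.tolist()))   # (x, y) in image coordinates
    return len(points), points
```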
Then, according to the transformation between the image coordinate system and the scene coordinate system, the coordinates of the pixels determined as center points are converted from the image coordinate system to the scene coordinate system to obtain the position information of the center points in the scene coordinate system. As an example, if the transformation matrix from the image coordinate system to the scene coordinate system is E, the coordinates P_A of a center point in the image coordinate system are multiplied by the transformation matrix E to obtain the position information P_B of the center point in the actual scene.
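The conversion by the matrix E can be sketched as follows, under the assumption that E is a 3x3 homogeneous transformation (for example a ground-plane homography); a purely affine or 2x2 formulation would work analogously.

```python
# Sketch of mapping an image-coordinate center point P_A to scene coordinates P_B via E.
import numpy as np

def image_to_scene(p_a, E: np.ndarray) -> np.ndarray:
    """Apply the image-to-scene transformation matrix E to point P_A = (u, v)."""
    u, v = p_a
    p_h = E @ np.array([u, v, 1.0])   # homogeneous multiplication by E (assumed 3x3)
    return p_h[:2] / p_h[2]           # normalize back to 2-D scene coordinates P_B

# Example: image_to_scene((10, 20), np.eye(3)) returns array([10., 20.])
```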
Step 304: Determine, from the four inner corner point probability maps, the inner corner points related to each center point.
Step 305: Use the determined number and position information of the center points, the position information of the inner corner points related to the center points, and the association between center points and inner corner points to obtain the parking space detection result of the top view image.
Through the above steps 301 to 305, the parking space detection result of the top view image is obtained. The parking space detection result can be applied in the field of automatic parking: by providing the detection result quickly and accurately, the automatic parking function of the vehicle can offer users safer and more reliable automatic parking services. Therefore, after the parking space detection result is obtained, the method provided in this embodiment may further include:
Step 306: Send the parking space detection result to the automatic parking assist module of the vehicle, so that the automatic parking assist module plans a parking path and performs parking control according to the parking space detection result.
When the parking space detection result contains an empty parking space, or an empty parking space that meets the preset requirement, it provides the automatic parking assist module with a valid target parking position. The automatic parking assist module can therefore use this valid parking position to plan a smoother parking path, and perform parking control along that path to park the vehicle smoothly into the empty parking space or into the empty parking space that meets the preset requirement.
Parking control may specifically include controlling the gear position, the wheel speed, and so on of the vehicle. The specific implementation of parking control is not limited here.
Based on the parking space detection method provided by the foregoing embodiments, this application correspondingly further provides a parking space detection device, which is described below with reference to the embodiments.
Apparatus Embodiments
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of a parking space detection device provided by an embodiment of this application.
As shown in FIG. 4, the parking space detection device includes:
a top view image acquisition module 401, configured to obtain a top view image of the scene;
a probability map acquisition module 402, configured to learn the top view image to obtain one center point probability map and four inner corner point probability maps output by the stacked network model, where the center point probability map includes the predicted probability of each pixel in the top view image being a center point, each inner corner point probability map includes the predicted probability of each pixel in the top view image being an inner corner point, and different inner corner point probability maps correspond to different types of inner corner points of a parking space; and
a parking space detection result acquisition module 403, configured to obtain a parking space detection result of the top view image according to the center point probability map and the four inner corner point probability maps.
The five probability maps include one center point probability map and four inner corner point probability maps. The center point probability map reflects the predicted probability of each pixel in the top view image of the scene being the center point of a parking space, each inner corner point probability map reflects the predicted probability of each pixel being a certain inner corner point of a parking space, and the inner corner points and the center point of each parking space have a geometric (positional) relationship. On this basis, the five probability maps are used to obtain the parking space detection result. Compared with the prior art, this application realizes rapid, single-stage detection of parking spaces by learning the top view image of the scene. In addition, the device's parking space detection is not limited by the type of parking space, so it can detect multiple types of parking spaces at the same time, which improves the efficiency of parking space detection.
Optionally, the parking space detection result acquisition module 403 specifically includes:
a first determining unit, configured to determine the number and position information of center points according to the center point probability map;
a second determining unit, configured to determine, from the four inner corner point probability maps, the inner corner points related to each center point; and
a detection result acquisition unit, configured to obtain the parking space detection result of the top view image by using the determined number and position information of the center points, the position information of the inner corner points related to the center points, and the association between center points and inner corner points.
Optionally, the first determining unit is specifically configured to:
for each pixel in the center point probability map, judge whether its predicted probability of being a center point exceeds a first preset probability threshold, and if so, determine the pixel as a center point and increment the center point count by one; and
convert, according to the transformation between the image coordinate system and the scene coordinate system, the coordinates of the pixels determined as center points from the image coordinate system to the scene coordinate system to obtain the position information of the center points in the scene coordinate system.
Optionally, the first determining unit is specifically configured to:
determine the center points according to the center point probability map in combination with a maximum value suppression algorithm.
Optionally, the parking space detection result acquisition module 403 is specifically configured to:
obtain the number and positions of all parking spaces in the scene; or
obtain the number and positions of all empty parking spaces in the scene; or
obtain the number and positions of the empty parking spaces in the scene that meet a preset requirement.
Optionally, the top view image acquisition module 401 specifically includes:
a preliminary image acquisition unit, configured to obtain preliminary images of the scene with a camera device installed on the vehicle;
an image stitching unit, configured to stitch the preliminary images of the scene into a panoramic image; and
an image conversion unit, configured to convert the panoramic image into the top view image.
The parking space detection result can be applied in the field of automatic parking: by providing the detection result quickly and accurately, the automatic parking function of the vehicle can offer users safer and more reliable automatic parking services. Therefore, optionally, the aforementioned parking space detection device further includes:
a sending module 404, configured to send the parking space detection result to an automatic parking assist module of the vehicle, so that the automatic parking assist module plans a parking path and performs parking control according to the parking space detection result.
Based on the parking space detection method and parking space detection device provided by the foregoing embodiments, this application correspondingly further provides a device for realizing parking space detection, including a processor and a memory, where:
the memory is configured to store a computer program; and
the processor is configured to execute, according to the computer program stored in the memory, some or all of the steps of the parking space detection method provided by the method embodiments.
In addition, based on the parking space detection method, parking space detection device, and device provided by the foregoing embodiments, this application correspondingly further provides a computer-readable storage medium. The computer-readable storage medium is used to store a computer program, and the computer program is used to execute some or all of the steps of the parking space detection method provided by the foregoing method embodiments.
It should be noted that the embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on what distinguishes it from the other embodiments. In particular, the device and system embodiments are described relatively briefly because they are substantially similar to the method embodiments, and for relevant parts reference may be made to the description of the method embodiments. The device and system embodiments described above are merely illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. A person of ordinary skill in the art can understand and implement the embodiments without creative effort.
The above is only one specific implementation of this application, but the protection scope of this application is not limited thereto. Any variation or replacement that can be readily conceived by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (10)

  1. A parking space detection method, characterized by comprising:
    obtaining a top view image of a scene;
    learning the top view image to obtain one center point probability map and four inner corner point probability maps, wherein the center point probability map comprises the predicted probability of each pixel in the top view image being a center point, each inner corner point probability map comprises the predicted probability of each pixel in the top view image being an inner corner point, and different inner corner point probability maps correspond to different types of inner corner points of a parking space; and
    obtaining a parking space detection result of the top view image according to the center point probability map and the four inner corner point probability maps.
  2. The parking space detection method according to claim 1, characterized in that obtaining the parking space detection result of the top view image according to the center point probability map and the four inner corner point probability maps specifically comprises:
    determining the number and position information of center points according to the center point probability map;
    determining, from the four inner corner point probability maps, the inner corner points related to each center point; and
    using the determined number and position information of the center points, the position information of the inner corner points related to the center points, and the association between center points and inner corner points to obtain the parking space detection result of the top view image.
  3. The parking space detection method according to claim 2, characterized in that determining the number and position information of the center points according to the center point probability map specifically comprises:
    for each pixel in the center point probability map, judging whether its predicted probability of being a center point exceeds a first preset probability threshold, and if so, determining the pixel as a center point and incrementing the center point count by one; and
    converting, according to the transformation between the image coordinate system and the scene coordinate system, the coordinates of the pixels determined as center points from the image coordinate system to the scene coordinate system to obtain the position information of the center points in the scene coordinate system.
  4. The parking space detection method according to claim 2, characterized in that determining the center point according to the center point probability map specifically comprises:
    determining the center point according to the center point probability map in combination with a maximum value suppression algorithm.
  5. The parking space detection method according to claim 1, characterized in that obtaining the parking space detection result of the top view image specifically comprises:
    obtaining the number and positions of all parking spaces in the scene; or
    obtaining the number and positions of all empty parking spaces in the scene; or
    obtaining the number and positions of the empty parking spaces in the scene that meet a preset requirement.
  6. The parking space detection method according to claim 1, characterized in that obtaining the top view image of the scene specifically comprises:
    obtaining preliminary images of the scene with a camera device installed on the vehicle;
    stitching the preliminary images of the scene into a panoramic image; and
    converting the panoramic image into the top view image.
  7. The parking space detection method according to any one of claims 1 to 6, characterized by further comprising:
    sending the parking space detection result to an automatic parking assist module of the vehicle, so that the automatic parking assist module plans a parking path and performs parking control according to the parking space detection result.
  8. A parking space detection device, characterized by comprising:
    a top view image acquisition module, configured to obtain a top view image of a scene;
    a probability map acquisition module, configured to learn the top view image to obtain one center point probability map and four inner corner point probability maps, wherein the center point probability map comprises the predicted probability of each pixel in the top view image being a center point, each inner corner point probability map comprises the predicted probability of each pixel in the top view image being an inner corner point, and different inner corner point probability maps correspond to different types of inner corner points of a parking space; and
    a parking space detection result acquisition module, configured to obtain a parking space detection result of the top view image according to the center point probability map and the four inner corner point probability maps.
  9. A device, characterized by comprising a processor and a memory, wherein:
    the memory is configured to store a computer program; and
    the processor is configured to execute, according to the computer program, the parking space detection method according to any one of claims 1 to 7.
  10. A computer-readable storage medium, characterized in that the computer-readable storage medium is used to store a computer program, and the computer program is used to execute the parking space detection method according to any one of claims 1 to 7.
PCT/CN2020/102516 2020-03-20 2020-07-17 Parking space detection method, device, equipment and storage medium WO2021184616A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2022555182A JP7400118B2 (ja) 2020-03-20 2020-07-17 駐車スペース検出方法、装置、デバイス及び記憶媒体
DE112020006935.4T DE112020006935T5 (de) 2020-03-20 2020-07-17 Verfahren und gerät zur parkplatzerkennung sowie vorrichtung und speichermedium
US17/911,406 US20230102253A1 (en) 2020-03-20 2020-07-17 Parking space detection method and apparatus, and device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010200852.0 2020-03-20
CN202010200852.0A CN111428616B (zh) 2020-03-20 2020-03-20 一种车位检测方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2021184616A1 true WO2021184616A1 (zh) 2021-09-23

Family

ID=71548343

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/102516 WO2021184616A1 (zh) 2020-03-20 2020-07-17 一种车位检测方法、装置、设备及存储介质

Country Status (5)

Country Link
US (1) US20230102253A1 (zh)
JP (1) JP7400118B2 (zh)
CN (1) CN111428616B (zh)
DE (1) DE112020006935T5 (zh)
WO (1) WO2021184616A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112598922B (zh) * 2020-12-07 2023-03-21 安徽江淮汽车集团股份有限公司 车位检测方法、装置、设备及存储介质
CN112836633A (zh) * 2021-02-02 2021-05-25 蔚来汽车科技(安徽)有限公司 停车位检测方法以及停车位检测系统

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740584A (zh) * 2019-04-02 2019-05-10 纽劢科技(上海)有限公司 基于深度学习的自动泊车停车位检测方法
CN110706509A (zh) * 2019-10-12 2020-01-17 东软睿驰汽车技术(沈阳)有限公司 车位及其方向角度检测方法、装置、设备及介质
CN110796063A (zh) * 2019-10-24 2020-02-14 百度在线网络技术(北京)有限公司 用于检测车位的方法、装置、设备、存储介质以及车辆

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5443886B2 (ja) * 2009-07-31 2014-03-19 クラリオン株式会社 駐車空間認識装置
DE102011013681A1 (de) * 2011-03-11 2012-09-13 Valeo Schalter Und Sensoren Gmbh Verfahren zum Detektieren einer Parklücke, Parkhilfesystem und Kraftfahrzeug mit einem Parkhilfesystem
US9129524B2 (en) * 2012-03-29 2015-09-08 Xerox Corporation Method of determining parking lot occupancy from digital camera images
KR102176773B1 (ko) * 2014-06-11 2020-11-09 현대모비스 주식회사 자동차의 주차시스템
US10268201B2 (en) * 2017-02-28 2019-04-23 Mitsubishi Electric Research Laboratories, Inc. Vehicle automated parking system and method
CN110400255B (zh) * 2018-04-25 2022-03-15 比亚迪股份有限公司 车辆全景影像的生成方法、系统和车辆
CN108564814B (zh) * 2018-06-06 2020-11-17 清华大学苏州汽车研究院(吴江) 一种基于图像的停车场车位检测方法及装置
CN109243289B (zh) * 2018-09-05 2021-02-05 武汉中海庭数据技术有限公司 高精度地图制作中地下车库停车位提取方法及系统
CN109508682A (zh) * 2018-11-20 2019-03-22 成都通甲优博科技有限责任公司 一种全景停车位的检测方法
CN109614914A (zh) * 2018-12-05 2019-04-12 北京纵目安驰智能科技有限公司 车位顶点定位方法、装置和存储介质
CN109685000A (zh) * 2018-12-21 2019-04-26 广州小鹏汽车科技有限公司 一种基于视觉的车位检测方法及装置
CN110348297B (zh) * 2019-05-31 2023-12-26 纵目科技(上海)股份有限公司 一种用于识别立体停车库的检测方法、系统、终端和存储介质
CN110766979A (zh) * 2019-11-13 2020-02-07 奥特酷智能科技(南京)有限公司 一种用于自动驾驶车辆的泊车车位检测方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740584A (zh) * 2019-04-02 2019-05-10 纽劢科技(上海)有限公司 基于深度学习的自动泊车停车位检测方法
CN110706509A (zh) * 2019-10-12 2020-01-17 东软睿驰汽车技术(沈阳)有限公司 车位及其方向角度检测方法、装置、设备及介质
CN110796063A (zh) * 2019-10-24 2020-02-14 百度在线网络技术(北京)有限公司 用于检测车位的方法、装置、设备、存储介质以及车辆

Also Published As

Publication number Publication date
JP7400118B2 (ja) 2023-12-18
JP2023517365A (ja) 2023-04-25
US20230102253A1 (en) 2023-03-30
CN111428616A (zh) 2020-07-17
DE112020006935T5 (de) 2023-01-26
CN111428616B (zh) 2023-05-23

Similar Documents

Publication Publication Date Title
CN110163930B (zh) 车道线生成方法、装置、设备、系统及可读存储介质
EP3627109B1 (en) Visual positioning method and apparatus, electronic device and system
WO2022110049A1 (zh) 一种导航方法、装置和系统
US20240092344A1 (en) Method and apparatus for detecting parking space and direction and angle thereof, device and medium
KR101854554B1 (ko) 건축물 높이 산출 방법, 장치 및 저장 매체
JP2020047276A (ja) センサーキャリブレーション方法と装置、コンピュータ機器、媒体及び車両
US20230215187A1 (en) Target detection method based on monocular image
WO2021184616A1 (zh) 一种车位检测方法、装置、设备及存储介质
CN111291650A (zh) 自动泊车辅助的方法及装置
WO2023221566A1 (zh) 一种基于多视角融合的3d目标检测方法及装置
WO2020181426A1 (zh) 一种车道线检测方法、设备、移动平台及存储介质
CN112700486B (zh) 对图像中路面车道线的深度进行估计的方法及装置
CN111860072A (zh) 泊车控制方法、装置、计算机设备及计算机可读存储介质
CN110109465A (zh) 一种自导引车以及基于自导引车的地图构建方法
CN113850136A (zh) 基于yolov5与BCNN的车辆朝向识别方法及系统
CN111161334A (zh) 一种基于深度学习的语义地图构建方法
WO2024093641A1 (zh) 多模态融合的高精地图要素识别方法、装置、设备及介质
CN114399737A (zh) 一种道路检测方法、装置、存储介质及电子设备
CN111964665B (zh) 基于车载环视图像的智能车定位方法、系统及存储介质
CN113378605B (zh) 多源信息融合方法及装置、电子设备和存储介质
CN115866229B (zh) 多视角图像的视角转换方法、装置、设备和介质
WO2023155580A1 (zh) 一种对象识别方法和装置
CN116105721A (zh) 地图构建的回环优化方法、装置、设备及存储介质
WO2023029123A1 (zh) 一种顶点坐标的检测方法、装置、设备及存储介质
CN114386481A (zh) 一种车辆感知信息融合方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20926255

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022555182

Country of ref document: JP

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 20926255

Country of ref document: EP

Kind code of ref document: A1