WO2021068588A1 - Parking space and direction angle detection method, apparatus, device, and medium - Google Patents

Parking space and direction angle detection method, apparatus, device, and medium

Info

Publication number
WO2021068588A1
WO2021068588A1 (PCT/CN2020/102517)
Authority
WO
WIPO (PCT)
Prior art keywords
parking space
image
parking
candidate
real
Prior art date
Application number
PCT/CN2020/102517
Other languages
English (en)
French (fr)
Inventor
庞飞
吕晋
周婷
Original Assignee
东软睿驰汽车技术(沈阳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 东软睿驰汽车技术(沈阳)有限公司 filed Critical 东软睿驰汽车技术(沈阳)有限公司
Priority to EP20874660.2A priority Critical patent/EP4044146A4/en
Priority to US17/767,919 priority patent/US20240092344A1/en
Priority to JP2022521761A priority patent/JP7414978B2/ja
Publication of WO2021068588A1 publication Critical patent/WO2021068588A1/zh


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/14 Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G08G 1/145 Traffic control systems for road vehicles indicating individual free spaces in parking areas where the indication depends on the parking areas
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W 30/06 Automatic manoeuvring for parking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/586 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/26 Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/14 Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G08G 1/141 Traffic control systems for road vehicles indicating individual free spaces in parking areas with means giving the indication of available parking spaces
    • G08G 1/143 Traffic control systems for road vehicles indicating individual free spaces in parking areas with means giving the indication of available parking spaces inside the vehicles
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/16 Anti-collision systems
    • G08G 1/168 Driving aids for parking, e.g. acoustic or visual feedback on parking space
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2420/00 Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W 2420/40 Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W 2420/403 Image sensing, e.g. optical camera
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2552/00 Input parameters relating to infrastructure
    • B60W 2552/53 Road markings, e.g. lane marker or crosswalk
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30264 Parking

Definitions

  • This application relates to the field of computer technology, and in particular to a parking space and direction angle detection method, apparatus, device, vehicle, and computer-readable storage medium.
  • Parking space detection plays an important role in subsequent vehicle path planning, tracking, and accurate parking; it is the most important part of an automatic parking system.
  • Existing parking space detection algorithms can be roughly divided into four categories: user-interface-based, infrastructure-based, free-space-based, and parking-space-marking-based methods. Among them, the marking-based method does not rely on the presence of neighboring vehicles but on the parking space markings themselves, so it can identify parking spaces more accurately.
  • Such algorithms generally detect the corner points of the parking space first, and then determine the parking space type and the parking space direction angle.
  • In existing solutions, the multiple tasks performed after corner detection are realized by multiple separate methods or by cascading multiple networks; such cascading consumes too much energy and offers low usability in a vehicle-embedded environment.
  • This application provides a parking space and direction angle detection method.
  • The method merges the two tasks of parking space category detection and parking space direction angle detection into one network for joint training, obtaining a parking space detection model. The model detects the image to be detected and determines whether a candidate parking space image identifies a real parking space and, if so, the direction angle of that parking space, which reduces energy consumption and has high usability.
  • This application also provides corresponding devices, equipment, vehicles, computer-readable storage media, and computer program products.
  • A first aspect of the present application provides a method for detecting a parking space and its direction angle.
  • The method includes: acquiring an image to be detected; identifying parking space corner points in the image to be detected and cropping the image according to the corner points to obtain a candidate parking space image; and detecting the candidate parking space image with a pre-trained parking space detection model to obtain a parking space detection result.
  • The parking space detection result is used to characterize whether the candidate parking space image identifies a real parking space.
  • When the candidate parking space image identifies a real parking space, the parking space detection result also includes the parking space direction angle.
  • A second aspect of the present application provides a parking space and direction angle detection apparatus, which includes:
  • an acquisition module, configured to acquire an image to be detected;
  • an identification module, configured to identify parking space corner points in the image to be detected and to crop the image according to the corner points to obtain a candidate parking space image; and
  • a detection module, configured to detect the candidate parking space image with a pre-trained parking space detection model to obtain a parking space detection result, where the result characterizes whether the candidate parking space image identifies a real parking space and, when it does, also includes the parking space direction angle.
  • A third aspect of the present application provides a device, which includes a processor and a memory:
  • the memory is configured to store a computer program; and
  • the processor is configured to call the computer program in the memory to execute the parking space and direction angle detection method described in the first aspect.
  • A fourth aspect of the present application provides a vehicle, which includes a parking system and a controller;
  • the parking system is configured to execute the parking space and direction angle detection method described in the first aspect to determine a parkable parking space, the parking space being determined from a candidate parking space image whose detection result identifies it as a real parking space; and
  • the controller is configured to control parking of the vehicle according to the parking space.
  • A fifth aspect of the present application provides a computer-readable storage medium for storing a computer program which, when executed by a processor, implements the parking space and direction angle detection method described in the first aspect.
  • A sixth aspect of the present application provides a computer program product which, when executed on a data processing device, executes the parking space and direction angle detection method described in the first aspect.
  • An embodiment of the application provides a model-based parking space and direction angle detection method.
  • The method merges the two tasks of determining the parking space type and determining the parking space direction angle into one network model for joint training, obtaining a parking space detection model.
  • After the image to be detected is acquired, the parking space corner points are identified in it, and the image is cropped according to the corner points to obtain candidate parking space images. A candidate parking space image is input directly into the parking space detection model, which predicts the parking space type and the parking space direction angle simultaneously and outputs the parking space detection result.
  • The parking space detection result indicates whether the candidate parking space image identifies a real parking space; when it does, the direction angle is output at the same time. This avoids the energy cost of cascading two networks for parking space classification and direction angle detection, and therefore has high usability.
  • FIG. 1 is a system architecture diagram of the model training method and the parking space and direction angle detection method in an embodiment of the application;
  • FIG. 2 is a flowchart of the model training method in an embodiment of the application;
  • FIG. 3 is a schematic diagram of a training sample in an embodiment of the application;
  • FIG. 4 is a flowchart of the parking space and direction angle detection method in an embodiment of the application;
  • FIG. 5 is a schematic diagram of the parking space and direction angle detection method in an embodiment of the application;
  • FIG. 6 is a schematic structural diagram of a model training apparatus in an embodiment of the application;
  • FIG. 7 is a schematic structural diagram of a parking space and direction angle detection apparatus in an embodiment of the application;
  • FIG. 8 is a schematic structural diagram of a server in an embodiment of the application.
  • This application proposes a model-based method for detecting parking spaces and their direction angles.
  • The two tasks of determining the parking space type and determining the parking space direction angle are integrated in advance into one network model for training, and candidate parking space images are cropped from the image to be detected according to the identified corner points.
  • The model training method can be applied to any processing device with image processing capabilities; the processing device may be a device with a central processing unit (CPU) and/or a graphics processing unit (GPU).
  • The processing device may be a terminal, where a terminal includes, but is not limited to, a personal computer (PC) or a workstation.
  • The processing device may also be a server, which may exist standalone or in the form of a cluster.
  • The model-based parking space and direction angle detection method is mainly applied in the parking system of a vehicle.
  • The above model training method can be stored in a processing device in the form of a computer program, and the processing device implements model training by running the program.
  • The above model-based parking space and direction angle detection method can likewise be stored in the form of a computer program in the parking system of a vehicle.
  • The parking system implements parking space detection by running the computer program.
  • The computer program may be standalone, or may be a functional module, plug-in, or applet integrated into another program.
  • The model training method and the model-based parking space and direction angle detection method provided in this application may be applied to, but are not limited to, the application environment shown in FIG. 1.
  • The server 101 and the vehicle parking system 102 are connected through a network, such as a wireless communication network.
  • The server 101 is also connected to a sample database 103, from which it obtains training samples. Each training sample includes a parking space sample image and its labeling information; the labeling information includes a parking space position and a parking space label, where the label characterizes whether the sample image identifies a real parking space and, if so, the parking space direction angle of that real parking space.
  • The server 101 uses the training samples to train a parking space detection model.
  • The parking space detection model takes a candidate parking space image as input and outputs a parking space detection result.
  • The parking space detection result characterizes whether the candidate parking space image identifies a real parking space and, when it does, the parking space direction angle of that real parking space.
  • During training, the parking space detection model extracts features from the parking space sample images and performs parking space detection based on the extracted image features.
  • The detection result is compared with the parking space label of the corresponding sample; the loss function of the model is calculated from the comparison, and the model parameters are updated based on the loss function.
  • When the training end condition is met, the server 101 may stop training and use the parking space detection model at that point for parking space and direction angle detection.
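  • The iterative cycle described above (predict, compare with the label to compute a loss, update the parameters) can be sketched in miniature. The one-parameter least-squares model below is purely illustrative and stands in for the real network's forward/compare/update loop:

```python
def train(samples, lr=0.1, steps=100):
    """Minimal gradient-descent loop: predict y = w*x, measure squared
    loss against the label, and update the single parameter w."""
    w = 0.0
    for _ in range(steps):
        grad = 0.0
        for x, y in samples:
            pred = w * x
            grad += 2 * (pred - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad / len(samples)    # parameter update step
    return w

# Labels generated by y = 3x, so training should recover w close to 3.
samples = [(x, 3.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]
w = train(samples)
```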
  • the server 101 sends the model parameters of the parking space detection model to the parking system 102 of the vehicle, so that the parking system 102 can use the parking space detection model to detect the parking space.
  • The parking system 102 obtains the image to be detected from images of the parking area captured by its cameras, identifies the parking space corner points in the image to be detected, and crops the image according to the corner points to obtain candidate parking space images.
  • Each candidate parking space image is detected by the parking space detection model to obtain a parking space detection result; when the result indicates that the candidate image identifies a real parking space, the result also includes the parking space direction angle of that real parking space.
  • The parking system 102 may also display the above parking space detection result on a display panel.
  • Referring to FIG. 2, the model training method includes:
  • S201: Obtain training samples.
  • Each training sample includes a parking space sample image and its labeling information.
  • The labeling information includes the parking space position and the parking space label corresponding to the parking space sample image.
  • The parking space label identifies whether the sample image identifies a real parking space and, if so, the corresponding parking space direction angle. In other words, for a positive sample the label identifies the sample image as a real parking space together with its direction angle, while for a negative sample the label identifies the image as not a real parking space.
  • The parking space sample image refers to an image containing a candidate parking space extracted from a surround-view mosaic.
  • The surround-view mosaic is an image stitched from parking area images captured by multiple cameras mounted around the vehicle (for example, four cameras).
  • The corner points of the parking spaces in the surround-view mosaic can be identified first.
  • The corner points can be identified through image grayscale analysis, edge detection, or machine learning, and the parking space entrance line can then be constructed from the corner points.
  • The parking space separation lines connected to the two end points of the entrance line can then be determined, and the surround-view mosaic is cropped based on the entrance line and separation lines to obtain parking space sample images.
  • Each parking space sample image can represent the parking space position through the entrance line and the two separation lines connected to it.
  • The two parking space separation lines connected to the entrance line may be perpendicular to the entrance line.
  • They may also be two separation lines that are not perpendicular to the entrance line, in which case the corresponding parking space is an oblique parking space.
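  • As an illustration of this geometry, the following sketch (hypothetical helper functions, not from the patent) classifies a parking space as oblique when a separation line is not perpendicular to the entrance line:

```python
import math

def separation_angle(entrance_a, entrance_b, sep_end):
    """Angle in degrees between the entrance line (a -> b) and the
    separation line starting at corner a (a -> sep_end)."""
    ex, ey = entrance_b[0] - entrance_a[0], entrance_b[1] - entrance_a[1]
    sx, sy = sep_end[0] - entrance_a[0], sep_end[1] - entrance_a[1]
    dot = ex * sx + ey * sy
    norm = math.hypot(ex, ey) * math.hypot(sx, sy)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def is_oblique(corners, tol_deg=5.0):
    """corners = [p1, p2, p3, p4]; p1-p2 is the entrance line,
    p1-p4 and p2-p3 are the separation lines."""
    p1, p2, p3, p4 = corners
    a1 = separation_angle(p1, p2, p4)
    a2 = separation_angle(p2, p1, p3)
    return abs(a1 - 90) > tol_deg or abs(a2 - 90) > tol_deg

# Separation lines perpendicular to the entrance: not oblique.
square = [(0, 0), (2, 0), (2, 5), (0, 5)]
# Separation lines at about 60 degrees to the entrance: oblique.
slanted = [(0, 0), (2, 0), (4.5, 4.33), (2.5, 4.33)]
```

The 5-degree tolerance is an arbitrary choice to absorb corner-detection noise.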
  • The above surround-view mosaic can be cropped to obtain multiple candidate parking space images.
  • A sample whose parking space image is labeled as a real parking space is a positive sample;
  • a sample whose parking space image is labeled as not a real parking space is a negative sample.
  • The parking space detection model to be trained identifies whether a parking space image identifies a real parking space and, when it does, outputs the parking space direction angle. Accordingly, each positive sample is also labeled with the parking space direction angle.
  • The direction angle may specifically be the angle between the central axis of the vehicle and the entrance line of the parking space, or the angle between the driving direction of the vehicle and the entrance line of the parking space, and so on.
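  • One possible computation of such a direction angle, here taken as the angle between the vehicle's central-axis direction vector and the entrance line, is sketched below. Folding the result into [0, 90] degrees is an assumption; the patent does not fix a convention:

```python
import math

def direction_angle(axis_vec, entrance_a, entrance_b):
    """Angle in degrees between the vehicle's central-axis direction
    vector and the parking space entrance line."""
    ex, ey = entrance_b[0] - entrance_a[0], entrance_b[1] - entrance_a[1]
    ax, ay = axis_vec
    dot = ax * ex + ay * ey
    norm = math.hypot(ax, ay) * math.hypot(ex, ey)
    ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return min(ang, 180 - ang)  # a line has no direction: fold to [0, 90]
```

For example, a vehicle axis of (1, 0) against an entrance line from (0, 0) to (0, 1) gives 90 degrees.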
  • The above labeling information is labeled on the surround-view mosaic and then transferred to the parking space image by coordinate conversion or similar means.
  • The training samples can thus be obtained by labeling and cropping the surround-view mosaic.
  • As shown in FIG. 3, the parking space at the left corner points 1, 2, 3, and 4 of the surround-view mosaic can be marked.
  • The line segment connecting left corner points 1 and 2 is the parking space entrance line, and the segments connecting corner points 1 and 4 and corner points 2 and 3 are the parking space separation lines; together they mark the parking space position. The parking space label is marked as a real parking space, the parking space direction angle is marked, and the corresponding region is cropped from the surround-view mosaic, yielding a positive sample.
  • Similarly, the parking space at corner points 1, 2, 3, and 4 on the right side of the image can be marked, where the segment connecting corner points 1 and 2 is the entrance line and the segments connecting corner points 1 and 4 and corner points 2 and 3 are the separation lines. The label is marked as a real parking space, the direction angle is marked, and the corresponding region is cropped from the surround-view mosaic to obtain another positive sample.
  • When the parking space label identifies a real parking space, it can also identify the parking space type.
  • Specifically, the label can identify the type of the real parking space as a vertical parking space, a parallel parking space, or an oblique parking space. As shown in FIG. 3, the parking space at the left corner points 1, 2, 3, and 4 of the image can be marked as a vertical parking space, and the parking space at the right corner points 1, 2, 3, and 4 as a parallel (horizontal) parking space.
  • In one implementation, the parking space label identifies whether the sample image identifies a real parking space or not.
  • A model trained with such samples predicts whether a candidate parking space image identifies a real parking space; the output probabilities P0 and P1 respectively represent the probability of being a real parking space and of not being a real parking space.
  • Alternatively, the parking space label can identify the sample image as a horizontal parking space, a vertical parking space, an oblique parking space, or not a real parking space.
  • A model trained with such samples predicts which of these categories a candidate parking space image belongs to; the output probabilities P0, P1, P2, and P3 respectively represent the probabilities of a horizontal parking space, a vertical parking space, an oblique parking space, and not a real parking space.
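  • The four output probabilities P0 through P3 would typically be produced by applying a softmax to the network's raw class scores; a minimal sketch with hypothetical logit values:

```python
import math

CLASSES = ["horizontal", "vertical", "oblique", "not_a_parking_space"]

def softmax(logits):
    """Map raw scores to probabilities that sum to 1."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

logits = [2.0, 0.5, -1.0, 0.1]            # hypothetical network scores
probs = softmax(logits)                   # P0..P3
predicted = CLASSES[probs.index(max(probs))]
```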
  • S202: Use the training samples to train the parking space detection model, and stop training when the loss function of the model meets the training end condition.
  • The parking space detection model takes a candidate parking space image marked with a parking space position as input and outputs a parking space detection result.
  • The parking space detection result characterizes whether the candidate parking space image identifies a real parking space.
  • When the candidate parking space image identifies a real parking space, the detection result also includes the parking space direction angle.
  • The server may determine the loss function of the parking space detection model from a classification loss and a regression loss, where the classification loss measures the error in predicting whether the image identifies a real parking space, and the regression loss measures the error in the predicted parking space direction angle.
  • The server can obtain the loss function of the model as a weighted sum of the classification loss and the regression loss.
  • When the classification result indicates that the candidate parking space image identifies an unreal parking space, no regression loss is generated, and the weight of the regression loss can be set to 0.
  • When the classification result indicates that the candidate parking space image identifies a real parking space, regression analysis is needed to produce the parking space direction angle, so the loss also includes the regression loss.
  • The weight of the regression loss can be set according to actual needs; as an example, it can be set to 1.
  • The classification loss can use softmax (cross-entropy) or another classification loss function, and the regression loss L_regression can be the absolute-value loss (L1 loss), the squared loss (L2 loss), or the Huber loss (smoothed mean absolute error).
  • The server uses a joint training method to train the parking space detection model; that is, parking space category detection and parking space direction angle detection share one loss function, which can be expressed as L = L_classification + w * L_regression, where w is the weight of the regression loss.
  • For a candidate image that identifies a real parking space, the loss function is affected by both the classification loss and the regression loss (w = 1).
  • For a candidate image that does not identify a real parking space, the loss function is affected only by the classification loss (w = 0), that is, L = L_classification.
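  • Under the description above, the joint loss can be sketched as cross-entropy plus an indicator-weighted Huber term. The class indexing (class 0 = real parking space) and the Huber delta are illustrative assumptions:

```python
import math

def cross_entropy(probs, true_class):
    """Classification loss: negative log-probability of the true class."""
    return -math.log(probs[true_class])

def huber(pred_angle, true_angle, delta=1.0):
    """Smoothed absolute-error (Huber) regression loss on the angle."""
    r = abs(pred_angle - true_angle)
    return 0.5 * r * r if r <= delta else delta * (r - 0.5 * delta)

def joint_loss(probs, true_class, pred_angle, true_angle, real_class=0):
    """L = L_classification + w * L_regression, with w = 1 for a real
    parking space and w = 0 otherwise (negative samples produce no
    regression loss)."""
    w = 1.0 if true_class == real_class else 0.0
    return cross_entropy(probs, true_class) + w * huber(pred_angle, true_angle)
```

For a negative sample, the angle arguments are ignored entirely, which is exactly the weight-zero behavior described above.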
  • The parking space detection model can be a neural network model.
  • For example, it can be a network model based on AlexNet, VGGNet, GoogLeNet, or MobileNet.
  • The parking space detection model can also be a network obtained by modifying the above networks.
  • Training the parking space detection model with training samples is in fact a process of iteratively updating the model parameters using the training samples.
  • The loss function of the model reflects the degree of deviation between the predicted values and the true values. When the loss function tends to converge, there is little room for further optimization of the model; this can be regarded as meeting the training end condition, and the server can stop training.
  • Alternatively, when the loss function falls below a preset threshold, the training end condition can also be regarded as met, and the server can stop training.
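  • The two stopping criteria, convergence of the loss and a preset threshold, can be sketched as follows; all numeric values are illustrative, not from the patent:

```python
def should_stop(loss_history, threshold=0.01, window=5, eps=1e-4):
    """Stop when the latest loss falls below a preset threshold, or when
    the loss has effectively converged (its spread over the last
    `window` values is smaller than eps)."""
    if not loss_history:
        return False
    if loss_history[-1] < threshold:       # threshold criterion
        return True
    if len(loss_history) >= window:        # convergence criterion
        recent = loss_history[-window:]
        return max(recent) - min(recent) < eps
    return False
```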
  • This embodiment of the application provides a model training method that merges the two tasks of determining the parking space type and determining the parking space direction angle into one network model for joint training. Specifically, for each parking space sample image, the parking space position is marked by the entrance line and the two separation lines connected to its two end points, and the parking space label identifies whether the sample image is a real parking space; when it is, the parking space direction angle is also marked.
  • This fuses the training samples of the two tasks, and the fused samples are then used to jointly train the parking space detection model.
  • The loss function includes the classification loss and the regression loss.
  • For negative samples, the weight of the regression loss is set to 0, so that the two tasks can be merged into one network model for training, which improves computing performance and training efficiency and has high usability.
  • Referring to FIG. 4, the parking space and direction angle detection method includes:
  • S401: Acquire an image to be detected.
  • When a vehicle enters a parking lot or another place where it can park, the parking system can trigger the cameras to capture images.
  • The parking system can stitch the parking area images captured by the cameras on the front, rear, left, and right of the vehicle into a surround-view image.
  • The stitched image serves as the image to be detected.
  • S402: Identify the parking space corner points in the image to be detected, and crop the image according to the corner points to obtain candidate parking space images.
  • the corners of parking spaces are generally identified by T-shaped marks or L-shaped marks.
  • the parking system of the vehicle can use machine learning to extract the corner features of the parking spaces to identify the corners of the parking spaces in the image to be detected, or
  • the corner points of the parking space are identified by means of edge detection, image grayscale, and the like.
  • the parking system can determine all possible parking space entry lines according to the parking space corner points, and then determine the parking space separation line based on the parking space entrance line and the parking space The separation line can be cropped to obtain the candidate parking image.
  • Since the candidate image depends on which entrance line is selected, cropping a single surround-view stitched image can yield multiple candidate parking space images.
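Cropping one surround-view image into several candidate parking space images starts from enumerating possible entrance lines. A minimal sketch, assuming corner points are already detected as 2-D coordinates and that an entrance-line candidate is simply any corner pair of plausible length; `candidate_entrance_lines` and the metre thresholds are this illustration's assumptions, not the patent's:

```python
import itertools
import math

def candidate_entrance_lines(corners, min_len=2.0, max_len=8.0):
    """Enumerate plausible parking space entrance lines.

    Every pair of detected corner points is a potential entrance line;
    pairs are kept only if their length falls in a plausible range
    (thresholds are illustrative, in metres).
    """
    lines = []
    for p, q in itertools.combinations(corners, 2):
        if min_len <= math.dist(p, q) <= max_len:
            lines.append((p, q))
    return lines

# Four corners of one slot plus a stray detection far away.
corners = [(0.0, 0.0), (2.5, 0.0), (0.0, 5.0), (2.5, 5.0), (40.0, 0.0)]
lines = candidate_entrance_lines(corners)
```

Each surviving pair then yields one candidate crop, which is why one surround-view image produces several candidate parking space images.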
  • S403: Detect the candidate parking space image through a pre-trained parking space detection model to obtain a parking space detection result.
  • the parking space detection result is used to characterize whether the candidate parking space image identifies a real parking space, and when the candidate parking space image identifies a real parking space, the parking space detection result further includes a parking space direction angle.
  • The parking space detection model is trained by the model training method of the embodiment shown in FIG. 2: training is performed with the training samples and stopped when the loss function of the model meets the training end condition.
  • The model thus obtained can be used as the parking space detection model with which the parking system performs parking space detection.
  • the parking space detection model includes a feature extraction layer, a classification layer, and a regression layer.
  • During detection, the feature extraction layer extracts features from the candidate parking space image to obtain the corresponding image features; based on the hidden-layer mapping learned in the training stage, the classification layer then classifies the candidate image according to these features.
  • When the classification result indicates a real parking space, the regression layer performs regression analysis on the image features to obtain the parking space direction angle.
  • When the classification result indicates an unreal parking space, the classification result is used directly as the parking space detection result.
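The three-part structure just described (a shared feature extraction layer feeding a classification layer and a regression layer, with the regression output consulted only for real parking spaces) can be sketched as a toy numpy forward pass. The layer sizes, class indices, and random weights are illustrative assumptions; this is not the patent's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

class ParkingSpaceDetector:
    """Toy shared-backbone model: one feature extractor, two heads."""

    def __init__(self, in_dim=32, feat_dim=16, n_classes=2):
        self.w_feat = rng.standard_normal((in_dim, feat_dim)) * 0.1
        self.w_cls = rng.standard_normal((feat_dim, n_classes)) * 0.1
        self.w_reg = rng.standard_normal(feat_dim) * 0.1

    def forward(self, x, not_real_class=1):
        feat = np.maximum(x @ self.w_feat, 0.0)   # feature extraction layer (ReLU)
        logits = feat @ self.w_cls                # classification layer
        probs = np.exp(logits - logits.max())
        probs = probs / probs.sum()
        label = int(probs.argmax())
        if label == not_real_class:
            return {"real": False}                # no angle for an unreal slot
        angle = float(feat @ self.w_reg)          # regression layer: direction angle
        return {"real": True, "angle": angle}

model = ParkingSpaceDetector()
result = model.forward(rng.standard_normal(32))
```

The design point is that both heads share the same extracted features, so classification and angle regression cost one backbone pass instead of two cascaded networks.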
  • the parking system can display the detection result through a display panel, so that the user can park according to the displayed detection result. It should be noted that the user can manually park the vehicle according to the detection result, or the parking system can send the detection result to the controller, and the controller can control the vehicle to park automatically.
  • Further, when the parking space labels of the training samples identify the type of the real parking space, the parking space detection result specifically identifies the parking space position as a vertical parking space, a parallel parking space, a diagonal parking space, or not a real parking space.
  • When the parking system displays the parking space detection result on the display panel, parking spaces of a designated type can be highlighted; the designated type can be any one of vertical, parallel, or diagonal parking spaces.
  • Figure 5 shows a schematic diagram of inputting a candidate parking space image into a parking space detection model to obtain a parking space detection result.
  • As shown in FIG. 5, the input candidate parking space image includes T-shaped marks that identify the parking space corner points.
  • The image is annotated with the parking space entrance line determined from the corner points and the parking space separation lines respectively connected to the two end points of the entrance line.
  • In practice, the entrance line and the separation lines can be distinguished by different colors or line types.
  • By recognizing the candidate image, the parking space detection model determines whether the parking space identified by the entrance line and separation lines is a real parking space and, if so, further outputs the parking space direction angle.
  • In summary, the embodiment of the present application provides a parking space and direction angle detection method.
  • The method recognizes parking spaces with a pre-trained parking space detection model; since the parking space type and the parking space direction angle are fused into one deep neural network, computing performance is improved.
  • Because the angle and the parking space type are detected at the same time, the classification network only needs to determine whether a region is a parking space, without determining its type.
  • The parking space direction angle is used to assist in determining the type of the parking space.
  • It should be noted that the embodiment shown in FIG. 4 takes the parking system as an example.
  • In other possible implementations, the parking system may instead send the image to be detected to a server, which then detects the parking space and its direction angle.
  • the embodiment of the application also provides a corresponding model training device and a parking space detection device.
  • the following is a detailed introduction from the perspective of functional modularity.
  • Referring to the schematic structural diagram of the model training device shown in FIG. 6, the device 600 includes:
  • the obtaining module 610 is configured to obtain training samples, the training samples including parking space sample images and labeling information thereof, the labeling information including parking space positions and parking space labels, the parking space label is used to identify whether the parking space sample image identifies a real parking space And the direction angle of the parking space corresponding to the real parking space;
  • The training module 620 is configured to train a parking space detection model using the training samples and stop training when the loss function of the model satisfies the training end condition.
  • The parking space detection model takes a candidate parking space image marked with a parking space position as input and outputs a parking space detection result, where the detection result characterizes whether the candidate image identifies a real parking space and, when it does, further includes the parking space direction angle.
  • Optionally, the regression loss is determined by any one of the square loss, the absolute value loss, or the smoothed mean absolute error (Huber) loss.
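The three regression-loss options named above can be written out directly. A plain-Python sketch; the Huber threshold `delta` is a conventional choice, not a value specified by the text:

```python
def l2_loss(err):
    # Square loss.
    return err ** 2

def l1_loss(err):
    # Absolute value loss.
    return abs(err)

def huber_loss(err, delta=1.0):
    # Smoothed mean absolute error: quadratic near zero, linear in the tails.
    if abs(err) <= delta:
        return 0.5 * err ** 2
    return delta * (abs(err) - 0.5 * delta)
```

Huber behaves like the square loss for small angle errors and like the absolute loss for large ones, which makes it less sensitive to outlier angle annotations.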
  • the parking space detection model is a network model based on AlexNet, VGGNet, GoogleNet, or MobileNet.
  • Referring to the schematic structural diagram of the parking space and direction angle detection device shown in FIG. 7, the device 700 includes:
  • the acquiring module 710 is configured to acquire the image to be detected;
  • the cropping module 720 is configured to identify corners of parking spaces in the images to be detected, and crop the images to be detected according to the corners of the parking spaces to obtain candidate parking images;
  • The detection module 730 is configured to detect the candidate parking space image through the pre-trained parking space detection model to obtain a parking space detection result.
  • The parking space detection result characterizes whether the candidate parking space image identifies a real parking space.
  • When the candidate image identifies a real parking space, the detection result further includes the parking space direction angle.
  • the parking space detection model includes a feature extraction layer, a classification layer, and a regression layer;
  • The detection module 730 is specifically configured to: extract features from the candidate parking space image through the feature extraction layer to obtain the corresponding image features; classify the candidate image with the classification layer according to the image features;
  • and, when the classification result indicates a real parking space, perform regression analysis with the regression layer according to the image features to obtain the parking space direction angle.
  • the loss function of the parking space detection model is determined based on classification loss and regression loss, the classification loss is used to measure the loss generated by predicting the real parking space, and the regression loss is used to measure the loss generated by predicting the direction angle of the parking space .
  • Optionally, the loss function of the parking space detection model is the weighted sum of the classification loss and the regression loss, and when the classification result of the classification layer indicates that the candidate parking space image identifies an unreal parking space, the weight of the regression loss is 0.
  • Optionally, the parking space detection model is obtained through training with training samples; a training sample includes a parking space sample image and its annotation information, the annotation information includes the parking space position and parking space label corresponding to the sample image, and the parking space label is used to identify whether the sample image identifies a real parking space and the parking space direction angle corresponding to the real parking space.
  • Optionally, when the parking space label identifies that the parking space sample image identifies a real parking space, the label specifically identifies the sample image as showing a vertical parking space, a parallel parking space, or a diagonal parking space.
  • the parking space position is characterized by a parking space entrance line and two parking space separation lines respectively connected to two end points of the parking space entrance line.
  • the distance from the midpoint of the parking space entrance line to the two sides of the parking space sample image that are parallel or coincident with the parking space separation line is equal.
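The equal-distance annotation rule amounts to centering the crop window laterally on the midpoint of the entrance line. A sketch for the axis-aligned case; the window size and the inward crop direction are assumptions made for illustration:

```python
def centered_crop_bounds(entry_p, entry_q, crop_w, crop_h):
    """Crop window centred laterally on the midpoint of the entrance line.

    The midpoint ends up equidistant from the two window sides that are
    parallel to the separation lines (axis-aligned sketch only); the window
    extends from the entrance line inward by crop_h.
    """
    mx = (entry_p[0] + entry_q[0]) / 2.0
    my = (entry_p[1] + entry_q[1]) / 2.0
    left = mx - crop_w / 2.0
    top = my
    return left, top, left + crop_w, top + crop_h

# Entrance line from (10, 40) to (30, 40): midpoint is (20, 40).
l, t, r, b = centered_crop_bounds((10.0, 40.0), (30.0, 40.0), crop_w=40.0, crop_h=60.0)
```

Because the window is symmetric about the midpoint, later processing never needs to compensate for a lateral offset of the parking space inside the crop.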
  • The embodiment of the present application further provides a device for implementing the parking space and direction angle detection method of the present application.
  • The device may specifically be a server, which is introduced below from a hardware perspective.
  • As shown in FIG. 8, the server 800 may vary considerably due to different configurations or performance, and may include one or more central processing units (CPUs) 822 (for example, one or more processors), memory 832, and one or more storage media 830 (for example, one or more mass storage devices) storing application programs 842 or data 844.
  • the memory 832 and the storage medium 830 may be short-term storage or persistent storage.
  • the program stored in the storage medium 830 may include one or more modules (not shown in the figure), and each module may include a series of command operations on the server.
  • the central processing unit 822 may be configured to communicate with the storage medium 830, and execute a series of instruction operations in the storage medium 830 on the server 800.
  • the server 800 may also include one or more power supplies 826, one or more wired or wireless network interfaces 850, one or more input and output interfaces 858, and/or one or more operating systems 841, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, etc.
  • the steps performed by the server in the foregoing embodiment may be based on the server structure shown in FIG. 8.
  • The CPU 822 is configured to perform the following steps: acquire an image to be detected; identify the parking space corner points in the image and crop it according to the corner points to obtain candidate parking space images;
  • and detect the candidate parking space images through the pre-trained parking space detection model to obtain a parking space detection result.
  • The parking space detection result characterizes whether the candidate parking space image identifies a real parking space.
  • When the candidate image identifies a real parking space, the detection result further includes the parking space direction angle.
  • Optionally, the CPU 822 is also configured to execute the steps of any implementation of the parking space and direction angle detection method provided in the embodiments of the present application.
  • An embodiment of the present application also provides a device, which includes a processor and a memory:
  • the memory is used to store a computer program
  • the processor is used to call the computer program in the memory to execute the parking space and its direction angle detection method as described in this application.
  • An embodiment of the present application also provides a vehicle, the vehicle including a parking system and a controller;
  • The parking system is configured to execute the above parking space and direction angle detection method to determine a parkable parking space, the parkable parking space being determined from a candidate parking space image identified as a real parking space by the parking space detection result;
  • the controller is used for controlling the parking of the vehicle according to the parking space.
  • The embodiments of the present application also provide a computer-readable storage medium storing a computer program for executing any implementation of the parking space and direction angle detection method described in the foregoing embodiments.
  • The embodiments of the present application also provide a computer program product including instructions which, when run on a computer, cause the computer to execute any implementation of the parking space and direction angle detection method described in the foregoing embodiments.
  • the disclosed system, device, and method may be implemented in other ways.
  • The device embodiments described above are merely illustrative. For example, the division into units is only a division by logical function; in actual implementation there may be other ways of division, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • The technical solution of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product stored in a storage medium.
  • The software product includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of this application.
  • The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.


Abstract

A parking space and direction angle detection method, apparatus, device, and computer-readable storage medium. The detection method includes: acquiring an image to be detected (S401); identifying parking space corner points in the image and cropping the image according to the corner points to obtain candidate parking space images (S402); and detecting the candidate parking space images with a pre-trained parking space detection model to obtain a parking space detection result (S403). The detection result characterizes whether a candidate parking space image identifies a real parking space; when it does, the result further includes the parking space direction angle. By fusing the two tasks into one network model for joint training and predicting with that model, the performance cost of cascading two networks for separate parking space classification and direction-angle detection is avoided, giving high availability.

Description

Parking space and direction angle detection method, apparatus, device, and medium
This application claims priority to Chinese Patent Application No. 201910969470.1, filed with the Chinese Patent Office on October 12, 2019 and entitled "Parking space and direction angle detection method, apparatus, device, and medium", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of computer technology, and in particular to a parking space and direction angle detection method, apparatus, device, vehicle, and computer-readable storage medium.
Background
In an automatic parking system, parking space detection plays an important role in subsequent functions such as vehicle path planning, tracking, and accurate parking into the space. Parking space detection is the most important component of an automatic parking system.
Existing parking space detection algorithms fall roughly into four categories: user-interface-based, infrastructure-based, free-space-based, and parking-slot-marking-based methods. Among them, marking-based methods can identify parking spaces more accurately, because their recognition process does not depend on the presence of adjacent vehicles but on the parking slot markings.
At present, such algorithms generally detect the parking space corner points first and then determine the parking space type and angle. The multiple tasks handled after corner detection are implemented by multiple methods or by cascading multiple networks. In a vehicle-mounted embedded environment, this cascading costs too much performance and has low availability.
Summary
This application provides a parking space and direction angle detection method. The method fuses the two tasks of parking space type detection and parking space direction angle detection into one network for joint training to obtain a parking space detection model; the model detects an image to be detected to determine whether a candidate parking space image identifies a real parking space, as well as the direction angle of the real parking space, which reduces performance cost and offers high availability. This application also provides a corresponding apparatus, device, vehicle, computer-readable storage medium, and computer program product.
A first aspect of this application provides a parking space and direction angle detection method, the method comprising:
acquiring an image to be detected;
identifying parking space corner points in the image to be detected, and cropping the image to be detected according to the corner points to obtain a candidate parking space image;
detecting the candidate parking space image with a pre-trained parking space detection model to obtain a parking space detection result, the detection result characterizing whether the candidate parking space image identifies a real parking space and, when it does, further including the parking space direction angle.
A second aspect of this application provides a parking space and direction angle detection apparatus, the apparatus comprising:
an acquisition module, configured to acquire an image to be detected;
a recognition module, configured to identify parking space corner points in the image to be detected and crop the image according to the corner points to obtain a candidate parking space image;
a detection module, configured to detect the candidate parking space image with a pre-trained parking space detection model to obtain a parking space detection result, the detection result characterizing whether the candidate parking space image identifies a real parking space and, when it does, further including the parking space direction angle.
A third aspect of this application provides a device, the device comprising a processor and a memory:
the memory is configured to store a computer program;
the processor is configured to call the computer program in the memory to execute the parking space and direction angle detection method according to the first aspect.
A fourth aspect of this application provides a vehicle, the vehicle comprising a parking system and a controller;
the parking system is configured to execute the parking space and direction angle detection method according to the first aspect to determine a parkable parking space, the parkable parking space being determined from a candidate parking space image identified as a real parking space by the parking space detection result;
the controller is configured to control the vehicle to park according to the parkable parking space.
A fifth aspect of this application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the parking space and direction angle detection method according to the first aspect.
A sixth aspect of this application provides a computer program product which, when executed on a data processing device, is adapted to execute a program initialized with the parking space and direction angle detection method according to the first aspect.
It can be seen from the above technical solutions that the embodiments of this application have the following advantages:
The embodiments of this application provide a model-based parking space and direction angle detection method. The method fuses the two tasks of determining the parking space type and determining the parking space direction angle into one network model for joint training to obtain a parking space detection model. Thus, after acquiring an image to be detected, identifying the parking space corner points in it, and cropping it according to the corner points to obtain a candidate parking space image, the candidate image is fed directly into the parking space detection model, which predicts the parking space type and the parking space direction angle at the same time to obtain a parking space detection result. The detection result characterizes whether the candidate image identifies a real parking space and, when it does, the parking space direction angle is output at the same time. This avoids the performance cost of cascading two networks for separate parking space classification and direction-angle detection, and offers high availability.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present invention or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a system architecture diagram of a model training method and a parking space and direction angle detection method in an embodiment of this application;
FIG. 2 is a flowchart of a model training method in an embodiment of this application;
FIG. 3 is a schematic diagram of a training sample in an embodiment of this application;
FIG. 4 is a flowchart of a parking space and direction angle detection method in an embodiment of this application;
FIG. 5 is a schematic diagram of a parking space and direction angle detection method in an embodiment of this application;
FIG. 6 is a schematic structural diagram of a model training apparatus in an embodiment of this application;
FIG. 7 is a schematic structural diagram of a parking space and direction angle detection apparatus in an embodiment of this application;
FIG. 8 is a schematic structural diagram of a server in an embodiment of this application.
Detailed Description
To help those skilled in the art better understand the solutions of this application, the technical solutions in the embodiments of this application are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The terms "first", "second", "third", "fourth", and the like (if any) in the specification, claims, and drawings of this application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described here can, for example, be implemented in orders other than those illustrated or described. Moreover, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the listed steps or units, but may include other steps or units not listed or inherent to such a process, method, product, or device.
To address the problem that, in existing parking space detection algorithms, the multiple tasks handled after corner detection are implemented by multiple methods or cascaded networks, which costs too much performance in a vehicle-mounted embedded environment and has low availability, this application provides a model-based parking space and direction angle detection method. The two tasks of determining the parking space type and determining the parking space direction angle are fused in advance into one network model for training; the candidate parking space image cropped from the corner points of the image to be detected is fed into the model, which detects the parking space type and the direction angle at the same time, thereby solving the performance problem caused by network cascading in the vehicle-mounted embedded environment.
The technical solutions of this application are introduced below from the perspectives of model training and model application. The model training method can be applied to any processing device with image processing capability; the processing device may specifically be a device with a central processing unit (CPU) and/or a graphics processing unit (GPU). The processing device may be a terminal, including but not limited to a personal computer (PC) or a workstation, or it may be a server; it may exist alone or in the form of a cluster. Correspondingly, the model-based parking space and direction angle detection method is mainly applied to the parking system of a vehicle.
The model training method can be stored in the processing device in the form of a computer program, and the processing device implements model training by running the program; likewise, the model-based parking space and direction angle detection method can be stored in the vehicle's parking system as a computer program, and the parking system implements parking space detection by running it. The computer program may be standalone, or may be a functional module, plug-in, applet, or the like integrated into another program.
It can be understood that the model training method and the model-based parking space and direction angle detection method provided by this application include, but are not limited to, application in the environment shown in FIG. 1.
As shown in FIG. 1, the server 101 and the parking system 102 of the vehicle are connected through a network such as a wireless communication network, and the server 101 is also connected to a sample database 103, so that the server 101 can obtain training samples from the sample database 103. A training sample includes a parking space sample image and its annotation information; the annotation information includes a parking space position and a parking space label, and the label characterizes whether the sample image identifies a real parking space and the direction angle corresponding to the real parking space.
The server 101 then trains the parking space detection model with the training samples. The model takes a candidate parking space image as input and outputs a parking space detection result characterizing whether the candidate image identifies a real parking space and the direction angle of the real parking space. After a training sample is fed in, the model extracts features from the sample image and performs parking space detection based on the extracted features; the detection result is compared with the parking space label of the sample image, the loss function of the model is computed from the comparison, and the model parameters are updated based on this loss function. When the loss function satisfies the training end condition, for example when it converges or falls below a preset threshold, the server 101 can stop training and use the resulting model for parking space and direction angle detection.
After the parking space detection model is trained, the server 101 sends its model parameters to the parking system 102 of the vehicle, so that the parking system can use the model to detect parking spaces. Specifically, the parking system 102 obtains the image to be detected from the parking space images it captures, identifies the parking space corner points in it, crops it according to the corner points to obtain candidate parking space images, and detects them with the model to obtain a parking space detection result; when the result characterizes a candidate image as a real parking space, the result also includes the direction angle of that parking space. The parking system 102 can then display the detection result on a display panel.
Next, the model training method and the parking space and direction detection method of this application are introduced from the perspectives of the server and the vehicle parking system, respectively.
Referring to the flowchart of the model training method shown in FIG. 2, the method includes:
S201: Acquire training samples.
The training samples include parking space sample images and their annotation information; the annotation information includes the parking space position corresponding to the sample image and a parking space label, which identifies whether the sample image identifies a real parking space and the direction angle corresponding to the real parking space. In other words, for a positive sample the label identifies the sample image as a real parking space and records the direction angle of that parking space; for a negative sample the label identifies the sample image as not a real parking space.
A parking space sample image is an image containing a candidate parking space extracted from a surround-view stitched image, which is obtained by stitching parking space images captured by multiple cameras of the vehicle (for example, four cameras) including the front and rear cameras. For the surround-view image, the parking space corner points can first be identified, for example by image grayscale analysis, edge detection, or machine learning; a parking space entrance line can then be constructed from the corner points, the separation lines respectively connected to its two end points can be determined from the entrance line and the other corner points, and the surround-view image can be cropped according to the entrance line and separation lines to obtain the parking space sample image.
In a specific implementation, each parking space sample image can characterize the parking space position by the entrance line and the two separation lines connected to it. The two separation lines may be perpendicular to the entrance line, in which case the corresponding parking space is a parallel or vertical parking space; they may also intersect the entrance line without being perpendicular to it, in which case the corresponding parking space is a diagonal parking space.
It should be noted that cropping the surround-view image can yield multiple candidate parking space images: samples whose image marks a real parking space are positive samples, and samples whose image marks an unreal parking space are negative samples. The parking space detection model to be trained in the embodiments of this application identifies whether a parking space image is a real parking space and, when the recognition result is a real parking space, outputs the direction angle; accordingly, positive samples are further annotated with the direction angle, which may specifically be the angle between the vehicle's central axis and the entrance line, or the angle between the driving direction and the entrance line, and so on. The annotation is made on the surround-view image and then transferred to the parking space image by coordinate transformation or similar means.
In practical applications, for convenience of computation and processing, when the surround-view image is cropped to obtain the sample image, the crop can be made around the midpoint of the entrance line so that the midpoint is equidistant from the two sides of the sample image that are parallel to or coincide with the separation lines; this avoids the extra computation of compensating offsets in subsequent image processing.
For ease of understanding, refer to the schematic surround-view image in FIG. 3; training samples can be obtained by annotating and cropping it. For example, the parking space at corner points 1, 2, 3, and 4 on the left side of the surround-view image can be annotated: the segment of left corner points 1 and 2 is taken as the entrance line, the segments of left corner points 1 and 4 and of left corner points 2 and 3 as the separation lines marking the parking space position, the label is set to real parking space, the direction angle is annotated, and the corresponding image is cropped out of the surround-view image, giving one positive sample. Similarly, the parking space at corner points 1, 2, 3, and 4 on the right side of the image can be annotated, with the segment of corner points 1 and 2 as the entrance line and the segments of corner points 1 and 4 and of corner points 2 and 3 as the separation lines, labeled as a real parking space together with its direction angle, and cropped out, giving another positive sample.
It should be noted that, considering that real parking spaces can include multiple types, the label can also identify the type when it identifies the position as a real parking space. In one example, the label specifically identifies the type of the real parking space as a vertical, parallel, or diagonal parking space; as shown in FIG. 3, the parking space at corner points 1 to 4 on the left of the image can be identified as a vertical parking space, and the one at corner points 1 to 4 on the right as a parallel parking space.
That is, the parking space label may identify whether the sample image identifies a real parking space or not; a model trained on such samples mainly predicts whether a candidate image identifies a real parking space, with output probabilities P0 and P1 characterizing the probabilities of being and not being a real parking space, respectively. Considering that real parking spaces come in different types, the label may instead identify the sample image as a parallel parking space, a vertical parking space, a diagonal parking space, or not a real parking space; a model trained on such samples predicts among these cases, with output probabilities P0, P1, P2, and P3 characterizing the probabilities of a parallel parking space, a vertical parking space, a diagonal parking space, and not a real parking space, respectively.
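The four-way output P0..P3 described above is a standard softmax over four classes. A small sketch in plain Python; the class ordering and the logit values are this illustration's choices:

```python
import math

CLASSES = ["parallel", "vertical", "diagonal", "not_real"]

def softmax(logits):
    # Numerically stable softmax over the four class logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical classification-head outputs for one candidate parking space image.
probs = softmax([0.2, 3.1, 0.4, -1.0])                         # P0..P3
prediction = CLASSES[max(range(len(probs)), key=probs.__getitem__)]
```

The probabilities sum to 1, so the "not a real parking space" case is just one more class rather than a separate network.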
In the example of FIG. 3, if the segment of corner points 1 and 3 on the left of the parking space image is taken as the entrance line, the two separation lines respectively connected to its two end points are determined from it, and the corresponding parking space position is annotated as not a real parking space, a negative sample is formed.
S202: Train the parking space detection model with the training samples, and stop training when the loss function of the parking space detection model satisfies the training end condition.
The parking space detection model takes a candidate parking space image annotated with a parking space position as input and outputs a parking space detection result, which characterizes whether the candidate image identifies a real parking space; when it does, the result further includes the parking space direction angle.
To carry out the two tasks of parking space classification and direction-angle localization simultaneously, the loss functions corresponding to the two tasks are fused. Specifically, the server can determine the loss function of the model from a classification loss and a regression loss, where the classification loss measures the loss produced by predicting the real parking space, and the regression loss measures the loss produced by predicting the parking space direction angle.
In a specific implementation, the server can obtain the model's loss function by a weighted combination of the classification loss and the regression loss. When the classification result characterizes the candidate image as an unreal parking space, no regression loss arises, and the weight of the regression loss can be set to 0; when it characterizes a real parking space, regression analysis must also be performed to generate the direction angle, so the loss also includes the regression loss, whose weight can be set as needed, for example to 1. In the loss function of the model, the classification loss can use a classification loss function such as softmax, and the regression loss can use a regression loss function L_regression, for example the absolute value loss L1 loss, the square loss L2 loss, or the smoothed mean absolute error Huber loss.
Specifically, the server trains the parking space detection model by joint training, that is, parking space type detection and direction-angle detection share one loss function, expressed as:
loss_detect = α*L_softmax + f(.)*L_regression        (1)
where, in the second loss term of formula (1), the range of f(.) is {0, 1}: f(.) = 1 when the classification result characterizes a real parking space, and f(.) = 0 for an unreal parking space.
Thus, for a positive training sample the loss function is affected by both the classification loss and the regression loss, as in formula (1); for a negative training sample it is affected only by the classification loss, and can be expressed as:
loss_detect = α*L_softmax         (2)
In practical applications, the parking space detection model may specifically be a neural network model, for example a network model based on AlexNet, VGGNet, GoogleNet, or MobileNet; it may also be a network derived from these with some network-layer modifications.
It can be understood that training the model with the training samples is actually a process of iteratively updating the model parameters with the samples. The loss function reflects the degree of deviation between predicted and true values; when it tends to converge, the model has little room for further optimization, the training end condition can be considered satisfied, and the server can stop training. In some cases, the training end condition can also be considered satisfied when the loss function falls below a preset threshold, and the server can stop training.
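The two stopping conditions (loss convergence, or loss below a preset threshold) can be sketched as a simple check over the recent loss history. The window size and tolerance are illustrative choices, not values from the text:

```python
def should_stop(loss_history, threshold=0.01, window=5, tol=1e-4):
    """Stop when the latest loss is below a preset threshold, or when the
    loss has converged (the last `window` losses vary by less than `tol`)."""
    if not loss_history:
        return False
    if loss_history[-1] < threshold:
        return True
    if len(loss_history) >= window:
        recent = loss_history[-window:]
        return max(recent) - min(recent) < tol
    return False

# Loss plateaued around 0.1: treated as converged even though above threshold.
converged = should_stop([0.5, 0.10001, 0.10000, 0.10002, 0.10001, 0.10000])
```

Either condition alone suffices to end training, which matches the "converges or falls below a preset threshold" wording above.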
As can be seen from the above, the embodiment of this application provides a model training method that fuses the two tasks of determining the parking space type and determining the parking space direction angle into one network model for joint training. Specifically, for a parking space sample image, the parking space position is annotated with the entrance line and the two separation lines respectively connected to its two end points; the parking space label identifies whether the sample image is a real parking space, and the direction angle is also annotated when a real parking space is identified. The training samples corresponding to the two tasks are thereby fused, and the fused samples are used to jointly train the parking space detection model. Since the model carries two tasks, its loss functions also need to be fused: the loss function includes a classification loss and a regression loss, and when the label identifies the parking space position as not a real parking space, the weight of the regression loss is set to 0. In this way the two tasks are merged into one network model for training, which improves computing performance and training efficiency and offers high availability.
Next, the parking space and direction angle detection method provided by the embodiment of this application is introduced from the perspective of the parking system. Referring to the flowchart of the method shown in FIG. 4, the method includes:
S401: Acquire an image to be detected.
Specifically, when the vehicle enters a parking lot or another place where it can park, the parking system can trigger the cameras to perform a capture operation; the surround-view image obtained by stitching the parking space images captured by the cameras on the front, rear, left, and right of the vehicle serves as the image to be detected.
S402: Identify the parking space corner points in the image to be detected, and crop the image according to the corner points to obtain candidate parking space images.
In parking lots and similar scenes, parking space corner points are generally identified by T-shaped or L-shaped marks; based on this, the vehicle's parking system can identify the corner points in the image to be detected by extracting corner features with machine learning, or by edge detection, image grayscale analysis, and the like.
After the corner points are identified, the parking system can determine all possible parking space entrance lines from them, then determine the separation lines from each entrance line, and crop candidate parking space images according to the entrance and separation lines.
Since the candidate image depends on which entrance line is selected, cropping one surround-view stitched image can yield multiple candidate parking space images.
S403: Detect the candidate parking space images with the pre-trained parking space detection model to obtain a parking space detection result.
The parking space detection result characterizes whether the candidate parking space image identifies a real parking space; when it does, the result further includes the parking space direction angle.
The parking space detection model is trained by the model training method of the embodiment shown in FIG. 2: training is performed with the training samples and stopped when the loss function of the model satisfies the training end condition; the model thus obtained can serve as the parking space detection model with which the parking system detects parking spaces.
It can be understood that the model includes a feature extraction layer, a classification layer, and a regression layer. During detection, the feature extraction layer first extracts the features of the candidate image to obtain the corresponding image features; based on the hidden-layer mapping learned in the training stage, the classification layer then classifies the candidate image according to those features. When the classification result characterizes a real parking space, the regression layer performs regression analysis on the image features to obtain the direction angle; when it characterizes an unreal parking space, the classification result is used directly as the detection result.
In a specific implementation, the parking system can display the detection result on a display panel so that the user can park according to the displayed result. It should be noted that the user may park manually according to the result, or the parking system may send the result to the controller, which controls the vehicle to park automatically.
Further, when the parking space labels of the training samples identify the type of the real parking space as vertical, parallel, or diagonal, the detection result specifically identifies the parking space position as a vertical parking space, a parallel parking space, a diagonal parking space, or not a real parking space. When displaying the result on the display panel, the parking system can highlight parking spaces of a designated type, which can be any one of vertical, parallel, or diagonal parking spaces.
FIG. 5 shows a schematic diagram of feeding a candidate parking space image into the parking space detection model to obtain a detection result. As shown in FIG. 5, the input candidate image includes T-shaped marks identifying the parking space corner points, and the image is annotated with the entrance line determined from the corner points and the separation lines respectively connected to its two end points; in practice, the entrance line and separation lines can be distinguished by different colors or line types. By recognizing the candidate image, the model determines whether the parking space identified by the entrance and separation lines is a real parking space and, if so, further outputs the direction angle.
As can be seen from the above, the embodiment of this application provides a parking space and direction angle detection method that recognizes parking spaces with a pre-trained parking space detection model; fusing the parking space type and the direction angle into one deep neural network improves computing performance. Since the angle and type detection are performed at the same time, the classification network only needs to determine whether a region is a parking space, without determining its type; the direction angle is used to assist in determining the type.
It should be noted that the embodiment shown in FIG. 4 takes the parking system as an example; in other possible implementations, the parking system may also send the image to be detected to a server, which detects the parking space and its direction angle.
The above are some specific implementations of the model training method and the parking space and direction angle detection method provided by the embodiments of this application. Based on them, the embodiments of this application also provide a corresponding model training apparatus and parking space detection apparatus, introduced in detail below from the perspective of functional modularization.
Referring to the schematic structural diagram of the model training apparatus shown in FIG. 6, the apparatus 600 includes:
an acquisition module 610, configured to acquire training samples, each including a parking space sample image and its annotation information, the annotation information including a parking space position and a parking space label, the label identifying whether the sample image identifies a real parking space and the direction angle corresponding to the real parking space;
a training module 620, configured to train the parking space detection model with the training samples and stop training when the loss function of the model satisfies the training end condition; the model takes a candidate parking space image annotated with a parking space position as input and outputs a parking space detection result characterizing whether the candidate image identifies a real parking space and, when it does, further including the parking space direction angle.
Optionally, the regression loss is determined by any one of the square loss, the absolute value loss, or the smoothed mean absolute error (Huber) loss.
Optionally, the parking space detection model is a network model based on AlexNet, VGGNet, GoogleNet, or MobileNet.
Next, referring to the schematic structural diagram of the parking space and direction angle detection apparatus shown in FIG. 7, the apparatus 700 includes:
an acquisition module 710, configured to acquire an image to be detected; a cropping module 720, configured to identify the parking space corner points in the image to be detected and crop it according to the corner points to obtain candidate parking space images;
a detection module 730, configured to detect the candidate parking space images with the pre-trained parking space detection model to obtain a parking space detection result, the result characterizing whether the candidate image identifies a real parking space and, when it does, further including the parking space direction angle.
Optionally, the parking space detection model includes a feature extraction layer, a classification layer, and a regression layer;
the detection module 730 is specifically configured to:
extract features from the candidate parking space image through the feature extraction layer to obtain the image features corresponding to the candidate image;
classify the candidate image with the classification layer according to the image features;
and, when the classification result characterizes the candidate image as identifying a real parking space, perform regression analysis with the regression layer according to the image features to obtain the parking space direction angle.
Optionally, the loss function of the model is determined from a classification loss and a regression loss, the classification loss measuring the loss produced by predicting the real parking space and the regression loss measuring the loss produced by predicting the direction angle.
Optionally, the loss function of the model is the weighted sum of the classification loss and the regression loss; when the classification result of the classification layer characterizes the candidate image as identifying an unreal parking space, the weight of the regression loss is 0.
Optionally, the model is trained with training samples, each including a parking space sample image and its annotation information, the annotation information including the parking space position and parking space label corresponding to the sample image; the label identifies whether the sample image identifies a real parking space and the direction angle corresponding to the real parking space.
Optionally, when the label identifies the sample image as identifying a real parking space, it specifically identifies the image as showing a vertical parking space, a parallel parking space, or a diagonal parking space.
Optionally, the parking space position is characterized by the parking space entrance line and the two separation lines respectively connected to its two end points.
Optionally, the midpoint of the entrance line is equidistant from the two sides of the sample image that are parallel to or coincide with the separation lines.
The embodiment of this application further provides a device for implementing the parking space and direction angle detection method of this application; the device may specifically be a server, introduced below from a hardware perspective.
As shown in FIG. 8, the server 800 may vary considerably due to different configurations or performance, and may include one or more central processing units (CPUs) 822 (for example, one or more processors), memory 832, and one or more storage media 830 (for example, one or more mass storage devices) storing application programs 842 or data 844. The memory 832 and the storage media 830 may be transient or persistent storage. A program stored in a storage medium 830 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the server. Further, the central processing unit 822 may be configured to communicate with the storage medium 830 and execute, on the server 800, the series of instruction operations in the storage medium 830.
The server 800 may also include one or more power supplies 826, one or more wired or wireless network interfaces 850, one or more input/output interfaces 858, and/or one or more operating systems 841, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.
The steps performed by the server in the above embodiments can be based on the server structure shown in FIG. 8.
The CPU 822 is configured to perform the following steps:
acquiring an image to be detected;
identifying the parking space corner points in the image to be detected, and cropping the image according to the corner points to obtain a candidate parking space image;
detecting the candidate parking space image with the pre-trained parking space detection model to obtain a parking space detection result, the result characterizing whether the candidate image identifies a real parking space and, when it does, further including the parking space direction angle.
Optionally, the CPU 822 is also configured to execute the steps of any implementation of the parking space and direction angle detection method provided in the embodiments of this application.
An embodiment of this application further provides a device, the device comprising a processor and a memory:
the memory is configured to store a computer program;
the processor is configured to call the computer program in the memory to execute the parking space and direction angle detection method described in this application.
An embodiment of this application further provides a vehicle, the vehicle comprising a parking system and a controller;
the parking system is configured to execute the above parking space and direction angle detection method to determine a parkable parking space, the parkable parking space being determined from a candidate parking space image identified as a real parking space by the parking space detection result;
the controller is configured to control the vehicle to park according to the parkable parking space.
An embodiment of this application further provides a computer-readable storage medium storing a computer program for executing any implementation of the parking space and direction angle detection method described in the foregoing embodiments.
An embodiment of this application further provides a computer program product including instructions which, when run on a computer, cause the computer to execute any implementation of the parking space and direction angle detection method described in the foregoing embodiments.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is only a division by logical function, and there may be other ways of division in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (12)

  1. A parking space and direction angle detection method, characterized in that the method comprises:
    acquiring an image to be detected;
    identifying parking space corner points in the image to be detected, and cropping the image to be detected according to the parking space corner points to obtain a candidate parking space image;
    detecting the candidate parking space image with a pre-trained parking space detection model to obtain a parking space detection result, wherein the parking space detection result characterizes whether the candidate parking space image identifies a real parking space, and when the candidate parking space image identifies a real parking space, the parking space detection result further comprises a parking space direction angle.
  2. The method according to claim 1, characterized in that the parking space detection model comprises a feature extraction layer, a classification layer, and a regression layer;
    the detecting the candidate parking space image with the pre-trained parking space detection model comprises:
    extracting features from the candidate parking space image through the feature extraction layer to obtain image features corresponding to the candidate parking space image;
    classifying the candidate parking space image with the classification layer according to the image features;
    when the classification result characterizes the candidate parking space image as identifying a real parking space, performing regression analysis with the regression layer according to the image features to obtain the parking space direction angle.
  3. The method according to claim 1, characterized in that the loss function of the parking space detection model is determined from a classification loss and a regression loss, the classification loss measuring the loss produced by predicting the real parking space, and the regression loss measuring the loss produced by predicting the parking space direction angle.
  4. The method according to claim 3, characterized in that the loss function of the parking space detection model is the weighted sum of the classification loss and the regression loss, and when the classification result of the classification layer characterizes the candidate parking space image as identifying an unreal parking space, the weight of the regression loss is 0.
  5. The method according to claim 1, characterized in that the parking space detection model is trained with training samples, the training samples comprising parking space sample images and their annotation information, the annotation information comprising the parking space position and parking space label corresponding to the parking space sample image, the parking space label identifying whether the parking space sample image identifies a real parking space and the parking space direction angle corresponding to the real parking space.
  6. The method according to claim 5, characterized in that when the parking space label identifies the parking space sample image as identifying a real parking space, the parking space label specifically identifies the parking space sample image as identifying a vertical parking space, a parallel parking space, or a diagonal parking space.
  7. The method according to claim 5, characterized in that the parking space position is characterized by a parking space entrance line and two parking space separation lines respectively connected to the two end points of the parking space entrance line.
  8. The method according to claim 7, characterized in that the midpoint of the parking space entrance line is equidistant from the two sides of the parking space sample image that are parallel to or coincide with the parking space separation lines.
  9. A parking space and direction angle detection apparatus, characterized in that the apparatus comprises:
    an acquisition module, configured to acquire an image to be detected;
    a recognition module, configured to identify parking space corner points in the image to be detected, and crop the image according to the corner points to obtain a candidate parking space image;
    a detection module, configured to detect the candidate parking space image with a pre-trained parking space detection model to obtain a parking space detection result, wherein the result characterizes whether the candidate parking space image identifies a real parking space and, when it does, further comprises a parking space direction angle.
  10. A device, characterized in that the device comprises a processor and a memory:
    the memory is configured to store a computer program;
    the processor is configured to call the computer program in the memory to execute the parking space and direction angle detection method according to any one of claims 1 to 8.
  11. A vehicle, characterized in that the vehicle comprises a parking system and a controller;
    the parking system is configured to execute the parking space and direction angle detection method according to any one of claims 1 to 8 to determine a parkable parking space, the parkable parking space being determined from a candidate parking space image identified as a real parking space by the parking space detection result;
    the controller is configured to control the vehicle to park according to the parkable parking space.
  12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the parking space and direction angle detection method according to any one of claims 1 to 8.
PCT/CN2020/102517 2019-10-12 2020-07-17 Parking space and direction angle detection method, apparatus, device, and medium WO2021068588A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP20874660.2A EP4044146A4 (en) 2019-10-12 2020-07-17 METHOD AND APPARATUS FOR DETECTING PARKING SPACE AND DIRECTION AND ANGLE THEREOF, DEVICE AND MEDIUM
US17/767,919 US20240092344A1 (en) 2019-10-12 2020-07-17 Method and apparatus for detecting parking space and direction and angle thereof, device and medium
JP2022521761A JP7414978B2 (ja) 2019-10-12 2020-07-17 駐車スペース及びその方向角検出方法、装置、デバイス及び媒体

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910969470.1A CN110706509B (zh) 2019-10-12 2019-10-12 Parking space and direction angle detection method, apparatus, device, and medium
CN201910969470.1 2019-10-12

Publications (1)

Publication Number Publication Date
WO2021068588A1 true WO2021068588A1 (zh) 2021-04-15

Family

ID=69198621

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/102517 WO2021068588A1 (zh) 2019-10-12 2020-07-17 车位及其方向角度检测方法、装置、设备及介质

Country Status (5)

Country Link
US (1) US20240092344A1 (zh)
EP (1) EP4044146A4 (zh)
JP (1) JP7414978B2 (zh)
CN (1) CN110706509B (zh)
WO (1) WO2021068588A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537105A (zh) * 2021-07-23 2021-10-22 北京经纬恒润科技股份有限公司 Parking space detection method and apparatus
CN113593245A (zh) * 2021-06-25 2021-11-02 北京云星宇交通科技股份有限公司 Electronic patrol device for parked vehicles
CN115527189A (zh) * 2022-11-01 2022-12-27 杭州枕石智能科技有限公司 Parking space state detection method, terminal device, and computer-readable storage medium
CN116189137A (zh) * 2022-12-07 2023-05-30 深圳市速腾聚创科技有限公司 Parking space detection method, electronic device, and computer-readable storage medium

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706509B (zh) * 2019-10-12 2021-06-18 东软睿驰汽车技术(沈阳)有限公司 Method and apparatus for detecting parking space and direction and angle thereof, device and medium
CN111311675B (zh) * 2020-02-11 2022-09-16 腾讯科技(深圳)有限公司 Vehicle positioning method, apparatus, device and storage medium
CN111428616B (zh) * 2020-03-20 2023-05-23 东软睿驰汽车技术(沈阳)有限公司 Parking space detection method, apparatus, device and storage medium
CN112329601B (zh) * 2020-11-02 2024-05-07 东软睿驰汽车技术(沈阳)有限公司 Multi-task network-based parking space detection method and apparatus
CN113076896A (zh) * 2021-04-09 2021-07-06 北京骑胜科技有限公司 Standardized parking method, system, apparatus and storage medium
CN113095266B (zh) * 2021-04-19 2024-05-10 北京经纬恒润科技股份有限公司 Angle recognition method, apparatus and device
CN114379544A (zh) * 2021-12-31 2022-04-22 北京华玉通软科技有限公司 Automatic parking system, method and apparatus based on multi-sensor early fusion
CN115206130B (zh) * 2022-07-12 2023-07-18 合众新能源汽车股份有限公司 Parking space detection method, system, terminal and storage medium


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI347900B (en) * 2009-03-10 2011-09-01 Univ Nat Chiao Tung Parking assistance system and method
TW201111206A (en) * 2009-09-29 2011-04-01 Automotive Res & Testing Ct Multiple-turn automatic parking device
KR101947826B1 (ko) * 2012-04-10 2019-02-13 현대자동차주식회사 Method for recognizing a parking section of a vehicle
US10325165B2 (en) * 2014-09-30 2019-06-18 Conduent Business Services, Llc Vision-based on-street parked vehicle detection via normalized-view classifiers and temporal filtering
CN106945660B (zh) * 2017-02-24 2019-09-24 宁波吉利汽车研究开发有限公司 Automatic parking system
CN107527017B (zh) * 2017-07-25 2021-03-12 纵目科技(上海)股份有限公司 Parking space detection method and system, storage medium and electronic device
TWI651697B (zh) * 2018-01-24 2019-02-21 National Chung Cheng University Parking lot vacancy detection method and detection model establishment method
CN109583392A (zh) * 2018-12-05 2019-04-05 北京纵目安驰智能科技有限公司 Parking space detection method, apparatus and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06274796A (ja) * 1993-03-24 1994-09-30 Toyota Motor Corp Parking space detection device
CN107886080A (zh) * 2017-11-23 2018-04-06 同济大学 Parking space detection method
CN109918977A (zh) * 2017-12-13 2019-06-21 华为技术有限公司 Method, apparatus and device for determining a vacant parking space
CN110322680A (zh) * 2018-03-29 2019-10-11 纵目科技(上海)股份有限公司 Single parking space detection method, system, terminal and storage medium based on designated points
CN108875911A (zh) * 2018-05-25 2018-11-23 同济大学 Parking space detection method
CN109614913A (zh) * 2018-12-05 2019-04-12 北京纵目安驰智能科技有限公司 Slanted parking space recognition method, apparatus and storage medium
CN109740584A (zh) * 2019-04-02 2019-05-10 纽劢科技(上海)有限公司 Deep learning-based parking space detection method for automatic parking
CN110706509A (zh) * 2019-10-12 2020-01-17 东软睿驰汽车技术(沈阳)有限公司 Method and apparatus for detecting parking space and direction and angle thereof, device and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4044146A4

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113593245A (zh) * 2021-06-25 2021-11-02 北京云星宇交通科技股份有限公司 Electronic patrol device for parked vehicles
CN113593245B (zh) * 2021-06-25 2022-10-21 北京云星宇交通科技股份有限公司 Electronic patrol device for parked vehicles
CN113537105A (zh) * 2021-07-23 2021-10-22 北京经纬恒润科技股份有限公司 Parking space detection method and apparatus
CN113537105B (zh) * 2021-07-23 2024-05-10 北京经纬恒润科技股份有限公司 Parking space detection method and apparatus
CN115527189A (zh) * 2022-11-01 2022-12-27 杭州枕石智能科技有限公司 Parking space state detection method, terminal device and computer-readable storage medium
CN115527189B (zh) * 2022-11-01 2023-03-21 杭州枕石智能科技有限公司 Parking space state detection method, terminal device and computer-readable storage medium
CN116189137A (zh) * 2022-12-07 2023-05-30 深圳市速腾聚创科技有限公司 Parking space detection method, electronic device and computer-readable storage medium
CN116189137B (zh) * 2022-12-07 2023-08-04 深圳市速腾聚创科技有限公司 Parking space detection method, electronic device and computer-readable storage medium

Also Published As

Publication number Publication date
CN110706509B (zh) 2021-06-18
EP4044146A1 (en) 2022-08-17
EP4044146A4 (en) 2023-11-08
CN110706509A (zh) 2020-01-17
JP2022551717A (ja) 2022-12-13
US20240092344A1 (en) 2024-03-21
JP7414978B2 (ja) 2024-01-16

Similar Documents

Publication Publication Date Title
WO2021068588A1 (zh) Method and apparatus for detecting parking space and direction and angle thereof, device and medium
US11176388B2 (en) Tracking vehicles in a warehouse environment
CN108875911B (zh) Parking space detection method
TWI677825B (zh) 視頻目標跟蹤方法和裝置以及非易失性電腦可讀儲存介質
Luber et al. People tracking in rgb-d data with on-line boosted target models
KR101645722B1 (ko) 자동추적 기능을 갖는 무인항공기 및 그 제어방법
Keller et al. A new benchmark for stereo-based pedestrian detection
US8064639B2 (en) Multi-pose face tracking using multiple appearance models
CN110021033B (zh) 一种基于金字塔孪生网络的目标跟踪方法
Chang et al. Mobile robot monocular vision navigation based on road region and boundary estimation
CN111814752B (zh) 室内定位实现方法、服务器、智能移动设备、存储介质
KR101769601B1 (ko) 자동추적 기능을 갖는 무인항공기
US11348276B2 (en) Mobile robot control method
Ji et al. RGB-D SLAM using vanishing point and door plate information in corridor environment
CN111353451A (zh) 电瓶车检测方法、装置、计算机设备及存储介质
WO2020181426A1 (zh) 一种车道线检测方法、设备、移动平台及存储介质
JP5228148B2 (ja) Position estimation method, position estimation device and position estimation program for estimating a position from image data
Zhang et al. Physical blob detector and multi-channel color shape descriptor for human detection
Chen et al. Visual detection of lintel-occluded doors by integrating multiple cues using a data-driven Markov chain Monte Carlo process
Kim et al. Vision-based navigation with efficient scene recognition
KR101656519B1 (ko) 확장칼만필터를 이용하여 추적성능을 높인 자동추적 기능을 갖는 무인항공기
KR20050052657A (ko) 비젼기반 사람 검출방법 및 장치
Nourani-Vatani et al. Topological localization using optical flow descriptors
Lee et al. Fast people counting using sampled motion statistics
Ahmad et al. Privacy Preserving Workflow Detection for Manufacturing Using Neural Networks based Object Detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20874660
    Country of ref document: EP
    Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 2022521761
    Country of ref document: JP
    Kind code of ref document: A
WWE Wipo information: entry into national phase
    Ref document number: 17767919
    Country of ref document: US
NENP Non-entry into the national phase
    Ref country code: DE
WWE Wipo information: entry into national phase
    Ref document number: 2020874660
    Country of ref document: EP
ENP Entry into the national phase
    Ref document number: 2020874660
    Country of ref document: EP
    Effective date: 20220512