CN111376895B - Around-looking parking sensing method and device, automatic parking system and vehicle - Google Patents
- Publication number
- CN111376895B (application CN201811653281A, filed as CN201811653281.5A)
- Authority
- CN
- China
- Prior art keywords
- library
- parking
- processing
- vehicle
- space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/06—Automatic manoeuvring for parking
Abstract
The application provides an around-view parking sensing method and device, an automatic parking system, and a vehicle. The method and device are applied to the automatic parking system of a vehicle and specifically comprise: receiving a plurality of fisheye images acquired by a plurality of fisheye cameras arranged around the vehicle; stitching the plurality of fisheye images into a bird's-eye view; processing the bird's-eye view with a slot detection model obtained by deep learning to obtain the slot coordinates and slot type of a free-space parking slot; and performing path planning according to the slot coordinates and the slot type to obtain an automatic parking path. Because this scheme obtains the parking-slot information directly from the bird's-eye view and plans the path from it, the automatic parking system can complete automatic parking even where no parking-slot markings are present, solving the problem that existing automatic parking systems cannot park in that situation.
Description
Technical Field
The application relates to the technical field of vehicles, and in particular to an around-view parking sensing method and device, an automatic parking system, and a vehicle.
Background
With the rapid development of intelligent driving technology, automatic parking systems are gradually being applied in low-speed intelligent driving scenarios. An automatic parking system enables a car to automatically find, and accurately park in, a suitable parking position: it senses the position and state of the parking slots around the car through environment-sensing sensors, finds a suitable slot, plans an optimal parking path, and controls the car to complete parking automatically.
The parking-slot perception scheme of existing automatic parking systems is mainly a ground-marking recognition scheme based on around-view cameras. The scheme derives from the 360-degree Around View Monitor (AVM) of early ADAS and, combined with computer vision, detects parking slots in the stitched bird's-eye view. In recent years, with the application of artificial intelligence to computer vision, around-view slot detection has gradually matured: detection accuracy keeps improving, scene coverage keeps widening, and the technology is coming into wide use. However, the scheme mainly targets slots with clear, obvious marking lines on the ground; for parking positions without obvious slot markings, the automatic parking system cannot park automatically.
Disclosure of Invention
In view of this, the present application provides an around-view parking sensing method and apparatus, an automatic parking system, and a vehicle, which perform path planning for the automatic parking system in an environment without obvious parking-slot markings, so as to solve the problem that the automatic parking system cannot park in such a situation.
In order to achieve the above object, the following solutions are proposed:
An around-view parking sensing method is applied to an automatic parking system of a vehicle and comprises the following steps:
receiving a plurality of fisheye images acquired by a plurality of fisheye cameras arranged around the vehicle;
stitching the plurality of fisheye images into a bird's-eye view;
processing the bird's-eye view with a slot detection model obtained by deep learning to obtain the slot coordinates and slot type of a free-space parking slot;
and performing path planning according to the slot coordinates and the slot type to obtain an automatic parking path.
Optionally, stitching the plurality of fisheye images into a bird's-eye view comprises:
performing distortion correction on each fisheye image;
performing coordinate mapping on each corrected fisheye image according to the calibrated position of the camera that captured it;
and stitching the plurality of coordinate-mapped fisheye images to obtain the bird's-eye view.
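The correction and mapping steps above can be sketched in code. The one-parameter radial distortion model and the 3×3 homography below are illustrative stand-ins for the calibration data the patent assumes; a real system obtains both from offline camera calibration.

```python
import math

def undistort_point(xd, yd, k1):
    """Invert a one-parameter radial model xd = xu * (1 + k1 * r_u^2)
    by fixed-point iteration; k1 is an intrinsic calibration value."""
    xu, yu = xd, yd
    for _ in range(50):
        r2 = xu * xu + yu * yu
        xu, yu = xd / (1 + k1 * r2), yd / (1 + k1 * r2)
    return xu, yu

def to_birds_eye(h, x, y):
    """Map an undistorted pixel into the common bird's-eye frame with
    a 3x3 homography h derived from the camera's mounting position."""
    xw = h[0][0] * x + h[0][1] * y + h[0][2]
    yw = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return xw / w, yw / w
```

Stitching then amounts to warping each camera's corrected image through its own homography into the shared bird's-eye canvas.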
Optionally, the slot detection model comprises a target detection neural network and a classification neural network, and processing the bird's-eye view with the slot detection model obtained by deep learning comprises:
processing the bird's-eye view with the target detection neural network to obtain an image patch of the free-space slot and the slot coordinates of the free-space slot;
and processing the image patch with the classification neural network to obtain the slot type of the free-space slot.
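A minimal sketch of that two-stage detect-then-classify pipeline, with the two networks passed in as plain callables; the patent does not name concrete architectures, so `detector` and `classifier` here are hypothetical stand-ins:

```python
def perceive_slots(birds_eye, detector, classifier):
    """detector: image -> list of (x, y, w, h) boxes in bird's-eye
    pixels; classifier: cropped patch -> slot-type label."""
    results = []
    for (x, y, w, h) in detector(birds_eye):
        # Crop the detected patch and hand it to the second stage.
        patch = [row[x:x + w] for row in birds_eye[y:y + h]]
        results.append({"coords": (x, y, w, h),
                        "type": classifier(patch)})
    return results
```

The box coordinates serve directly as the slot coordinates, and only the cropped patch is classified, keeping the second network small.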
Optionally, if no free-space slot can be obtained after processing the bird's-eye view with the slot detection model, the method comprises the following steps:
displaying a slot setting window on the display interface of the in-vehicle head unit or on the display interface of the user's mobile device;
calculating the slot coordinates and slot type of the virtual parking slot entered by the user through the slot setting window;
and performing the path planning step according to the slot coordinates and slot type of the virtual parking slot to obtain the automatic parking path.
Optionally, if a slot marking line is found while processing the bird's-eye view with the slot detection model, the method comprises the following steps:
processing the bird's-eye view according to the slot marking line, using a parking-space detection module obtained by deep learning, to obtain an actual parking slot together with its slot coordinates and slot type;
and performing the path planning step according to the slot coordinates and the slot type to obtain the automatic parking path.
An around-view parking sensing device is applied to an automatic parking system of a vehicle and comprises:
an image receiving module, configured to receive a plurality of fisheye images acquired by a plurality of fisheye cameras arranged around the vehicle;
an image stitching module, configured to stitch the plurality of fisheye images into a bird's-eye view;
a first slot processing module, configured to process the bird's-eye view with a slot detection model obtained by deep learning to obtain the slot coordinates and slot type of a free-space parking slot;
and a path planning module, configured to perform path planning according to the slot coordinates and the slot type to obtain an automatic parking path.
Optionally, the image stitching module comprises:
a distortion correction unit, configured to perform distortion correction on each fisheye image;
a coordinate mapping unit, configured to perform coordinate mapping on each corrected fisheye image according to the calibrated position of the camera that captured it;
and a stitching unit, configured to stitch the plurality of coordinate-mapped fisheye images to obtain the bird's-eye view.
Optionally, the slot detection model comprises a target detection neural network and a classification neural network, and the first slot processing module comprises:
a first processing unit, configured to process the bird's-eye view with the target detection neural network to obtain an image patch of the free-space slot and the slot coordinates of the free-space slot;
and a second processing unit, configured to process the image patch with the classification neural network to obtain the slot type of the free-space slot.
Optionally, the device further comprises:
a display control module, configured to display a slot setting window on the display interface of the in-vehicle head unit or on the display interface of the user's mobile device if no free-space slot can be obtained after the bird's-eye view is processed with the slot detection model;
a data calculation module, configured to calculate the slot coordinates and slot type of the virtual parking slot entered by the user through the slot setting window;
and a first driving module, configured to drive the path planning module to perform path planning according to the slot coordinates and the slot type to obtain the automatic parking path.
Optionally, the device further comprises:
a second slot processing module, configured to, when the first slot processing module finds a slot marking line while processing the bird's-eye view with the slot detection model, process the bird's-eye view according to the slot marking line using a parking-space detection module obtained by deep learning, to obtain an actual parking slot together with its slot coordinates and slot type;
and a second driving module, configured to drive the path planning module to perform path planning according to the slot coordinates and the slot type to obtain the automatic parking path.
An automatic parking system is applied to a vehicle and is provided with the around-view parking sensing device described above.
An automatic parking system applied to a vehicle comprises at least one processor and a memory connected to the processor, wherein the memory stores a computer program or instructions, and the processor executes the computer program or instructions to cause the automatic parking system to perform the following operations:
receiving a plurality of fisheye images acquired by a plurality of fisheye cameras arranged around the vehicle;
stitching the plurality of fisheye images into a bird's-eye view;
processing the bird's-eye view with a slot detection model obtained by deep learning to obtain the slot coordinates and slot type of a free-space parking slot;
and performing path planning according to the slot coordinates and the slot type to obtain an automatic parking path.
Optionally, if the bird's-eye view is processed with the slot detection model and no free-space slot can be obtained, the automatic parking system is further configured to perform the following operations:
displaying a slot setting window on the display interface of the in-vehicle head unit or on the display interface of the user's mobile device;
calculating the slot coordinates and slot type of the virtual parking slot entered by the user through the slot setting window;
and performing the path planning step according to the slot coordinates and slot type of the virtual parking slot to obtain the automatic parking path.
Optionally, if a slot marking line is found while processing the bird's-eye view with the slot detection model, the automatic parking system is further configured to perform the following operations:
processing the bird's-eye view according to the slot marking line, using a parking-space detection module obtained by deep learning, to obtain an actual parking slot together with its slot coordinates and slot type;
and performing the path planning step according to the slot coordinates and the slot type to obtain the automatic parking path.
A vehicle is provided with the automatic parking system as described above.
According to the above technical solution, the method and device are applied to the automatic parking system of a vehicle and specifically comprise: receiving a plurality of fisheye images acquired by a plurality of fisheye cameras arranged around the vehicle; stitching the plurality of fisheye images into a bird's-eye view; processing the bird's-eye view with a slot detection model obtained by deep learning to obtain the slot coordinates and slot type of a free-space parking slot; and performing path planning according to the slot coordinates and the slot type to obtain an automatic parking path. Because this scheme obtains the parking-slot information directly from the bird's-eye view and plans the path from it, the automatic parking system can complete automatic parking even where no parking-slot markings are present, solving the problem that existing automatic parking systems cannot park in that situation.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a flowchart illustrating the steps of an around-view parking sensing method according to an embodiment of the present application;
Fig. 2 is a flowchart illustrating the steps of another around-view parking sensing method according to an embodiment of the present application;
Fig. 3 is a flowchart illustrating the steps of yet another around-view parking sensing method according to an embodiment of the present application;
Fig. 4 is a block diagram illustrating the structure of an around-view parking sensing device according to an embodiment of the present application;
Fig. 5 is a block diagram illustrating the structure of another around-view parking sensing device according to an embodiment of the present application;
Fig. 6 is a block diagram illustrating the structure of yet another around-view parking sensing device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Example one
Fig. 1 is a flowchart illustrating the steps of an around-view parking sensing method according to an embodiment of the present application.
As shown in Fig. 1, the around-view parking sensing method provided in this embodiment is applied to an automatic parking system of a vehicle and provides, through an around-view scheme, the automatic parking path the system needs for parking. The method specifically comprises the following steps:
S101, receiving a plurality of fisheye images acquired by a plurality of fisheye cameras.
A vehicle to which this scheme applies is provided with a plurality of fisheye cameras around its body to capture fisheye images of the corresponding positions. The vehicle of this embodiment uses four high-definition fisheye cameras, each with a field of view above 180 degrees, mounted at the front, rear, left and right of the vehicle; the position of each camera relative to the vehicle is calibrated in advance.
As the basis of perception, after each fisheye camera captures its image, the fisheye image from each camera is received, yielding four fisheye images. "Image" here covers both still pictures and video frames.
S102, stitching the fisheye images into a bird's-eye view.
After the plurality of fisheye images are received, they are processed and stitched into a bird's-eye view. The specific process is as follows: first, distortion correction is performed on each fisheye image; then, coordinate mapping is performed on each distortion-corrected fisheye image according to the position of the fisheye camera that captured it, that position having been obtained by manual calibration; finally, the coordinate-mapped fisheye images are stitched to obtain the corresponding bird's-eye view.
S103, processing the bird's-eye view with the slot detection model.
The bird's-eye view is processed with a slot detection model obtained by deep learning, and a free-space parking slot, together with its slot coordinates and slot type, is obtained from the bird's-eye view. The whole bird's-eye view can cover a range of more than 10 meters, and the coordinate position of each pixel relative to the vehicle can be obtained through processing, so once the image patch representing the free-space slot is obtained, its coordinates follow naturally.
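The pixel-to-vehicle coordinate relation mentioned above can be sketched as a simple linear mapping. The vehicle-at-centre layout and the 10 m coverage figure come from the text; the uniform metres-per-pixel scale is an assumption of this sketch:

```python
def pixel_to_vehicle(px, py, width_px, height_px, cover_m=10.0):
    """Convert a bird's-eye pixel to metric offsets from the vehicle,
    assuming the vehicle sits at the image centre and the view spans
    cover_m metres along each axis (a simplifying assumption)."""
    mpp_x = cover_m / width_px   # metres per pixel, x axis
    mpp_y = cover_m / height_px  # metres per pixel, y axis
    return ((px - width_px / 2) * mpp_x,
            (py - height_px / 2) * mpp_y)
```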
The slot detection model is a neural network obtained by deep learning. To obtain it, original fisheye images in different scenes are collected, bird's-eye views of those scenes are obtained by stitching as described above, and the bird's-eye views are annotated, that is, the corresponding slots are marked on them, including information such as the length and width of each slot, the coordinates of its corners relative to the vehicle, and the slot type; offline training is then performed on high-performance GPUs to extract effective features of the various slot types in different scenes, finally yielding the slot detection model.
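One annotation record of the kind described above might be represented as follows; the field names and example values are illustrative, not a schema from the patent:

```python
from dataclasses import dataclass

@dataclass
class SlotLabel:
    corners: list     # four (x, y) corner points in the vehicle frame, metres
    length_m: float   # slot length
    width_m: float    # slot width
    slot_type: str    # e.g. "parallel", "perpendicular", "angled"

# A hypothetical parallel slot annotated on one bird's-eye view.
label = SlotLabel(corners=[(1.0, 2.0), (6.3, 2.0), (6.3, 4.4), (1.0, 4.4)],
                  length_m=5.3, width_m=2.4, slot_type="parallel")
```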
In addition, the slot detection model comprises a target detection neural network and a classification neural network. When slot markings such as slot corners cannot be found in the bird's-eye view, the bird's-eye view is processed with the target detection neural network to obtain the image patch of a free-space slot and the coordinates of that patch, which serve as the slot coordinates; the patch is then processed with the classification neural network to obtain the slot type of the free-space slot, the slot types including parallel slots, perpendicular slots, angled slots, and the like.
S104, performing path planning according to the slot coordinates and the slot type.
After the slot coordinates and slot type of the free-space slot are obtained, path planning is performed with them, that is, parameters such as the driving mode, direction, and distance between the current position of the vehicle and the slot are planned, yielding the automatic parking path the vehicle needs for parking. The automatic parking system can then control the vehicle along this path, so the vehicle parks into the free-space slot without user intervention.
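A toy version of that planning step, reducing the output to distance, bearing, and a manoeuvre chosen by slot type; real planners emit curvature-constrained path segments, and the manoeuvre mapping below is an illustrative assumption:

```python
import math

MANOEUVRE_BY_TYPE = {  # hypothetical mapping, for illustration only
    "parallel": "reverse parallel park",
    "perpendicular": "reverse into slot",
    "angled": "drive forward into slot",
}

def plan_path(vehicle_xy, slot_xy, slot_type):
    """Return straight-line distance, bearing in degrees, and the
    manoeuvre implied by the slot type."""
    dx = slot_xy[0] - vehicle_xy[0]
    dy = slot_xy[1] - vehicle_xy[1]
    return {
        "distance_m": math.hypot(dx, dy),
        "bearing_deg": math.degrees(math.atan2(dy, dx)),
        "manoeuvre": MANOEUVRE_BY_TYPE[slot_type],
    }
```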
As can be seen from the above technical solution, this embodiment provides an around-view parking sensing method applied to the automatic parking system of a vehicle, which specifically comprises: receiving a plurality of fisheye images acquired by a plurality of fisheye cameras arranged around the vehicle; stitching the plurality of fisheye images into a bird's-eye view; processing the bird's-eye view with a slot detection model obtained by deep learning to obtain the slot coordinates and slot type of a free-space parking slot; and performing path planning according to the slot coordinates and the slot type to obtain the automatic parking path. Because this scheme obtains the parking-slot information directly from the bird's-eye view and plans the path from it, the automatic parking system can complete automatic parking even where no parking-slot markings are present, solving the problem that existing automatic parking systems cannot park in that situation.
Example two
Fig. 2 is a flowchart illustrating the steps of another around-view parking sensing method according to an embodiment of the present application.
As shown in Fig. 2, the around-view parking sensing method provided in this embodiment comprises the following steps:
S201, receiving a plurality of fisheye images acquired by a plurality of fisheye cameras.
A vehicle to which this scheme applies is provided with a plurality of fisheye cameras around its body to capture fisheye images of the corresponding positions. The vehicle of this embodiment uses four high-definition fisheye cameras, each with a field of view above 180 degrees, mounted at the front, rear, left and right of the vehicle; the position of each camera relative to the vehicle is calibrated in advance.
As the basis of perception, after each fisheye camera captures its image, the fisheye image from each camera is received, yielding four fisheye images. "Image" here covers both still pictures and video frames.
S202, stitching the fisheye images into a bird's-eye view.
After the plurality of fisheye images are received, they are processed and stitched into a bird's-eye view. The specific process is as follows: first, distortion correction is performed on each fisheye image; then, coordinate mapping is performed on each distortion-corrected fisheye image according to the position of the fisheye camera that captured it, that position having been obtained by manual calibration; finally, the coordinate-mapped fisheye images are stitched to obtain the corresponding bird's-eye view.
S203, processing the bird's-eye view with the slot detection model.
The bird's-eye view is processed with a slot detection model obtained by deep learning, and a free-space parking slot, together with its slot coordinates and slot type, is obtained from the bird's-eye view. If no free-space slot can be obtained from the processing, the next step is executed directly; the slot coordinates and slot type are calculated only when a free-space slot can be obtained.
The whole bird's-eye view can cover a range of more than 10 meters, and the coordinate position of each pixel relative to the vehicle can be obtained through processing, so once the image patch representing the free-space slot is obtained, its coordinates follow naturally.
The slot detection model is a neural network obtained by deep learning. To obtain it, original fisheye images in different scenes are collected, bird's-eye views of those scenes are obtained by stitching as described above, and the bird's-eye views are annotated, that is, the corresponding slots are marked on them, including information such as the length and width of each slot, the coordinates of its corners relative to the vehicle, and the slot type; offline training is then performed on high-performance GPUs to extract effective features of the various slot types in different scenes, finally yielding the slot detection model.
In addition, the slot detection model comprises a target detection neural network and a classification neural network. When slot markings such as slot corners cannot be found in the bird's-eye view, the bird's-eye view is processed with the target detection neural network to obtain the image patch of a free-space slot and the coordinates of that patch, which serve as the slot coordinates; the patch is then processed with the classification neural network to obtain the slot type of the free-space slot, the slot types including parallel slots, perpendicular slots, angled slots, and the like.
S204, displaying a slot setting window.
The premise of this step is that no free-space slot was obtained when the bird's-eye view was processed with the slot detection model. In this case, the vehicle's in-vehicle head unit, or a mobile device bound in advance, is made to show a slot setting window by executing a corresponding display instruction, specifically on the display interface of the head unit or of the mobile device. The slot setting window lets the user enter a virtual parking slot through appropriate input means, such as entering data, dragging, or drawing lines.
The virtual parking slot is obtained by the user operating on the bird's-eye view shown in the slot setting window and selecting on it a frame of preset size as the virtual parking slot.
S205, calculating the slot coordinates and slot type from the virtual parking slot entered by the user.
When the user enters the virtual parking slot through the slot setting window, the virtual parking slot is received and processed. Specifically, the virtual parking slot is evaluated with the slot detection model to obtain the slot coordinates and slot type of the virtual parking slot.
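How slot parameters might be derived from a user-drawn frame can be sketched as follows; the corner ordering, the aspect-ratio heuristic for the slot type, and the 2:1 threshold are all assumptions of this sketch, not details from the patent:

```python
import math

def virtual_slot_params(corners):
    """corners: four (x, y) points of the user-drawn frame, given in
    order around the rectangle. Returns centre, dimensions, and a
    slot type guessed from the aspect ratio."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    centre = (sum(xs) / 4, sum(ys) / 4)
    side_a = math.dist(corners[0], corners[1])
    side_b = math.dist(corners[1], corners[2])
    length, width = max(side_a, side_b), min(side_a, side_b)
    slot_type = "parallel" if length / width > 2 else "perpendicular"
    return {"centre": centre, "length": length,
            "width": width, "type": slot_type}
```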
S206, performing path planning according to the slot coordinates and the slot type.
After the slot coordinates and slot type are obtained, path planning is performed with them, that is, parameters such as the driving mode, direction, and distance between the current position of the vehicle and the slot are planned, yielding the automatic parking path the vehicle needs for parking. The automatic parking system can then control the vehicle along this path, so the vehicle parks into the slot without user intervention.
It should be noted that if a free-space slot can be detected by the slot detection model, the path planning is based on the slot coordinates and slot type that the model calculates for the free-space slot; conversely, if no free-space slot is detected, the path planning is based on the slot coordinates and slot type of the virtual parking slot.
Compared with the previous embodiment, this embodiment widens the applicable scenarios of automatic parking by accepting a user-entered virtual parking slot: even when no free-space slot can be obtained from the bird's-eye view, path planning can still proceed from the virtual parking slot entered by the user, improving the user experience.
Example three
Fig. 3 is a flowchart illustrating the steps of another around-view parking sensing method according to an embodiment of the present application.
As shown in fig. 3, the method for sensing parking in a look-around manner provided by this embodiment specifically includes the following steps:
S301, receiving a plurality of fisheye images acquired by a plurality of fisheye lenses.
A vehicle to which this scheme applies is provided with a plurality of fisheye lenses around its body to capture fisheye images of the corresponding positions. The vehicle of this embodiment adopts four high-definition fisheye lenses, each with an FOV of more than 180 degrees, mounted at the front, rear, left and right of the vehicle, and the position of each fisheye lens relative to the vehicle is calibrated in advance.
And as a basis of perception, after each fisheye lens acquires a corresponding fisheye image, receiving the fisheye image obtained by each fisheye lens, thereby obtaining four fisheye images. The concept of image here includes a picture or a video image.
And S302, splicing the fisheye images into a bird's-eye view.
After the plurality of fisheye images are received, they are processed and spliced into a bird's-eye view. The specific process is as follows: firstly, distortion correction is carried out on each fisheye image; then, coordinate mapping is carried out on each distortion-corrected fisheye image according to the position of the fisheye lens that acquired it, the position of each fisheye lens having been obtained by manual calibration; and finally, image splicing is carried out on the plurality of coordinate-mapped fisheye images to obtain the corresponding bird's-eye view.
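A minimal sketch of the coordinate-mapping step described above: after distortion correction, each camera's pixels are projected onto the common ground plane with a pre-calibrated 3x3 homography, and the four warped images are then composited into one bird's-eye view. The helper name and the matrices below are illustrative assumptions, not taken from the patent.

```python
def apply_homography(H, x, y):
    """Project an undistorted image pixel (x, y) into bird's-eye coordinates."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v

# Sanity check: the identity homography leaves pixel coordinates unchanged.
H_identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(apply_homography(H_identity, 100.0, 50.0))  # (100.0, 50.0)
```

In practice each of the four lenses would have its own calibrated matrix, and a library such as OpenCV would perform the warp over whole images rather than single pixels.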
And S303, processing the aerial view by using the library position detection model.
The bird's-eye view is processed by using the library position detection model obtained through deep learning, so as to obtain a space library position, together with its library position coordinates and library position type, from the bird's-eye view. If a library position line is found while the library position detection model processes the bird's-eye view, step S304 is executed; if no library position line is found and no space library position is available, S305 is executed directly; the library position coordinates and library position type are calculated directly only when the space library position can be obtained.
The whole bird's-eye view can cover a range of more than 10 meters; the coordinate position of each pixel relative to the vehicle can be obtained through processing, so once the image patch reflecting the space library position is obtained, the coordinates of the patch follow naturally.
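The per-pixel coordinate mapping just mentioned reduces to simple arithmetic once the vehicle sits at the centre of the bird's-eye view and the metric scale is known. The image size and scale below are illustrative assumptions (a 640 px by 640 px view at 0.03125 m/px covers 20 m by 20 m, i.e. 10 m to each side), not values from the patent.

```python
def pixel_to_vehicle(px, py, img_w=640, img_h=640, m_per_px=0.03125):
    """Convert a bird's-eye pixel (px, py) to (x forward, y left) in metres."""
    x = (img_h / 2 - py) * m_per_px  # rows above the centre lie ahead
    y = (img_w / 2 - px) * m_per_px  # columns left of the centre lie left
    return x, y

print(pixel_to_vehicle(320, 320))  # vehicle centre -> (0.0, 0.0)
print(pixel_to_vehicle(320, 0))    # top edge, 10 m ahead -> (10.0, 0.0)
```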
The library position detection model is a neural network obtained by a deep learning method. To obtain it, original fisheye images in different scenes need to be collected, bird's-eye views of those scenes are obtained through splicing according to the method above, and the bird's-eye views are annotated; that is, the corresponding library positions are marked on the bird's-eye views, including information such as the length and width of each library position, the coordinates of its corners relative to the vehicle, and the library position type. Off-line training is then performed on a high-performance GPU to extract effective features of the various library position types in different scenes, finally yielding the library position detection model.
In addition, the library position detection model comprises a target detection neural network and a classification neural network. When library position identifications such as library position corners cannot be acquired from the bird's-eye view, the bird's-eye view is processed by the target detection neural network to obtain image patches of space library positions and the coordinates of those patches, the patch coordinates being the library position coordinates; the patches are then processed by the classification neural network to obtain the library position types of the space library positions, where the library position types include parallel, perpendicular and oblique library positions.
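The two-stage structure just described can be sketched as a skeleton: a detection network proposes image patches of candidate library positions with their coordinates, and a classification network labels each patch. Both networks are replaced by stub functions here; every name and value is an assumption made for the sketch, not part of the patented model.

```python
SLOT_TYPES = ("parallel", "perpendicular", "oblique")

def detect_slots(bev):
    # Stub standing in for the target detection neural network: returns one
    # candidate patch with four corner coordinates in vehicle-frame metres.
    return [{"patch": "patch-0",
             "corners": [(2.0, 1.5), (2.0, 4.0), (7.0, 4.0), (7.0, 1.5)]}]

def classify_patch(patch):
    # Stub standing in for the classification neural network.
    return "perpendicular"

def perceive(bev):
    """Run detection, then classify each detected patch."""
    return [{"corners": d["corners"], "type": classify_patch(d["patch"])}
            for d in detect_slots(bev)]

slots = perceive(bev=None)
print(slots[0]["type"])  # perpendicular
```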
And S304, processing the aerial view by using the space detection model.
A parking area can be preliminarily obtained by processing the library position line; the bird's-eye view is then processed by the space detection model to obtain the positions of obstacles within the parking area, and the actual parking position is finally determined from the parking area and the obstacle positions. The library position detection model is further used to obtain the library position coordinates and library position type of the actual parking position; the calculation of these has been introduced above and is not repeated here.
The space detection model likewise involves a training process and an online detection process. Original fisheye images in different scenes are first collected and spliced into bird's-eye views, the free space around the vehicle is annotated on those views, and off-line training is performed on a high-performance GPU, finally yielding the space detection model. In the detection (inference) stage, a semantic segmentation network classifies each pixel on the image plane, judges whether it belongs to a passable area, and obtains the boundary contours of the differently classified objects and their distribution on the image plane; the distance information of obstacles is then derived from the positions of those boundary contours in the bird's-eye view.
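The distance step above can be illustrated with a toy example: given a per-pixel semantic grid ("free" versus any obstacle class) and the bird's-eye view's known metric scale, the distance from the vehicle cell to the nearest obstacle cell follows from pixel geometry. The grid size, scale and labels are assumptions for the sketch.

```python
import math

def nearest_obstacle_m(seg, m_per_px=0.05, cx=4, cy=4):
    """Return the distance in metres to the closest non-free cell, or None."""
    best = None
    for r, row in enumerate(seg):
        for c, label in enumerate(row):
            if label != "free":
                d = math.hypot(r - cy, c - cx) * m_per_px
                best = d if best is None else min(best, d)
    return best

grid = [["free"] * 9 for _ in range(9)]
grid[4][7] = "vehicle"           # an obstacle three cells to the right
print(nearest_obstacle_m(grid))  # 3 cells at 0.05 m/cell, about 0.15 m
```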
S305, displaying a library position setting window.
The premise of this step is that no space library position could be obtained when the bird's-eye view was processed with the library position detection model. In this case, the vehicle machine (in-vehicle head unit) of the vehicle, or a mobile device bound in advance, is controlled to display the library position setting window by executing a corresponding display instruction; specifically, the window is displayed on the display interface of the vehicle machine or of the mobile device. The library position setting window is used by the user to input a virtual parking position through corresponding input means, such as data entry, dragging and line drawing.
The virtual parking position is obtained by the user operating on the bird's-eye view displayed in the library position setting window and selecting a frame of preset size on the bird's-eye view as the virtual parking position.
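A sketch of how a dragged point could become a virtual parking position of preset size, as described above. The slot dimensions in metres are typical values assumed for illustration, not taken from the patent.

```python
PRESET_SIZES = {             # (width, depth) in metres, illustrative only
    "perpendicular": (2.5, 5.3),
    "parallel":      (6.0, 2.5),
    "oblique":       (2.8, 5.5),
}

def virtual_slot(cx, cy, slot_type="perpendicular"):
    """Build an axis-aligned virtual slot centred on the dragged point."""
    w, d = PRESET_SIZES[slot_type]
    return {"type": slot_type,
            "corners": [(cx - w / 2, cy - d / 2), (cx - w / 2, cy + d / 2),
                        (cx + w / 2, cy + d / 2), (cx + w / 2, cy - d / 2)]}

slot = virtual_slot(0.0, 0.0)
print(slot["type"])  # perpendicular
```

The resulting corner coordinates then feed the same coordinate and type calculation used for detected library positions.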
And S306, calculating the position coordinate and the position type according to the virtual parking position input by the user.
And when the user inputs the virtual parking space through the storage position setting window, receiving the virtual parking space and calculating the virtual parking space. Specifically, the virtual parking space is calculated by using the library position detection model to obtain the library position coordinates and the library position type of the virtual parking space.
And S307, planning the path according to the library position coordinates and the library position type.
After the position coordinates and the position types of the space positions are obtained, path planning is carried out by using the position coordinates and the position types, namely parameters such as a driving mode, a direction, a distance and the like between the current position of the vehicle and the space positions are planned, so that an automatic parking path required by the vehicle for parking is obtained. The automatic parking system can realize automatic parking control on the vehicle according to the automatic parking path, so that the vehicle can be parked to the space garage without user intervention.
It is worth pointing out that if a library position line exists, the library position coordinates and library position type on which the path planning is based are calculated from the actual parking position; only when there is no library position line but the library position detection model can still detect a space library position is the path planned according to the library position coordinates and library position type of that space library position; and if no space library position is detected either, the path is planned according to the library position coordinates and library position type of the virtual parking position.
Compared with the two embodiments, the embodiment can directly detect the actual parking space under the condition of detecting the library bit line, and performs path planning according to the actual parking space, thereby avoiding subsequent complex calculation and improving the efficiency of path planning.
Example four
Fig. 4 is a block diagram illustrating a structure of an around-view parking sensing device according to an embodiment of the present disclosure.
As shown in fig. 4, the all-round parking sensing device provided in this embodiment is applied to an automatic parking system of a vehicle, and is configured to provide an automatic parking path required for parking for the automatic parking system by using an all-round scheme, where the all-round parking sensing device specifically includes an image receiving module 10, an image stitching module 20, a first parking space processing module 30, and a path planning module 40.
The image receiving module is used for receiving a plurality of fisheye images acquired by a plurality of fisheye lenses.
A vehicle to which this scheme applies is provided with a plurality of fisheye lenses around its body to capture fisheye images of the corresponding positions. The vehicle of this embodiment adopts four high-definition fisheye lenses, each with an FOV of more than 180 degrees, mounted at the front, rear, left and right of the vehicle, and the position of each fisheye lens relative to the vehicle is calibrated in advance.
And as a basis of perception, after each fisheye lens acquires a corresponding fisheye image, receiving the fisheye image obtained by each fisheye lens, thereby obtaining four fisheye images. The concept of image here includes a picture or a video image.
The image splicing module is used for splicing the fish-eye images into a bird's-eye view.
After the plurality of fisheye images are received, they are processed and spliced into a bird's-eye view. The module specifically comprises a distortion correction unit, a coordinate mapping unit and a splicing processing unit. The distortion correction unit is used for carrying out distortion correction on each fisheye image; the coordinate mapping unit is used for carrying out coordinate mapping on each distortion-corrected fisheye image according to the position of the fisheye lens that acquired it, the position of each fisheye lens being obtained by manual calibration; and the splicing processing unit is used for splicing the plurality of coordinate-mapped fisheye images to obtain the corresponding bird's-eye view.
The first library position processing module is used for processing the aerial view by using the library position detection model.
The bird's-eye view is processed by using the library position detection model obtained through deep learning, so as to obtain a space library position, together with its library position coordinates and library position type, from the bird's-eye view. The whole bird's-eye view can cover a range of more than 10 meters; the coordinate position of each pixel relative to the vehicle can be obtained through processing, so once the image patch reflecting the space library position is obtained, the coordinates of the patch follow naturally.
The library position detection model is a neural network obtained by a deep learning method. To obtain it, original fisheye images in different scenes need to be collected, bird's-eye views of those scenes are obtained through splicing according to the method above, and the bird's-eye views are annotated; that is, the corresponding library positions are marked on the bird's-eye views, including information such as the length and width of each library position, the coordinates of its corners relative to the vehicle, and the library position type. Off-line training is then performed on a high-performance GPU to extract effective features of the various library position types in different scenes, finally yielding the library position detection model.
In addition, the library position detection model comprises a target detection neural network and a classification neural network, and the corresponding module comprises a first processing unit and a second processing unit. The first processing unit is used for processing the bird's-eye view with the target detection neural network when library position identifications such as library position corners cannot be acquired from the bird's-eye view, so as to obtain image patches of space library positions and the coordinates of those patches, the patch coordinates being the library position coordinates; the second processing unit processes the patches with the classification neural network to obtain the library position types of the space library positions, where the library position types include parallel, perpendicular and oblique library positions.
And the path planning module carries out path planning according to the library position coordinates and the library position type.
After the position coordinates and the position types of the space positions are obtained, path planning is carried out by using the position coordinates and the position types, namely parameters such as a driving mode, a direction, a distance and the like between the current position of the vehicle and the space positions are planned, so that an automatic parking path required by the vehicle for parking is obtained. The automatic parking system can realize automatic parking control on the vehicle according to the automatic parking path, so that the vehicle can be parked to the space garage without user intervention.
According to the above technical scheme, around-view parking perception is performed for the automatic parking system of the vehicle, specifically by: receiving a plurality of fisheye images acquired by a plurality of fisheye lenses positioned around the vehicle; splicing the plurality of fisheye images into a bird's-eye view; processing the bird's-eye view with a library position detection model obtained by deep learning to obtain the library position coordinates and library position type of a space library position; and planning a path according to the library position coordinates and the library position type to obtain the automatic parking path. Since the relevant information of the parking position is obtained directly from the bird's-eye view and the path is planned accordingly, the automatic parking system can realize automatic parking even in the absence of parking space identification, solving the problem that an automatic parking system cannot park in that situation.
Example five
Fig. 5 is a block diagram illustrating a structure of another around-view parking sensing device according to an embodiment of the present disclosure.
As shown in fig. 5, the around-view parking sensing device provided by this embodiment is formed by adding a display control module 50, a data calculation module 60 and a first driving module 70 to the previous embodiment.
The display control module is used for displaying the library position setting window.
Namely, when the first library position processing module processes the bird's-eye view with the library position detection model and cannot obtain a space library position, the display control module controls the vehicle machine of the vehicle, or a mobile device bound in advance, to display the library position setting window by executing a corresponding display instruction; specifically, the window is displayed on the display interface of the vehicle machine or of the mobile device. The library position setting window is used by the user to input a virtual parking position through corresponding input means, such as data entry, dragging and line drawing.
The virtual parking position is obtained by the user operating on the bird's-eye view displayed in the library position setting window and selecting a frame of preset size on the bird's-eye view as the virtual parking position.
And the data calculation module is used for calculating the library position coordinates and the library position type according to the virtual parking position input by the user.
And when the user inputs the virtual parking space through the storage position setting window, receiving the virtual parking space and calculating the virtual parking space. Specifically, the virtual parking space is calculated by using the library position detection model to obtain the library position coordinates and the library position type of the virtual parking space.
The first driving module is used for driving the path planning module to plan a path according to the library position coordinates and the library position type.
Parameters such as driving modes, directions, distances and the like between the current position of the vehicle and the space storage position are planned, and therefore the automatic parking path required by the vehicle for parking is obtained. The automatic parking system can realize automatic parking control on the vehicle according to the automatic parking path, so that the vehicle can be parked to the space garage without user intervention.
It should be noted that, if a space library position can be detected by the library position detection model, the path planning is performed based on the library position coordinates and library position type of the space library position calculated by that model; conversely, if no space library position is detected, the path is planned based on the library position coordinates and library position type of the virtual parking position.
Compared with the previous embodiment, the embodiment can widen the application scene of automatic parking by receiving the virtual parking space input by the user, so that the path planning can be performed according to the virtual parking space input by the user even under the condition that the space storage position cannot be obtained according to the aerial view, and the use experience of the user is improved.
Example six
Fig. 6 is a block diagram illustrating a structure of another around parking sensing device according to an embodiment of the present application.
As shown in fig. 6, the all-round parking sensing apparatus provided in this embodiment is additionally provided with a second parking space processing module 80 and a second driving module 90 on the basis of the previous embodiment.
The second parking space processing module is used for processing the bird's-eye view by using the space detection model.
Specifically, when the first parking space processing module finds, by processing the bird's-eye view, that a library position line exists, a parking area is preliminarily obtained by processing the library position line; the bird's-eye view is then further processed by the space detection model to obtain the positions of obstacles in the parking area, and the actual parking position is finally determined from the parking area and the obstacle positions. The library position detection model is further used to obtain the library position coordinates and library position type of the actual parking position; the calculation of these has been introduced above and is not repeated here.
And the second driving module is used for planning the path according to the library position coordinates and the library position type.
Parameters such as driving modes, directions, distances and the like between the current position of the vehicle and the space storage position are planned, and therefore the automatic parking path required by the vehicle for parking is obtained. The automatic parking system can realize automatic parking control on the vehicle according to the automatic parking path, so that the vehicle can be parked to the space garage without user intervention.
It is worth pointing out that if a library position line exists, the library position coordinates and library position type on which the path planning is based are calculated from the actual parking position; only when there is no library position line but the library position detection model can still detect a space library position is the path planned according to the library position coordinates and library position type of that space library position; and if no space library position is detected either, the path is planned according to the library position coordinates and library position type of the virtual parking position.
Compared with the two embodiments, the embodiment can directly detect the actual parking space under the condition of detecting the library bit line, and performs path planning according to the actual parking space, thereby avoiding subsequent complex calculation and improving the efficiency of path planning.
Example seven
The embodiment provides an automatic parking system, which is applied to a vehicle and is provided with the all-round parking sensing device provided by any one of the fourth to sixth embodiments. The panoramic parking sensing device is used for receiving a plurality of fisheye images acquired by a plurality of fisheye lenses positioned around a vehicle; splicing a plurality of fisheye images into a bird's-eye view; processing the aerial view by using a library position detection model obtained by deep learning to obtain a library position coordinate and a library position type of a space library position; and planning the path according to the library position coordinates and the library position type to obtain the automatic parking path. According to the scheme, the related information of the parking space is directly obtained from the aerial view and the path is planned, so that the automatic parking system can be suitable for realizing automatic parking under the condition of no parking space identification, and the problem that the automatic parking system cannot park under the condition is solved.
Example eight
The embodiment provides an automatic parking system which is applied to a vehicle and comprises at least one processor and a memory which are connected through a data bus.
The memory is used for storing a computer program or instructions, the processor acquires the computer program or instructions from the memory through the data bus and executes the computer program or instructions, and the automatic parking system can execute the following operations through executing the computer program or instructions:
receiving a plurality of fisheye images acquired by a plurality of fisheye lenses positioned around a vehicle;
splicing a plurality of fisheye images into a bird's-eye view;
processing the aerial view by using a library position detection model obtained by deep learning to obtain a library position coordinate and a library position type of a space library position;
and planning the path according to the library position coordinates and the library position type to obtain the automatic parking path.
If the automatic parking system processes the aerial view by using the library position detection model but cannot obtain a space library position, it is further configured to execute the following operations:
displaying a library position setting window on a display interface of the vehicle machine or a display interface of the mobile equipment of the user;
receiving a virtual parking space input by a user through a storage space setting window;
calculating the position coordinates and the position types of the virtual parking positions;
and executing a path planning step according to the position coordinates and the position types of the virtual parking positions to obtain an automatic parking path.
In addition, in the process of processing the bird's eye view by using the library bit detection model, if the library bit line is found, the automatic parking system is further configured to perform the following operations:
processing the aerial view according to the library position line by using a space detection model obtained by deep learning to obtain an actual parking position and its library position coordinates and library position type;
and executing a step of path planning according to the library position coordinates and the library position type to obtain the automatic parking path.
Example nine
The embodiment provides a vehicle, which is provided with the automatic parking system in the seventh embodiment or the eighth embodiment, wherein the automatic parking system is provided with a looking-around parking sensing device, and the device is used for receiving a plurality of fisheye images acquired by a plurality of fisheye lenses positioned on the periphery of the vehicle; splicing a plurality of fisheye images into a bird's-eye view; processing the aerial view by using a library position detection model obtained by deep learning to obtain a library position coordinate and a library position type of a space library position; and planning the path according to the library position coordinates and the library position type to obtain the automatic parking path. According to the scheme, the related information of the parking space is directly obtained from the aerial view and the path is planned, so that the automatic parking system can be suitable for realizing automatic parking under the condition of no parking space identification, and the problem that the automatic parking system cannot park under the condition is solved.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one of skill in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The technical solutions provided by the present application are introduced in detail, and specific examples are applied in the description to explain the principles and embodiments of the present application, and the descriptions of the above examples are only used to help understanding the method and the core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (13)
1. A look-around parking sensing method is applied to an automatic parking system of a vehicle, and is characterized by comprising the following steps:
receiving a plurality of fisheye images acquired by a plurality of fisheye lenses positioned around the vehicle;
stitching the plurality of fisheye images into a bird's-eye view;
processing the bird's-eye view with a parking-space detection model obtained through deep learning to obtain the parking-space coordinates and parking-space type of a spatial parking space, wherein the parking-space detection model comprises an object detection neural network and a classification neural network, and processing the bird's-eye view with the parking-space detection model obtained through deep learning comprises the following steps: if no parking-space marking can be obtained from the bird's-eye view, processing the bird's-eye view with the object detection neural network to obtain an image patch of the spatial parking space and the parking-space coordinates of the spatial parking space, and processing the image patch with the classification neural network to obtain the parking-space type of the spatial parking space; the parking-space types comprise parallel, perpendicular, and angled parking spaces;
and performing path planning according to the parking-space coordinates and the parking-space type to obtain an automatic parking path.
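The claimed perception pipeline — an object detection network localizing a free space in the stitched bird's-eye view, followed by a classification network typing the detected patch — can be sketched as follows. This is an illustrative skeleton only: the stub functions stand in for the two neural networks, and none of the names or thresholds come from the patent.

```python
# Illustrative sketch of the claimed two-stage perception: detect a patch
# in the bird's-eye view, then classify its parking-space type.
from dataclasses import dataclass

@dataclass
class ParkingSpace:
    corners: list      # four (x, y) corners in bird's-eye-view pixels
    space_type: str    # "parallel", "perpendicular" or "angled"

def detect_patch(bird_eye_view):
    """Stub for the object-detection network: returns one bounding box."""
    h, w = len(bird_eye_view), len(bird_eye_view[0])
    return (w // 4, h // 4, w // 2, h // 2)  # (x, y, width, height)

def classify_patch(patch_box):
    """Stub for the classification network: wide boxes read as parallel."""
    _, _, w, h = patch_box
    return "parallel" if w > h else "perpendicular"

def perceive_parking_space(bird_eye_view):
    x, y, w, h = detect_patch(bird_eye_view)
    corners = [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]
    return ParkingSpace(corners, classify_patch((x, y, w, h)))

# A dummy 200x400 bird's-eye view exercises the pipeline:
bev = [[0] * 400 for _ in range(200)]
space = perceive_parking_space(bev)
print(space.space_type)  # wide stub box -> "parallel"
```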
2. The surround-view parking sensing method as claimed in claim 1, wherein stitching the plurality of fisheye images into a bird's-eye view comprises:
performing distortion correction on each fisheye image;
performing coordinate mapping on each corrected fisheye image according to the calibrated position of the camera corresponding to that fisheye image;
and stitching the plurality of fisheye images obtained after the coordinate mapping into the bird's-eye view.
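The coordinate-mapping step in claim 2 is conventionally realized with a per-camera ground-plane homography. A minimal numpy sketch, assuming distortion correction has already been applied and using an invented 3×3 homography `H` in place of real calibration data:

```python
# Sketch of mapping undistorted image pixels onto the shared bird's-eye-view
# ground plane via a homography. H below is illustrative (pure scale and
# shift), not a real camera calibration.
import numpy as np

def map_to_bev(points_xy, H):
    """Apply homography H to Nx2 pixel coordinates (homogeneous divide)."""
    pts = np.hstack([points_xy, np.ones((len(points_xy), 1))])  # Nx3
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

H = np.array([[0.5, 0.0, 100.0],
              [0.0, 0.5,  50.0],
              [0.0, 0.0,   1.0]])

corners = np.array([[0.0, 0.0], [640.0, 480.0]])
print(map_to_bev(corners, H))  # scale by 0.5, shift by (100, 50)
```

After each camera's pixels are mapped this way, stitching reduces to compositing the mapped images onto one canvas.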
3. The surround-view parking sensing method as claimed in claim 1, wherein if no spatial parking space is obtained after processing the bird's-eye view with the parking-space detection model, the method comprises the following steps:
displaying a parking-space setting window on a display interface of the vehicle's head unit or on a display interface of the user's mobile device;
calculating the parking-space coordinates and parking-space type of a virtual parking space entered by the user through the parking-space setting window;
and executing the step of path planning according to the parking-space coordinates and parking-space type of the virtual parking space to obtain the automatic parking path.
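The virtual-space fallback of claim 3 can be illustrated by converting a rectangle the user drags on the display into coordinates and a type. The aspect-ratio heuristic below is an assumption for illustration, not the patent's calculation:

```python
# Hypothetical sketch: derive coordinates and a type from a user-drawn
# virtual parking space (two opposite corners of a rectangle).
def virtual_space(p0, p1):
    """p0, p1: opposite corners (x, y) of the user-drawn rectangle."""
    (x0, y0), (x1, y1) = p0, p1
    w, h = abs(x1 - x0), abs(y1 - y0)
    corners = [(min(x0, x1), min(y0, y1)), (max(x0, x1), max(y0, y1))]
    space_type = "parallel" if w > h else "perpendicular"
    return corners, space_type

print(virtual_space((30, 10), (10, 60)))  # tall box -> perpendicular
```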
4. The surround-view parking sensing method as claimed in claim 1, wherein if a parking-space line is found while processing the bird's-eye view with the parking-space detection model, the method comprises the following steps:
processing the bird's-eye view according to the parking-space line, using a space detection module obtained through deep learning, to obtain an actual parking space together with its parking-space coordinates and parking-space type;
and executing the step of path planning according to the parking-space coordinates and the parking-space type to obtain the automatic parking path.
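The line-guided branch of claims 4 and 8 can be shown with a toy stand-in: the deep-learning space detection module is replaced here by a trivial scan for two painted line columns in a binarized bird's-eye view. The grid encoding and function are assumptions for illustration, not the patented module.

```python
# Toy stand-in for line-based space detection: given a 0/1 grid where
# painted parking-space lines appear as full columns of 1s, return the
# x-extent of the space spanned between the two outermost lines.
def find_space_between_lines(grid):
    line_cols = [x for x in range(len(grid[0]))
                 if all(row[x] == 1 for row in grid)]
    return (min(line_cols), max(line_cols)) if len(line_cols) >= 2 else None

# Two vertical lines at x = 2 and x = 7 on a 5x10 grid:
grid = [[1 if x in (2, 7) else 0 for x in range(10)] for _ in range(5)]
print(find_space_between_lines(grid))  # -> (2, 7)
```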
5. A surround-view parking sensing apparatus, applied to an automatic parking system of a vehicle, characterized by comprising:
an image receiving module, used for receiving a plurality of fisheye images acquired by a plurality of fisheye lenses positioned around the vehicle;
an image stitching module, used for stitching the plurality of fisheye images into a bird's-eye view;
a first parking-space processing module, used for processing the bird's-eye view with a parking-space detection model obtained through deep learning to obtain the parking-space coordinates and parking-space type of a spatial parking space, the parking-space detection model comprising an object detection neural network and a classification neural network; the first parking-space processing module comprises a first processing unit and a second processing unit, wherein, if no parking-space marking can be obtained from the bird's-eye view, the first processing unit is used for processing the bird's-eye view with the object detection neural network to obtain an image patch of the spatial parking space and the parking-space coordinates of the spatial parking space, and the second processing unit is used for processing the image patch with the classification neural network to obtain the parking-space type of the spatial parking space; the parking-space types comprise parallel, perpendicular, and angled parking spaces;
and a path planning module, used for performing path planning according to the parking-space coordinates and the parking-space type to obtain an automatic parking path.
6. The surround-view parking sensing apparatus as claimed in claim 5, wherein the image stitching module comprises:
a distortion correction unit, used for performing distortion correction on each fisheye image;
a coordinate mapping unit, used for performing coordinate mapping on each corrected fisheye image according to the calibrated position of the camera corresponding to that fisheye image;
and a stitching processing unit, used for stitching the plurality of fisheye images obtained after the coordinate mapping into the bird's-eye view.
7. The surround-view parking sensing apparatus as claimed in claim 5, further comprising:
a display control module, used for displaying a parking-space setting window on a display interface of the vehicle's head unit or on a display interface of the user's mobile device if no spatial parking space is obtained after processing the bird's-eye view with the parking-space detection model;
a data calculation module, used for calculating the parking-space coordinates and parking-space type of a virtual parking space entered by the user through the parking-space setting window;
and a first driving module, used for driving the path planning module to perform path planning according to the parking-space coordinates and parking-space type to obtain the automatic parking path.
8. The surround-view parking sensing apparatus as claimed in claim 5, further comprising:
a second parking-space processing module, used for processing the bird's-eye view according to the parking-space line, using the space detection module obtained through deep learning, when the first parking-space processing module finds a parking-space line while processing the bird's-eye view with the parking-space detection model, so as to obtain an actual parking space together with its parking-space coordinates and parking-space type;
and a second driving module, used for driving the path planning module to perform path planning according to the parking-space coordinates and the parking-space type to obtain the automatic parking path.
9. An automatic parking system applied to a vehicle, characterized by comprising the surround-view parking sensing apparatus according to any one of claims 5 to 8.
10. An automatic parking system applied to a vehicle, comprising at least one processor and a memory connected thereto, wherein the memory stores a computer program or instructions, and the processor is configured to execute the computer program or instructions to cause the automatic parking system to perform the following operations:
receiving a plurality of fisheye images acquired by a plurality of fisheye lenses positioned around the vehicle;
stitching the plurality of fisheye images into a bird's-eye view;
processing the bird's-eye view with a parking-space detection model obtained through deep learning to obtain the parking-space coordinates and parking-space type of a spatial parking space, wherein the parking-space detection model comprises an object detection neural network and a classification neural network, and processing the bird's-eye view with the parking-space detection model obtained through deep learning comprises the following steps: if no parking-space marking can be obtained from the bird's-eye view, processing the bird's-eye view with the object detection neural network to obtain an image patch of the spatial parking space and the parking-space coordinates of the spatial parking space, and processing the image patch with the classification neural network to obtain the parking-space type of the spatial parking space; the parking-space types comprise parallel, perpendicular, and angled parking spaces;
and performing path planning according to the parking-space coordinates and the parking-space type to obtain an automatic parking path.
11. The automatic parking system of claim 10, wherein if no spatial parking space is obtained after processing the bird's-eye view with the parking-space detection model, the automatic parking system is further configured to:
display a parking-space setting window on a display interface of the vehicle's head unit or on a display interface of the user's mobile device;
calculate the parking-space coordinates and parking-space type of a virtual parking space entered by the user through the parking-space setting window;
and execute the step of path planning according to the parking-space coordinates and parking-space type of the virtual parking space to obtain the automatic parking path.
12. The automatic parking system of claim 10, wherein if a parking-space line is found while processing the bird's-eye view with the parking-space detection model, the automatic parking system is further configured to:
process the bird's-eye view according to the parking-space line, using a space detection module obtained through deep learning, to obtain an actual parking space together with its parking-space coordinates and parking-space type;
and execute the step of path planning according to the parking-space coordinates and the parking-space type to obtain the automatic parking path.
13. A vehicle provided with an automatic parking system according to any one of claims 10 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811653281.5A CN111376895B (en) | 2018-12-29 | 2018-12-29 | Around-looking parking sensing method and device, automatic parking system and vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111376895A CN111376895A (en) | 2020-07-07 |
CN111376895B true CN111376895B (en) | 2022-03-25 |
Family
ID=71221215
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811653281.5A Active CN111376895B (en) | 2018-12-29 | 2018-12-29 | Around-looking parking sensing method and device, automatic parking system and vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111376895B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114078239A (en) * | 2020-08-13 | 2022-02-22 | 纵目科技(上海)股份有限公司 | Vehicle boundary line detection device and detection method of intelligent parking system |
CN112298168B (en) * | 2020-11-06 | 2022-04-22 | 北京罗克维尔斯科技有限公司 | Parking space detection method and device and automatic parking method and device |
CN112669615B (en) * | 2020-12-09 | 2023-04-25 | 上汽大众汽车有限公司 | Parking space detection method and system based on camera |
CN112418183A (en) * | 2020-12-15 | 2021-02-26 | 广州小鹏自动驾驶科技有限公司 | Parking lot element extraction method and device, electronic equipment and storage medium |
CN112906946B (en) * | 2021-01-29 | 2024-03-29 | 北京百度网讯科技有限公司 | Road information prompting method, device, equipment, storage medium and program product |
CN113370993A (en) * | 2021-06-11 | 2021-09-10 | 北京汽车研究总院有限公司 | Control method and control system for automatic driving of vehicle |
CN113513985B (en) * | 2021-06-30 | 2023-05-16 | 广州小鹏自动驾驶科技有限公司 | Optimization method and device for precision detection, electronic equipment and medium |
CN113409194B (en) * | 2021-06-30 | 2024-03-22 | 上海汽车集团股份有限公司 | Parking information acquisition method and device, and parking method and device |
CN113513984B (en) * | 2021-06-30 | 2024-01-09 | 广州小鹏自动驾驶科技有限公司 | Parking space recognition precision detection method and device, electronic equipment and storage medium |
CN113886042A (en) * | 2021-09-28 | 2022-01-04 | 安徽江淮汽车集团股份有限公司 | TDA2X vehicle gauge control platform-based parking space recognition algorithm deployment and scheduling method |
CN114030463B (en) * | 2021-11-23 | 2024-05-14 | 上海汽车集团股份有限公司 | Path planning method and device for automatic parking system |
CN118521996A (en) * | 2024-07-25 | 2024-08-20 | 苏州魔视智能科技有限公司 | Parking space identification method and device, computer equipment and storage medium |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101877570B1 (en) * | 2012-04-04 | 2018-07-11 | 현대자동차주식회사 | Apparatus for setting parking position based on around view image and method thereof |
KR20150034397A (en) * | 2013-09-26 | 2015-04-03 | 서진이엔에스(주) | A management method of a parking area |
KR102153030B1 (en) * | 2013-11-05 | 2020-09-07 | 현대모비스 주식회사 | Apparatus and Method for Assisting Parking |
CN103600707B (en) * | 2013-11-06 | 2016-08-17 | 同济大学 | A kind of parking position detection device and method of Intelligent parking system |
CN107886080A (en) * | 2017-11-23 | 2018-04-06 | 同济大学 | One kind is parked position detecting method |
CN108154472B (en) * | 2017-11-30 | 2021-10-08 | 惠州市德赛西威汽车电子股份有限公司 | Parking space visual detection method and system integrating navigation information |
CN107993488B (en) * | 2017-12-13 | 2021-07-06 | 深圳市航盛电子股份有限公司 | Parking space identification method, system and medium based on fisheye camera |
CN108875911B (en) * | 2018-05-25 | 2021-06-18 | 同济大学 | Parking space detection method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111376895B (en) | Around-looking parking sensing method and device, automatic parking system and vehicle | |
CN110758246B (en) | Automatic parking method and device | |
CN112180373B (en) | Multi-sensor fusion intelligent parking system and method | |
KR101854554B1 (en) | Method, device and storage medium for calculating building height | |
JP6700373B2 (en) | Apparatus and method for learning object image packaging for artificial intelligence of video animation | |
CN108107897B (en) | Real-time sensor control method and device | |
CN111754388B (en) | Picture construction method and vehicle-mounted terminal | |
CN111967396A (en) | Processing method, device and equipment for obstacle detection and storage medium | |
CN113901961B (en) | Parking space detection method, device, equipment and storage medium | |
WO2020181426A1 (en) | Lane line detection method and device, mobile platform, and storage medium | |
CN111768332A (en) | Splicing method of vehicle-mounted all-around real-time 3D panoramic image and image acquisition device | |
CN112753038A (en) | Method and device for identifying lane change trend of vehicle | |
CN115965934A (en) | Parking space detection method and device | |
CN116343085A (en) | Method, system, storage medium and terminal for detecting obstacle on highway | |
CN114379544A (en) | Automatic parking system, method and device based on multi-sensor pre-fusion | |
CN111046809A (en) | Obstacle detection method, device and equipment and computer readable storage medium | |
CN110727269A (en) | Vehicle control method and related product | |
CN116142172A (en) | Parking method and device based on voxel coordinate system | |
CN113421191B (en) | Image processing method, device, equipment and storage medium | |
CN115565155A (en) | Training method of neural network model, generation method of vehicle view and vehicle | |
CN115249345A (en) | Traffic jam detection method based on oblique photography three-dimensional live-action map | |
CN112498338B (en) | Stock level determination method and device and electronic equipment | |
CN118082811B (en) | Parking control method, device, equipment and medium | |
CN115330695A (en) | Parking information determination method, electronic device, storage medium and program product | |
WO2020248851A1 (en) | Parking space detection method and apparatus, storage medium and electronic device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
2020-06-30 | TA01 | Transfer of patent application right | Address after: Room 509, Building 1, 563 Songtao Road, Zhangjiang High Tech Park, Pudong New Area, Shanghai 201203. Applicant after: SAIC MOTOR Corp., Ltd. Address before: Room 509, Building 1, 563 Songtao Road, Shanghai Pilot Free Trade Zone, 201203. Applicant before: SAIC Motor Corp., Ltd. |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |