CN116052116A - Automatic parking method based on multi-source information perception and end-to-end deep learning - Google Patents

Automatic parking method based on multi-source information perception and end-to-end deep learning

Info

Publication number
CN116052116A
CN116052116A
Authority
CN
China
Prior art keywords
automatic parking
training
cnn
data
neural network
Prior art date
Legal status
Pending
Application number
CN202310006308.6A
Other languages
Chinese (zh)
Inventor
Jiang Haobin (江浩斌)
Ma Zhenpeng (马振棚)
Ma Shidian (马世典)
Current Assignee
Jiangsu University
Original Assignee
Jiangsu University
Priority date
Filing date
Publication date
Application filed by Jiangsu University
Priority to CN202310006308.6A
Publication of CN116052116A
Status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/586 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformation in the plane of the image
    • G06T 3/40 - Scaling the whole image or part thereof
    • G06T 3/4038 - Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T 5/80
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T 2200/32 - Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20112 - Image segmentation details
    • G06T 2207/20132 - Image cropping
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30264 - Parking
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Abstract

The invention belongs to the technical field of automatic parking and discloses an automatic parking method based on multi-source information perception and end-to-end deep learning. Four channels of fisheye image data, together with ultrasonic radar data, steering wheel angle and vehicle speed, are sampled in real time during the parking process to construct an initial data set; the four fisheye images in the initial data set are preprocessed into surround-view image data, from which training samples are constructed; a CNN-LSTM neural network is built and optimized, in which the CNN processes the surround-view image data and the LSTM processes the ultrasonic obstacle-distance data and the current driver-operation data; the training samples are input to the CNN-LSTM network for training to obtain a trained end-to-end automatic parking model; finally, the automatic parking model performs real-vehicle control to realize end-to-end automatic parking. The invention addresses the low planning accuracy, slow response speed and related problems of existing automatic parking methods.

Description

Automatic parking method based on multi-source information perception and end-to-end deep learning
Technical Field
The invention relates to the technical field of automatic parking, in particular to an automatic parking method based on multi-source information perception and end-to-end deep learning.
Background
With social progress and the continuously rising living standard of residents in China, the automobile has become an indispensable means of transport; however, the growing number of vehicles has made urban congestion a major problem that brings great inconvenience to travel. An automatic parking vehicle, also called an unmanned parking vehicle, computer-parked vehicle or wheeled mobile robot, is an intelligent vehicle that realizes unmanned parking through a computer system.
A parking motion strategy based on path planning and path tracking first plans a parking path under the kinematic constraints of the vehicle and then tracks that path with a control algorithm (sensors estimate the parking space and the vehicle body pose, after which the optimal parking path is planned). Although conventional planning methods (e.g., the circular-arc method) can meet the basic requirements, the constraint conditions grow accordingly, the solving process becomes more complex, and planning accuracy and response speed are reduced.
Disclosure of Invention
To address these problems and further improve the parking accuracy and response speed of automatic parking, the invention provides an automatic parking method based on multi-source information perception and end-to-end deep learning that realizes end-to-end automatic parking.
To achieve the above purpose, the specific technical scheme of the invention is as follows: an automatic parking method based on multi-source information perception and end-to-end deep learning comprises the following steps:
1) Sampling the parking process at a sampling frequency f to construct an initial data set D; the initial data set is denoted D = {d_1, d_2, …, d_i, …}, where the i-th sample is d_i = {Pf_i, Pb_i, Pl_i, Pr_i, left_i, right_i, back_i, r_i, v_i}; here Pf_i, Pb_i, Pl_i and Pr_i are images collected by the four fisheye cameras mounted under the front hood, at the rear of the vehicle and under the left and right rearview mirrors, respectively; left_i, right_i and back_i are the distances from the vehicle to obstacles measured by ultrasonic radars installed on the left, right and rear sides of the vehicle; r_i is the steering wheel angle and v_i is the wheel speed at the current sample;
2) Constructing a training sample D' through the initial data set D;
3) Constructing and optimizing a CNN-LSTM neural network;
4) Training a neural network to obtain an automatic parking driving model;
5) Using the automatic parking model to control the real vehicle and realize automatic parking.
Further, the step 2) includes the steps of:
2.1) Calibrating the four vehicle-mounted fisheye cameras by the Zhang Zhengyou calibration method to obtain their calibration parameters, comprising intrinsic and extrinsic parameters;
2.2) Correcting the distortion of the fisheye images Pf, Pb, Pl, Pr using the intrinsic and extrinsic parameters to obtain corrected images Pf', Pb', Pl', Pr';
2.3) Transforming the corrected images Pf', Pb', Pl', Pr' into top views Pf'', Pb'', Pl'', Pr'';
2.4) Cropping and stitching the top views Pf'', Pb'', Pl'', Pr'' to obtain a surround-view mosaic P_O;
2.5) Downsampling the surround-view mosaic P_O to output a fixed-size image P_T;
2.6) Normalizing the image P_T to obtain a training image P;
2.7) Constructing the training sample set D', denoted D' = {d_1', d_2', …, d_i', …}, where d_i' comprises surround-view image frame sequence data, ultrasonic obstacle-distance data and current driver-operation data, denoted d_i' = {P_i, left_i, right_i, back_i, r_i, v_i}; the training label is denoted Label = {left_i, right_i, back_i, r_i, v_i}.
Further, the step 3) includes the steps of:
3.1) Building a CNN-LSTM neural network comprising a CNN, an LSTM and a feature fusion layer, wherein the CNN part consists of 5 convolution layers, 5 pooling layers and 1 fully connected layer; the LSTM part consists of 2 fully connected layers, 1 pooling layer and 20 LSTM units; and the feature fusion part consists of 1 fusion layer and 2 fully connected layers;
3.2) Optimizing the neural network with the Adam optimizer.
Further, the step 4) includes the steps of:
4.1) Inputting the training samples D';
4.2) Calculating the mean square error MSE:
MSE = (1/n) Σ_i Σ_j (Pred_ij - Label_ij)^2
where Pred is the prediction during training, a 2-dimensional tensor of the same size as the training label Label; i and j are the row and column indices; and n is the batch size.
4.3) If MSE > the MSE threshold a, returning to step 4.1) to continue training; otherwise, going to step 5).
The beneficial effects of the invention are as follows: it overcomes the shortcomings of existing automatic parking systems based on path planning and path tracking, namely cumbersome and inefficient calibration in actual parking, the inability to reproduce the parking maneuvers of a skilled driver, and the inability to execute parking control directly from the parking environment.
Drawings
Fig. 1 is a flowchart of an automatic parking method based on multi-source information sensing and end-to-end deep learning according to the present invention.
Detailed Description
The following description of specific embodiments of the invention is provided to facilitate an understanding of the invention by those skilled in the art, and in some instances, well-known means, elements, and circuits have not been described in detail so as not to obscure the invention.
The examples are preferred embodiments of the present invention, but the present invention is not limited to the above-described embodiments, and any obvious modifications, substitutions or variations that can be made by one skilled in the art without departing from the spirit of the present invention are within the scope of the present invention.
As shown in fig. 1, the invention provides an automatic parking method based on multi-source information perception and end-to-end deep learning, which comprises the following steps:
1) Sampling the parking process at a sampling frequency f to construct an initial data set D; the initial data set is denoted D = {d_1, d_2, …, d_i, …}, where the i-th sample is d_i = {Pf_i, Pb_i, Pl_i, Pr_i, left_i, right_i, back_i, r_i, v_i}; here Pf_i, Pb_i, Pl_i and Pr_i are images collected by the four fisheye cameras mounted under the front hood, at the rear of the vehicle and under the left and right rearview mirrors, respectively; left_i, right_i and back_i are the distances from the vehicle to obstacles measured by ultrasonic radars installed on the left, right and rear sides of the vehicle; r_i is the steering wheel angle and v_i is the wheel speed at the current sample;
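For concreteness, one sample d_i can be pictured as the following record; this is a minimal illustrative sketch in Python, and the class name and field types are assumptions of this description rather than anything fixed by the invention.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ParkingSample:
    """One sample d_i captured at sampling frequency f (illustrative layout)."""
    Pf: np.ndarray   # fisheye image from the camera under the front hood
    Pb: np.ndarray   # fisheye image from the camera at the rear of the vehicle
    Pl: np.ndarray   # fisheye image from the camera under the left rearview mirror
    Pr: np.ndarray   # fisheye image from the camera under the right rearview mirror
    left: float      # obstacle distance from the left ultrasonic radar
    right: float     # obstacle distance from the right ultrasonic radar
    back: float      # obstacle distance from the rear ultrasonic radar
    r: float         # steering wheel angle at this sample
    v: float         # wheel speed at this sample
```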
2) Constructing a training sample D' through the initial data set D;
As a preferred embodiment of the present invention, this step includes the following sub-steps (a code sketch follows this list):
2.1) Calibrating the four vehicle-mounted fisheye cameras by the Zhang Zhengyou calibration method to obtain their calibration parameters, comprising intrinsic and extrinsic parameters;
2.2) Correcting the distortion of the fisheye images Pf, Pb, Pl, Pr using the intrinsic and extrinsic parameters to obtain corrected images Pf', Pb', Pl', Pr';
2.3) Transforming the corrected images Pf', Pb', Pl', Pr' into top views: first, constructing a physical coordinate system with any corner point of a large checkerboard as the origin, selecting at least 4 non-collinear control points on the checkerboard, and recording their true physical coordinates; then, locating these control points in the corrected fisheye images and recording their image coordinates; finally, associating the physical coordinates of the control points with their image coordinates to obtain a homography matrix, and using the homography matrix to transform the corrected images into top views Pf'', Pb'', Pl'', Pr'';
2.4) Cropping and stitching the top views Pf'', Pb'', Pl'', Pr'': cutting the selected regions from the front, back, left and right top views to obtain crops P_O1, P_O2, P_O3, P_O4, and stitching the crops into a surround-view mosaic P_O;
2.5) Downsampling the surround-view mosaic P_O to output a 100×100×3 image P_T;
2.6) Normalizing the image P_T: adjusting its saturation, contrast and brightness and adding Gaussian noise for data augmentation to obtain the training image P;
2.7) Constructing the training sample set D', denoted D' = {d_1', d_2', …, d_i', …}, where d_i' comprises surround-view image frame sequence data, ultrasonic obstacle-distance data and current driver-operation data, denoted d_i' = {P_i, left_i, right_i, back_i, r_i, v_i}; the training label is denoted Label = {left_i, right_i, back_i, r_i, v_i}.
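As an illustration of sub-steps 2.1) to 2.6), the image pipeline can be sketched with OpenCV as follows. This is a minimal sketch, not the patented implementation: the checkerboard handling, crop regions, mosaic layout, intermediate top-view size and noise level are all assumptions of this description.

```python
import cv2
import numpy as np

def calibrate_fisheye(obj_pts, img_pts, image_size):
    """Sub-step 2.1): Zhang Zhengyou calibration of one fisheye camera.
    obj_pts/img_pts are per-view checkerboard corners in the shapes
    expected by cv2.fisheye.calibrate."""
    K = np.zeros((3, 3))
    D = np.zeros((4, 1))
    cv2.fisheye.calibrate(obj_pts, img_pts, image_size, K, D,
                          flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC)
    return K, D  # intrinsic matrix and distortion coefficients

def undistort(img, K, D):
    """Sub-step 2.2): distortion correction with the calibrated parameters."""
    h, w = img.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(img, map1, map2, cv2.INTER_LINEAR)

def to_top_view(img, img_pts, world_pts, out_size=(400, 400)):
    """Sub-step 2.3): homography from >= 4 non-collinear checkerboard control
    points whose image and physical coordinates are both known."""
    H, _ = cv2.findHomography(np.float32(img_pts), np.float32(world_pts))
    return cv2.warpPerspective(img, H, out_size)

def build_training_image(top_f, top_b, top_l, top_r, rng):
    """Sub-steps 2.4)-2.6): crop, stitch, downsample, normalize, augment.
    The crop regions and mosaic layout below are placeholders."""
    mosaic = np.zeros((400, 400, 3), np.uint8)       # surround-view mosaic P_O
    mosaic[:100, :] = top_f[-100:, :400]             # crop P_O1 from the front view
    mosaic[-100:, :] = top_b[:100, :400]             # crop P_O2 from the back view
    mosaic[100:300, :100] = top_l[:200, -100:]       # crop P_O3 from the left view
    mosaic[100:300, -100:] = top_r[:200, :100]       # crop P_O4 from the right view
    P_T = cv2.resize(mosaic, (100, 100))             # sub-step 2.5): 100x100x3 image P_T
    P = P_T.astype(np.float32) / 255.0               # sub-step 2.6): normalization
    P = P + rng.normal(0.0, 0.01, P.shape).astype(np.float32)  # Gaussian-noise augmentation
    return np.clip(P, 0.0, 1.0)                      # training image P
```

Here rng would be a numpy.random.Generator; the saturation, contrast and brightness adjustment of sub-step 2.6) is omitted for brevity.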
3) Constructing and optimizing a CNN-LSTM neural network;
As a preferred embodiment of the present invention, this step includes the following sub-steps:
3.1) Building a CNN-LSTM neural network comprising a CNN part, an LSTM part and a feature fusion layer, wherein the CNN part consists of 5 convolution layers, 5 pooling layers and 1 fully connected layer; the LSTM part consists of 2 fully connected layers, 1 pooling layer and 20 LSTM units; and the feature fusion part consists of 1 fusion layer and 2 fully connected layers;
3.2) Employing the Adam (adaptive moment estimation) optimizer to accelerate model convergence.
4) Training the neural network: the training sample data D' are input into the constructed deep neural network for training to obtain a trained end-to-end automatic parking driving model;
As a preferred embodiment of the present invention, this step includes the following sub-steps (a code sketch follows the layer list):
4.1) Inputting the training samples D';
CNN:
Convolution layer 1: 3×3 kernel, stride 1, same padding, ReLU activation;
Pooling layer 1: 2×1, stride 2;
Convolution layer 2: 3×3 kernel, stride 1, same padding, ReLU activation;
Pooling layer 2: 2×2, stride 2;
Convolution layer 3: 3×3 kernel, stride 1, same padding, ReLU activation;
Pooling layer 3: 2×2, stride 2;
Convolution layer 4: 3×3 kernel, stride 1, same padding, ReLU activation;
Pooling layer 4: 2×2, stride 2;
Convolution layer 5: 3×3 kernel, stride 1, same padding, ReLU activation;
Pooling layer 5: 2×2, stride 2;
Fully connected layer: 5120 neurons, tanh activation;
A dropout rate of 0.1 is set to prevent overfitting;
LSTM:
Fully connected layer 1: 20 neurons, ReLU activation;
Fully connected layer 2: 50 neurons, ReLU activation;
LSTM layer: 20 LSTM units, step size 5;
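For illustration only, the layer specification above can be assembled into the following PyTorch sketch. The convolution channel widths, the fusion-layer width and the 5-dimensional output head (matching Label = {left_i, right_i, back_i, r_i, v_i}) are assumptions, since the text does not fix them; pooling layer 1 is assumed to be 2×2 like the others, and the single pooling layer of the LSTM branch is omitted for brevity.

```python
import torch
import torch.nn as nn

class CNNLSTMParking(nn.Module):
    """Sketch of the CNN-LSTM network of step 3.1); widths marked below are assumed."""
    def __init__(self, lstm_units=20):
        super().__init__()
        chans = [3, 24, 36, 48, 64, 64]  # assumed channel widths
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv2d(c_in, c_out, 3, stride=1, padding=1),  # 3x3, same padding
                       nn.ReLU(),
                       nn.MaxPool2d(2, stride=2)]                        # 2x2, stride 2
        self.cnn = nn.Sequential(*blocks)  # 100x100 input -> 3x3 after 5 pools
        self.cnn_fc = nn.Sequential(nn.Flatten(),
                                    nn.Linear(64 * 3 * 3, 5120), nn.Tanh(),
                                    nn.Dropout(0.1))                     # dropout rate 0.1
        # LSTM branch: 5 features {left, right, back, r, v} per time step
        self.pre = nn.Sequential(nn.Linear(5, 20), nn.ReLU(),            # FC layer 1: 20 neurons
                                 nn.Linear(20, 50), nn.ReLU())           # FC layer 2: 50 neurons
        self.lstm = nn.LSTM(input_size=50, hidden_size=lstm_units, batch_first=True)
        # feature fusion: concatenation followed by 2 fully connected layers
        self.head = nn.Sequential(nn.Linear(5120 + lstm_units, 100), nn.ReLU(),
                                  nn.Linear(100, 5))  # predicts {left, right, back, r, v}

    def forward(self, image, seq):
        # image: (B, 3, 100, 100) surround-view frame P
        # seq:   (B, T, 5) sequence of {left, right, back, r, v}, T = 5 time steps
        f_img = self.cnn_fc(self.cnn(image))
        out, _ = self.lstm(self.pre(seq))
        f_seq = out[:, -1, :]  # hidden state of the last time step
        return self.head(torch.cat([f_img, f_seq], dim=1))
```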
4.2) Calculating the mean square error MSE:
MSE = (1/n) Σ_i Σ_j (Pred_ij - Label_ij)^2
where Pred is the prediction during training, a 2-dimensional tensor of the same size as the training label Label; i and j are the row and column indices; and n is the batch size.
4.3) If MSE > the MSE threshold a, training returns to step 4.1) and continues; otherwise, the automatic parking model is obtained and the method proceeds to step 5). In this embodiment of the invention, the MSE threshold a = 0.005.
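A minimal training loop matching sub-steps 4.1) to 4.3) and the Adam optimizer of step 3.2) might look as follows; the data loader, learning rate and epoch cap are assumptions, and the loss uses PyTorch's default MSE normalization (mean over all elements), which differs from the formula above only by a constant factor.

```python
import torch
import torch.nn as nn

def train(model, loader, a=0.005, lr=1e-3, max_epochs=100):
    """Sub-steps 4.1)-4.3): train with Adam until the MSE drops below threshold a."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # step 3.2): Adam optimizer
    mse = nn.MSELoss()
    for epoch in range(max_epochs):
        total, batches = 0.0, 0
        for image, seq, label in loader:   # training samples d_i' and labels Label
            pred = model(image, seq)       # Pred: same size as Label
            loss = mse(pred, label)
            opt.zero_grad()
            loss.backward()
            opt.step()
            total, batches = total + loss.item(), batches + 1
        if total / batches <= a:           # sub-step 4.3): MSE <= a, model obtained
            break
    return model
```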
5) Performing real-vehicle control with the automatic parking model to realize automatic parking:
the surround-view image is acquired and input in real time through the cameras, and the ultrasonic radar distance data are acquired;
the predicted steering wheel angle data r and vehicle speed data v are output.
From the real-time image information acquired by the four fisheye cameras and the real-time distance information acquired by the ultrasonic radars, the end-to-end automatic parking model outputs driving commands and completes the control. Real-vehicle control results show a collision rate of only 2.8% and a parking-space deviation rate of only 1.2%.
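Schematically, the deployed control loop of step 5) reduces to the following; the sensor and actuator interfaces are placeholders invented for this sketch, not part of the disclosed system.

```python
import torch

def parking_control_loop(model, sensors, actuators, steps=5):
    """Step 5): closed-loop real-vehicle control (interfaces assumed)."""
    model.eval()
    history = []  # rolling window of {left, right, back, r, v} for the LSTM branch
    while not sensors.parked():
        P = sensors.surround_view_image()          # stitched, normalized 100x100x3 frame
        left, right, back = sensors.ultrasonic()   # obstacle distances
        r, v = actuators.current_state()           # current steering angle and speed
        history.append([left, right, back, r, v])
        history = history[-steps:]                 # keep the last 5 samples
        if len(history) < steps:
            continue
        image = torch.from_numpy(P).permute(2, 0, 1).unsqueeze(0).float()
        seq = torch.tensor([history], dtype=torch.float32)
        with torch.no_grad():
            pred = model(image, seq)[0]
        # indices 3 and 4 follow the Label order {left, right, back, r, v}
        actuators.apply(steering=pred[3].item(), speed=pred[4].item())
```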

Claims (4)

1. An automatic parking method based on multi-source information perception and end-to-end deep learning is characterized by comprising the following steps:
1) Sampling the parking process at a sampling frequency f to construct an initial data set D; the initial data set is denoted D = {d_1, d_2, …, d_i, …}, where the i-th sample is d_i = {Pf_i, Pb_i, Pl_i, Pr_i, left_i, right_i, back_i, r_i, v_i}; here Pf_i, Pb_i, Pl_i and Pr_i are images collected by the four fisheye cameras mounted under the front hood, at the rear of the vehicle and under the left and right rearview mirrors, respectively; left_i, right_i and back_i are the distances from the vehicle to obstacles measured by ultrasonic radars installed on the left, right and rear sides of the vehicle; r_i is the steering wheel angle and v_i is the wheel speed at the current sample;
2) Constructing a training sample D' through the initial data set D;
3) Constructing and optimizing a CNN-LSTM neural network;
4) Training a neural network to obtain an automatic parking driving model;
5) Using the automatic parking model to control the real vehicle and realize automatic parking.
2. The automatic parking method based on multi-source information sensing and end-to-end deep learning according to claim 1, wherein the step 2) comprises the steps of:
2.1) Calibrating the four vehicle-mounted fisheye cameras by the Zhang Zhengyou calibration method to obtain their calibration parameters, comprising intrinsic and extrinsic parameters;
2.2) Correcting the distortion of the fisheye images Pf, Pb, Pl, Pr using the intrinsic and extrinsic parameters to obtain corrected images Pf', Pb', Pl', Pr';
2.3) Transforming the corrected images Pf', Pb', Pl', Pr' into top views Pf'', Pb'', Pl'', Pr'';
2.4) Cropping and stitching the top views Pf'', Pb'', Pl'', Pr'' to obtain a surround-view mosaic P_O;
2.5) Downsampling the surround-view mosaic P_O to output a fixed-size image P_T;
2.6) Normalizing the image P_T to obtain a training image P;
2.7) Constructing the training sample set D', denoted D' = {d_1', d_2', …, d_i', …}, where d_i' comprises surround-view image frame sequence data, ultrasonic obstacle-distance data and current driver-operation data, denoted d_i' = {P_i, left_i, right_i, back_i, r_i, v_i}; the training label is denoted Label = {left_i, right_i, back_i, r_i, v_i}.
3. The automatic parking method based on multi-source information sensing and end-to-end deep learning according to claim 1, wherein the step 3) comprises the steps of:
3.1) Building a CNN-LSTM neural network comprising a CNN, an LSTM and a feature fusion layer, wherein the CNN part consists of 5 convolution layers, 5 pooling layers and 1 fully connected layer; the LSTM part consists of 2 fully connected layers, 1 pooling layer and 20 LSTM units; and the feature fusion part consists of 1 fusion layer and 2 fully connected layers;
3.2) Optimizing the neural network with the Adam optimizer.
4. The automatic parking method based on multi-source information sensing and end-to-end deep learning according to claim 1, wherein the step 4) comprises the steps of:
4.1) Inputting the training samples D';
4.2) Calculating the mean square error MSE:
MSE = (1/n) Σ_i Σ_j (Pred_ij - Label_ij)^2
where Pred is the prediction during training, a 2-dimensional tensor of the same size as the training label Label; i and j are the row and column indices; and n is the batch size.
4.3) If MSE > the MSE threshold a, returning to step 4.1) to continue training; otherwise, going to step 5).
CN202310006308.6A 2023-01-04 2023-01-04 Automatic parking method based on multi-source information perception and end-to-end deep learning Pending CN116052116A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310006308.6A CN116052116A (en) 2023-01-04 2023-01-04 Automatic parking method based on multi-source information perception and end-to-end deep learning


Publications (1)

Publication Number Publication Date
CN116052116A (en) 2023-05-02

Family

ID=86119589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310006308.6A Pending CN116052116A (en) 2023-01-04 2023-01-04 Automatic parking method based on multi-source information perception and end-to-end deep learning

Country Status (1)

Country Link
CN (1) CN116052116A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116612458A (en) * 2023-05-30 2023-08-18 易飒(广州)智能科技有限公司 Deep learning-based parking path determination method and system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination