WO2022165614A1 - Path construction method, apparatus, terminal and storage medium - Google Patents

Path construction method, apparatus, terminal and storage medium

Info

Publication number
WO2022165614A1
Authority
WO
WIPO (PCT)
Prior art keywords
drivable
path
vehicle
driving
preset
Prior art date
Application number
PCT/CN2020/137305
Other languages
English (en)
French (fr)
Inventor
张剑锋
林潇
宇文志强
Original Assignee
浙江吉利控股集团有限公司
宁波吉利汽车研究开发有限公司
Priority date
Filing date
Publication date
Application filed by 浙江吉利控股集团有限公司 and 宁波吉利汽车研究开发有限公司
Priority to PCT/CN2020/137305 (WO2022165614A1)
Priority to CN202080108019.1A (CN117015814A)
Priority to EP20968180.8A (EP4296888A1)
Publication of WO2022165614A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38: Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804: Creation or updating of map data
    • G01C21/3833: Creation or updating of map data characterised by the source of data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715: Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00: Registering or indicating the working of vehicles
    • G07C5/02: Registering or indicating driving, working, idle, or waiting time only
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30241: Trajectory
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30248: Vehicle exterior or interior
    • G06T2207/30252: Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256: Lane; Road marking

Definitions

  • the invention relates to the technical field of self-learning for autonomous vehicles, and in particular to a path construction method, apparatus, terminal and storage medium.
  • the present invention discloses a path construction method in which, while driving on a preset driving path in an underground garage, the vehicle automatically learns a path trajectory, so that a path can be planned automatically for subsequent autonomous driving.
  • the present invention provides a path construction method, the method includes:
  • the method includes:
  • a target top view corresponding to the initial image is obtained through calculation with a nonlinear difference correction algorithm;
  • the partition image includes an image of a drivable area and an image of a non-drivable area
  • a path trajectory corresponding to the vehicle driving state information is generated based on the drivable area, where the path trajectory is one of the preset driving paths.
  • the method further includes:
  • a map corresponding to the preset driving path is constructed based on the path trajectory.
  • obtaining, in real time, the vehicle driving state information and the initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information includes:
  • the vehicle driving state information includes the driving strategy and the driver's driving habits while the vehicle travels on the preset driving path;
  • an initial image of the surrounding environment of the preset driving path is acquired in real time during the process of the vehicle traveling on the preset driving path.
  • the drivable area includes a drivable road and a drivable intersection
  • obtaining a driving strategy of the vehicle during driving on the preset driving path includes:
  • the driving strategy of the vehicle is determined according to the forward mileage of the vehicle and the heading angle, and the driving strategy of the vehicle includes the forward mileage on the drivable road and whether to turn at the drivable intersection.
  • the drivable area includes a drivable road and a drivable intersection; obtaining the driving habits of the driver during the process of the vehicle traveling on the preset driving path includes:
  • the features are input into the fully connected network, and the driving habit of the driver during the driving process of the vehicle on the preset driving path is predicted; the driving habit includes the driving speed of the drivable road and the steering angle of the drivable intersection.
  • the method further includes: when driving on the preset driving path again,
  • a target top view corresponding to the initial image is obtained through calculation with a nonlinear difference correction algorithm;
  • a current path trajectory corresponding to the vehicle driving state information is generated based on the drivable area, where the current path trajectory is one of the preset driving paths.
  • the method further includes:
  • multi-trajectory fusion is performed on the current path trajectory and the previously obtained path trajectory, and the map corresponding to the preset driving path is reconstructed.
  • before performing multi-trajectory fusion on the current path trajectory and the previously obtained path trajectory and reconstructing the map corresponding to the preset driving path, the method further includes:
  • the current path trajectory and the previously obtained path trajectory are fused.
  • it also includes:
  • when the degree of coincidence between the current path trajectory and the previously obtained path trajectory is smaller than a preset first threshold, it is determined whether the matching degree between the current path trajectory and the preset driving path is smaller than the matching degree between the previously obtained path trajectory and the preset driving path;
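As an illustration of the coincidence check above, a minimal sketch follows; representing trajectories as point lists and measuring coincidence as the fraction of nearby points is an assumption made for the example, not a definition taken from the patent:

```python
# Fraction of points of the current trajectory lying within a tolerance
# of some point of the previously obtained trajectory.
def coincidence(current, previous, tol=1.0):
    def near(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tol ** 2
    hits = sum(1 for p in current if any(near(p, q) for q in previous))
    return hits / len(current)

cur = [(0, 0), (1, 0), (2, 0), (3, 5)]   # current path trajectory
prev = [(0, 0), (1, 0), (2, 0), (3, 0)]  # previously obtained trajectory
c = coincidence(cur, prev)  # 3 of 4 points coincide
```

Below the first threshold, the trajectory whose matching degree with the preset driving path is higher would be kept.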
  • the calculating and obtaining, according to the initial image, the target top view corresponding to the initial image through a nonlinear difference correction algorithm includes:
  • the target image includes an area image that overlaps with the region where the target top view is located;
  • the feature points of each of the region images are matched to reconstruct the top view of the target.
  • calculating and obtaining the target top view corresponding to the initial image through a nonlinear difference correction algorithm according to the initial image further includes:
  • a top view of the target corresponding to the initial image is constructed based on the target coordinate points.
  • the drivable area includes a drivable road and a drivable intersection
  • the generating a path trajectory corresponding to the vehicle driving state information based on the drivable area includes:
  • a path trajectory corresponding to the vehicle driving state information is generated based on the distribution of the drivable roads and the drivable intersections.
  • before identifying the drivable roads and drivable intersections in the drivable area and the distribution of the drivable roads and the drivable intersections, the method further includes:
  • the drivable area is adjusted to reconstruct the drivable area.
  • adjusting the drivable area based on the scan area and reconstructing the drivable area includes:
  • an erosion operation is performed on the expanded area to reconstruct the drivable area.
  • identifying drivable roads, drivable intersections in the drivable area, and distribution of the drivable roads and the drivable intersections includes:
  • the distribution of the drivable roads and the drivable intersections in the drivable area is determined based on the drivable area and the information of the drivable roads in the drivable area and the type of the drivable intersection.
  • the present invention also provides a path construction device, the device includes:
  • a first acquisition module configured to acquire, in real time, vehicle driving state information and an initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information when driving on a preset driving path;
  • a target top view acquisition module configured to calculate and obtain a target top view corresponding to the initial image through a nonlinear difference correction algorithm according to the initial image
  • a partition image acquisition module configured to input the target top view into a preset deep learning model, classify the pixel points of the target top view input into the preset deep learning model, and obtain a partition image, where the partition image includes images of drivable areas and images of non-drivable areas;
  • an identification module for scanning the partition image to identify the drivable area of the vehicle
  • a path trajectory generation module configured to generate a path trajectory corresponding to the vehicle driving state information based on the drivable area, where the path trajectory is one of the preset driving paths.
  • the present invention also provides a path construction terminal, the terminal includes a processor and a memory, the memory stores at least one instruction or at least one piece of program, and the at least one instruction or the at least one piece of program is loaded and executed by the processor to implement the path construction method described above.
  • the present invention also provides a computer-readable storage medium, where at least one instruction or at least one piece of program is stored in the storage medium, and the at least one instruction or the at least one piece of program is loaded and executed by a processor to implement the path construction method described above.
  • the vehicle automatically learns to obtain the path trajectory during the process of driving on the preset driving path in the underground garage, so that the subsequent vehicle can automatically plan the path during the automatic driving process.
  • FIG. 1 is a schematic flowchart of a path construction method according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of obtaining a vehicle driving strategy according to an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of obtaining the driving habits of a driver according to an embodiment of the present invention
  • FIG. 4 is a schematic flowchart of acquiring a top view of a target according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of a top view of acquiring an initial image according to an embodiment of the present invention.
  • FIG. 6 is another schematic flowchart of acquiring a top view of a target according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of obtaining an extreme point position according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of classifying pixels of an image according to an embodiment of the present invention.
  • FIG. 9 is a schematic flowchart of a method for identifying a drivable road and a drivable intersection in a drivable area according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of a recognition result of a drivable road and a drivable intersection in a drivable area provided by an embodiment of the present invention.
  • FIG. 11 is an effect diagram of a path trajectory fusion provided by an embodiment of the present invention.
  • FIG. 12 is a schematic structural diagram of a path construction apparatus provided by an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of a path construction terminal according to an embodiment of the present invention.
  • the path construction method of the present application is applied to the field of automatic driving. Specifically, a vehicle is driven manually through an underground garage at least once, so that the vehicle can automatically learn the paths of the underground garage; an abstract path map is then established so that subsequent vehicles can drive autonomously according to the path map.
  • the following describes the path construction method of the present invention based on the above-mentioned system with reference to FIG. 1; the method can be applied to path construction for autonomous vehicles.
  • the present invention is applicable to, but not limited to, enclosed scenarios, for example as a method of building a virtual map of an underground garage.
  • FIG. 1 is a schematic flowchart of a path construction method provided by an embodiment of the present invention.
  • this specification presents the method operation steps as described in the embodiments or flowcharts, but more or fewer operation steps may be included based on routine or non-creative work.
  • the sequence of steps enumerated in the embodiments is only one of the execution sequences of many steps, and does not represent a unique execution sequence.
  • the path construction method may be executed in the sequence of the methods shown in the embodiments or the accompanying drawings. Specifically, as shown in FIG. 1, the method includes:
  • the driver can manually drive the autonomous vehicle along the preset driving path;
  • the preset driving path may be a drivable path that already exists in the preset driving area; for example, it may be at least one drivable road and a drivable intersection that already exist in an underground garage;
  • the initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information can be acquired in real time through the front-view camera of the vehicle;
  • the initial image may be a two-dimensional image
  • the vehicle automatically acquires vehicle driving state information and an initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information in real time;
  • the real-time acquisition of the vehicle driving state information and the initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information includes the following steps:
  • Step 1 acquiring in real time the vehicle driving state information during the vehicle running on the preset driving path, where the vehicle driving state information includes the driving strategy and the driver's driving habits during the vehicle traveling on the preset driving path;
  • FIG. 2 is a schematic flowchart of obtaining a vehicle driving strategy according to an embodiment of the present invention
  • the drivable area may include a drivable road and a drivable intersection
  • the vehicle speed and steering wheel angle can be obtained according to the vehicle controller area network (Controller Area Network, CAN) signal;
  • S203 determine the forward mileage and heading angle of the vehicle according to the speed of the vehicle and the steering wheel angle
  • the running time of the vehicle can also be obtained.
  • the forward mileage of the vehicle can be calculated according to the speed of the vehicle and the running time of the vehicle;
  • the heading angle of the vehicle can be calculated according to the steering wheel angle of the vehicle.
  • S205 Determine a driving strategy of the vehicle according to the forward mileage of the vehicle and the heading angle, where the driving strategy of the vehicle includes the forward mileage on the drivable road and whether to turn at the drivable intersection.
  • the driving trend of the vehicle may be determined according to the forward mileage of the vehicle and the heading angle;
  • the vehicle driving strategy may include driving data and driving requirements of the vehicle, such as the mileage on the drivable road and whether to turn at the drivable intersection.
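The strategy-determination steps above (S203 and S205) can be sketched as follows. This is an illustrative sketch only: the steering ratio, wheelbase, sample period and the bicycle-model yaw approximation are assumptions made for the example, not values or methods specified by the patent.

```python
import math

# Assumed, illustrative vehicle parameters (not from the patent).
STEERING_RATIO = 16.0  # steering-wheel angle : road-wheel angle
WHEELBASE_M = 2.7      # wheelbase in metres

def integrate_strategy(samples, dt=0.1):
    """samples: list of (speed_mps, steering_wheel_deg) CAN readings."""
    mileage = 0.0   # forward mileage, metres
    heading = 0.0   # heading angle, radians
    for speed, wheel_deg in samples:
        mileage += speed * dt  # forward mileage = speed x running time
        road_wheel = math.radians(wheel_deg) / STEERING_RATIO
        # simple bicycle-model yaw-rate approximation
        heading += speed * math.tan(road_wheel) / WHEELBASE_M * dt
    return mileage, math.degrees(heading)

# 1 s of driving straight at 5 m/s
mileage, heading_deg = integrate_strategy([(5.0, 0.0)] * 10)
```

A turn decision at a drivable intersection could then be made by thresholding the accumulated heading change.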
  • the method of acquiring the vehicle driving strategy in the present application can accurately acquire the driving strategy while the vehicle is driving on the preset driving path, which facilitates subsequently obtaining the vehicle's driving process on the preset driving path according to the driving strategy of the vehicle.
  • FIG. 3 is a schematic flowchart of obtaining the driving habit of a driver according to an embodiment of the present invention
  • the drivable area may include a drivable road and a drivable intersection
  • the operation data of the vehicle while driving on the preset driving path may include operation data such as the steering angle, steering acceleration, vehicle speed, vehicle acceleration, accelerator pedal and brake;
  • the running data of the vehicle during the driving process of the preset driving path may also include driving video.
  • the driving trajectory of the vehicle may be determined according to the driving video.
  • a time window needs to be established, and the running data of the vehicle before and after the change of the running track of the vehicle is acquired within the time window;
  • the running data of the vehicle are also different;
  • the preprocessing of the running data of the vehicle may be preprocessing of the running data obtained within the time window; specifically, data such as the speed, acceleration, steering angle and steering acceleration of the vehicle may be preprocessed;
  • the maximum value, minimum value and average value of data such as the speed, acceleration, steering angle and steering acceleration of the vehicle can be obtained respectively; specifically, the maximum value, minimum value and average value of each obtained operation data are the target operation data.
  • the target operation data is input into a recurrent neural network, and the feature of the target operation data is extracted from the recurrent neural network;
  • the features of the target operation data can be extracted
  • the features of the target operation data are extracted;
  • the driving habit includes the driving speed of the drivable road and the steering angle of the drivable intersection.
  • the control feature of the vehicle is preset according to the feature, and the control feature may include the driving speed of the drivable road and the steering angle of the drivable intersection;
  • the driving habits of the driver can be obtained according to the control characteristics of the vehicle during driving.
  • in the present application, the driving habits of the driver can be effectively predicted from the operation data of the vehicle while it drives on the preset driving path, which facilitates subsequently obtaining the driving behaviour of the vehicle according to the driver's driving habits.
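The time-window feature extraction described above (maximum, minimum and average of each operation signal) can be sketched as below; the recurrent and fully connected networks that consume these features are omitted, and the signal names are illustrative assumptions:

```python
# Reduce each operation signal recorded in the time window to its
# (max, min, mean) triple -- the "target operation data" of the text.
def window_features(window):
    """window: dict mapping signal name -> list of samples in the window."""
    feats = {}
    for name, values in window.items():
        feats[name] = (max(values), min(values), sum(values) / len(values))
    return feats

feats = window_features({
    "speed": [4.0, 5.0, 6.0],        # m/s samples inside the window
    "steer_deg": [0.0, 10.0, 20.0],  # steering-angle samples
})
```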
  • Step 2 according to the driving strategy and the driving habit of the driver, obtain in real time an initial image of the environment around the preset driving path during the process of the vehicle traveling on the preset driving path;
  • the obtained initial image corresponds to the driving strategy of the vehicle and the driving habits of the driver;
  • depending on the driving strategy and driving habits, the number of initial images of the surrounding environment of the preset driving path obtained by the vehicle, as well as the viewing angle and pixels of the images, differ.
  • FIG. 4 is a schematic flowchart of acquiring a target top view provided by an embodiment of the present invention. The details are as follows:
  • the target top view corresponding to the initial image is calculated and obtained by a nonlinear difference correction algorithm, including:
  • before acquiring the correspondence between the top view of the initial image and the initial image, the method further includes: acquiring the top view of the initial image;
  • FIG. 5 is a schematic diagram of a top view of acquiring an initial image
  • the specific algorithm for obtaining the correspondence between the top view and the initial image by setting the perspective matrix M is [X, Y, k]ᵀ = M · [x, y, 1]ᵀ, where:
  • (x, y) are the coordinates of the point in the top view;
  • (X/k, Y/k) are the coordinates of the corresponding point in the de-distorted image.
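The perspective-matrix correspondence can be illustrated with a minimal sketch; the matrix values below are arbitrary placeholders, not a calibration from the patent:

```python
import numpy as np

# Illustrative 3x3 perspective matrix M (placeholder values).
M = np.array([[1.0, 0.2, 30.0],
              [0.0, 1.5, 10.0],
              [0.0, 0.001, 1.0]])

def top_view_to_image(x, y):
    # [X, Y, k]^T = M . [x, y, 1]^T ; the corresponding point in the
    # de-distorted image is (X/k, Y/k).
    X, Y, k = M @ np.array([x, y, 1.0])
    return X / k, Y / k

u, v = top_view_to_image(100.0, 200.0)
```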
  • the obtaining the correspondence between the top view of the initial image and the initial image based on the nonlinear difference correction algorithm includes:
  • for the first pixel point of the top view obtained above, the second pixel point corresponding to the first pixel point is found among the pixel points of the initial image; the correspondence between the first pixel point and the second pixel point constitutes the correspondence between the top view of the initial image and the initial image;
  • the correspondence between the top view of the initial image and the initial image may be obtained directly based on the nonlinear difference correction algorithm, and the correspondence may include the coordinate points corresponding to the top view and the initial image; in this way, the target top view of the initial image can be quickly acquired in this application.
  • the target coordinate point corresponding to the target top view can be directly found in the initial image based on the obtained corresponding relationship.
  • a top view of the target corresponding to the initial image can be directly constructed based on the obtained target coordinate points.
  • FIG. 6 is another schematic flowchart of acquiring a target top view provided by an embodiment of the present invention. The details are as follows:
  • S601 obtain a target image based on the initial image, where the target image includes an area image that overlaps with the region where the target top view is located;
  • the vehicle can obtain several top-view images corresponding to the initial images during the driving process.
  • the same area may include multiple top-view images from different perspectives of the front-view camera;
  • the target image may be a top-view image, among the multiple top-view images, that simultaneously covers the same object or the same area (specifically, it may include an area image that overlaps with the region where the target top view is located);
  • the area image of the target image coincides with the top view of the target.
  • the target image may also appear multiple times
  • S605 determine whether the number of times the target image appears is greater than or equal to a preset second threshold
  • the number of times the target image appears may be greater than or equal to a preset second threshold; the preset second threshold may be 50 times;
  • when the number of occurrences of the target image is less than the preset second threshold, the target image can be regarded as invalid; it can be discarded or re-acquired until the number of occurrences of the top-view image exceeds the preset second threshold;
  • the Gaussian algorithm can be used to extract the feature points of the regional image
  • Gaussian blurring can be performed on the target image first, and the difference-of-Gaussians (DoG) operator can then be obtained by subtracting different Gaussian blurring results: L(x, y, σ) = G(x, y, σ) * I(x, y) and D(x, y, σ) = L(x, y, kσ) − L(x, y, σ), where G(x, y, σ) is the Gaussian kernel;
  • (x, y) represents the spatial coordinates;
  • I(x, y) represents the pixel value at (x, y);
  • L(x, y, σ) represents the scale space of the two-dimensional image;
  • σ represents the smoothness parameter of the image;
  • D(x, y, σ) represents the Gaussian difference scale space; k represents the scale coefficient.
  • the method of re-positioning to determine the position of the feature point may include: performing curve fitting on the DoG function, and using Taylor series expansion to obtain the exact position;
  • the algorithm for obtaining the position of the feature point by using the Taylor series is as follows: D(x̂) ≈ D + (∂D/∂x)ᵀx̂ + (1/2)x̂ᵀ(∂²D/∂x²)x̂, whose extreme point is x̂ = −(∂²D/∂x²)⁻¹(∂D/∂x);
  • the size and direction information of the matching target can be obtained, and then the position of the real extreme point can be determined.
  • m(x, y) = √((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²) represents the gradient value at (x, y);
  • θ(x, y) = arctan((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))) represents the gradient direction at (x, y);
  • L represents the scale space value at the coordinate position of the key point.
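The gradient magnitude and direction defined above can be computed with a small sketch; the finite differences operate on an illustrative patch of scale-space values, not data from the patent:

```python
import math

def gradient(L, x, y):
    """m(x, y) and theta(x, y) from finite differences of L."""
    dx = L[y][x + 1] - L[y][x - 1]
    dy = L[y + 1][x] - L[y - 1][x]
    m = math.hypot(dx, dy)                    # gradient value at (x, y)
    theta = math.degrees(math.atan2(dy, dx))  # gradient direction
    return m, theta

# Illustrative 3x3 patch of scale-space values around a key point.
L = [[0.0, 0.0, 0.0],
     [0.0, 1.0, 4.0],   # horizontal difference dx = 4 - 0 = 4
     [0.0, 4.0, 0.0]]   # vertical difference  dy = 4 - 0 = 4
m, theta = gradient(L, 1, 1)
```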
  • FIG. 7 is a schematic structural diagram of obtaining the position of the extreme point
  • a new target top view can be obtained by matching the feature points of each area image according to their common coordinates; the target top view obtained by this method in the present application is more accurate.
  • the deep learning model preset in this specification can be a fully convolutional network model
  • a preset deep learning model, such as a fully convolutional network model, can accept an input image of any size, and then upsample through deconvolution to the same size as the input, that is, classify each pixel;
  • the output obtained is still an image; that is, the preset deep learning model segments the input image to achieve pixel-level classification and obtain the partition image;
  • FIG. 8 is a schematic diagram of classifying the pixels of an image;
  • the result size of the first layer can be changed to (1/4)² of the input;
  • the result size of the second layer can be changed to (1/8)² of the input;
  • the result size of the fifth layer can be changed to (1/16)² of the input;
  • the result size of the eighth layer can be changed to (1/32)² of the input.
  • the obtained partition image may include an image of a drivable area and an image of a non-drivable area
  • the partition image may be an image obtained by partitioning the top view of the target
  • the partition image may include the drivable area of the vehicle, for example, information such as drivable roads and drivable intersections; images of non-drivable areas may include information such as parking space lines and parking space areas.
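The pixel-level classification that yields the partition image can be illustrated as an argmax over per-class scores; the score values and the two-class setup are assumptions made for the example, not the patent's trained model:

```python
import numpy as np

# Hypothetical per-pixel class scores from a fully convolutional network,
# shape (classes, height, width): class 0 = non-drivable, 1 = drivable.
scores = np.array([
    [[0.9, 0.2], [0.8, 0.1]],  # class 0 scores
    [[0.1, 0.8], [0.2, 0.9]],  # class 1 scores
])
partition = scores.argmax(axis=0)   # the partition image
drivable_mask = partition == 1      # image of the drivable area
```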
  • each area information in the partition image is scanned to determine the drivable area of the vehicle; specifically, the drivable area includes drivable roads and drivable intersections;
  • the straight road trend identification module may be used to identify drivable roads in the drivable area
  • the intersection trend identification module may be used to identify drivable intersections in the drivable area
  • adjusting the drivable area based on the scan area and reconstructing the drivable area includes the following steps:
  • the size of the grid can be selected according to the actual situation during design; the application adopts the operation of first dilation and then erosion on the image, which can effectively remove cases in the recognition result where pixels are missing or not connected to the main body, making the obtained drivable area more precise;
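The "first expansion (dilation), then erosion" operation described above is morphological closing. A minimal sketch, assuming a binary drivable-area mask and a square 3x3 structuring element (both illustrative assumptions, not values from the application):

```python
import numpy as np

def dilate(mask, k=1):
    # Binary dilation with a (2k+1)x(2k+1) square structuring element.
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            shifted = np.zeros_like(mask)
            ys, ye = max(dy, 0), min(h + dy, h)
            xs, xe = max(dx, 0), min(w + dx, w)
            shifted[ys:ye, xs:xe] = mask[max(-dy, 0):h - max(dy, 0),
                                         max(-dx, 0):w - max(dx, 0)]
            out |= shifted
    return out

def erode(mask, k=1):
    # Erosion is dilation of the complement.
    return 1 - dilate(1 - mask, k)

# A drivable-area mask with a one-pixel hole; closing (dilate, then erode)
# fills the missing pixel without growing the region's outer boundary.
mask = np.ones((7, 7), dtype=np.int64)
mask[3, 3] = 0                       # missing pixel inside the main body
closed = erode(dilate(mask))
assert closed[3, 3] == 1
```

In practice a library routine (for example OpenCV's `morphologyEx` with `MORPH_CLOSE`) would be used; the loop above is only to make the operation explicit.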
  • the preset driving path includes at least one drivable path, specifically, may include multiple drivable paths; the above-generated path trajectory may be one of the multiple drivable paths.
  • the generating a path trajectory corresponding to the vehicle driving state information based on the drivable area may include the following steps:
  • Step 1 Based on the drivable area, determine drivable roads, drivable intersections in the drivable area, and the distribution of the drivable roads and the drivable intersections;
  • FIG. 9 is a schematic flowchart of a method for identifying a drivable road and a drivable intersection in a drivable area; specifically, as follows:
  • S901 input the drivable area into a road recognition model, and identify the drivable road in the drivable area and the information of the drivable road, and the information of the drivable road includes the width and length of the drivable road;
  • the road recognition model may be a road recognition algorithm that recognizes road lines on the road and provides location information of road markings;
  • it may be a road straight trend identification algorithm to identify information such as drivable straights in the drivable area.
  • a specific method for identifying a drivable road in a drivable area may include the following steps:
  • the road recognition result of size m*n is longitudinally projected to obtain the number of road pixels h_i in each column of pixels:
  • n represents the width of the image
  • w_h represents the number of occurrences of different values of h.
  • the value range of w_h is [0,n].
  • when w_h reaches its maximum value, the value of h at that time is recorded as h_max, which is the threshold for a column to be considered part of the road.
  • the maximum value i_max and the minimum value i_min of i are obtained, that is, the column positions of both sides of the road in the image, from which the width of the road is obtained.
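The column-projection reading above can be sketched as follows; the exact thresholding rule and the toy mask are assumptions for illustration:

```python
import numpy as np

def road_columns(road_mask):
    """Column-wise projection of an m*n binary road mask.

    h[i] counts road pixels in column i; the most frequent non-zero
    count h_max is taken as the threshold for a column to be "road",
    and [i_min, i_max] then bound the road in the image.
    (An illustrative reading of the projection step, not the exact
    patented rule.)
    """
    h = road_mask.sum(axis=0)                  # road pixels per column
    counts = np.bincount(h[h > 0])             # w_h: occurrences of each h
    h_max = counts.argmax()                    # h with the most occurrences
    cols = np.flatnonzero(h >= h_max)
    return cols.min(), cols.max()              # i_min, i_max -> road width

mask = np.zeros((10, 12), dtype=np.int64)
mask[:, 3:9] = 1                               # a road spanning columns 3..8
i_min, i_max = road_columns(mask)
assert (i_min, i_max) == (3, 8)
```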
  • S903 input the drivable area into an intersection identification model, and identify the drivable intersection in the drivable area and the type of the drivable intersection;
  • the intersection identification model may be an algorithm for identifying intersections in a road, and specifically, it may identify information such as whether a drivable intersection appears on a drivable road and what kind of intersection it is.
  • the distribution of drivable roads and drivable intersections in the drivable area can be accurately determined.
  • the drivable intersection and drivable road in the drivable area may also be determined by the mileage and heading angle of the vehicle;
  • the method for identifying drivable intersections and drivable roads in a drivable area includes the following steps:
  • the vehicle speed and steering wheel angle can be obtained according to the CAN signal
  • the running time of the vehicle can also be obtained.
  • the mileage of the vehicle can be calculated according to the speed of the vehicle and the running time of the vehicle;
  • the heading angle of the vehicle can be calculated according to the steering wheel angle of the vehicle.
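A minimal sketch of deriving mileage and heading angle from CAN speed and steering-wheel angle as described above, assuming a simple bicycle model; the wheelbase and steering-ratio values are illustrative assumptions, not values from the application:

```python
import math

def integrate_motion(samples, wheelbase=2.7, steer_ratio=16.0):
    """Accumulate mileage and heading from (speed m/s, steering-wheel
    angle rad, dt s) CAN samples using a simple bicycle model.
    wheelbase and steer_ratio are illustrative assumptions."""
    mileage, heading = 0.0, 0.0
    for speed, steer, dt in samples:
        mileage += speed * dt                       # distance = v * t
        front_wheel = steer / steer_ratio           # road-wheel angle
        heading += speed * math.tan(front_wheel) / wheelbase * dt
    return mileage, heading

# Straight driving: heading stays zero, mileage is v*t.
mileage, heading = integrate_motion([(10.0, 0.0, 1.0)] * 5)
assert mileage == 50.0 and heading == 0.0
```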
  • the 5th column or the (n-6)th column of pixels is selected according to whether the turn is a left turn or a right turn:
  • p(i) indicates whether the i-th row in the 5th column or the (n-6)th column is a road pixel; p(i) is 0 or 1, indicating whether it is a road, and the width of the road is judged accordingly;
  • P_i represents the relationship between row i and row i-1
  • θ(t) represents the deflection angle of the vehicle at time t
  • drivable roads and drivable intersections in the drivable area can also be identified, and a specific schematic diagram is shown in FIG. 10 .
  • Step 2 generating a path trajectory corresponding to the vehicle driving state information based on the distribution of the drivable roads and the drivable intersections;
  • the driving route of the vehicle may be determined based on the distribution of the drivable roads and the drivable intersections in the drivable area;
  • the path trajectory corresponding to the vehicle driving status information is generated; the present application can accurately obtain the path trajectory corresponding to the vehicle driving status information by using this method.
  • a map corresponding to the preset driving path is constructed based on the path trajectory.
  • the map may be an underground garage map; specifically, according to the generated path trajectory, a trajectory abstraction algorithm may be used to process the path trajectory, and then an underground garage map may be constructed; an underground garage map constructed based on this method is an abstract path map; in this application, the map can be applied to any scene, enabling automatic driving in underground garages without field-installed equipment; and the underground garage map is a path planning map that conforms to the driving habits of drivers.
  • the following methods can be used to process the path trajectory: specifically including:
  • Roads are often composed of the following five types: starting point, straight road, intersection, dead end and end point, among which intersections are divided into crossroads and T-junctions.
  • the current driving direction is the reference direction.
  • intersection structure including four parameters: intersection number Node, mileage Dist, intersection turning information TurnINF and corner Angle, where mileage Dist is the distance between the location and the starting point.
  • a separate driving flag PassFlag is also set up, in which "0" means to continue driving, "1" is a dead end, and it is forbidden to go forward.
  • the table shows an intersection, and the default angle is 90 degrees to the left.
  • TurnINF is set to 1
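The intersection structure described above can be sketched as a plain record; the field types are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Intersection:
    """One entry of the intersection structure described above; field
    names follow the text (Node, Dist, TurnINF, Angle, PassFlag)."""
    Node: int        # intersection number
    Dist: float      # mileage: distance from the starting point
    TurnINF: int     # turning information, e.g. 1 for a turn
    Angle: float     # corner angle, default 90 degrees to the left
    PassFlag: int    # 0: continue driving, 1: dead end, forbidden to go

node = Intersection(Node=1, Dist=35.2, TurnINF=1, Angle=90.0, PassFlag=0)
assert node.PassFlag == 0 and node.Angle == 90.0
```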
  • a target top view corresponding to the initial image is calculated through a nonlinear difference correction algorithm
  • the current path trajectory corresponding to the vehicle driving state information can be obtained by adopting the same method for obtaining the path trajectory;
  • the method further includes:
  • the spatial positions of the same points in the current path trajectory and the previously obtained path trajectory are put into correspondence to facilitate information fusion; a new path trajectory is obtained, and this method is used to verify the path trajectory to ensure a more accurate abstract map, such as the underground garage map.
  • the preset driving path can be repeatedly driven; at least three path trajectories are obtained;
  • Multi-track fusion is performed on the current path track obtained each time with the path track obtained in the previous time or a new path track to obtain a more accurate map.
  • the least squares method can be used to solve the multi-trajectory fusion; the details are as follows:
  • the driving trajectory learned for the first time is used as the reference point set X
  • the driving trajectory learned for the second time is used as the point set P to be fused.
  • the reference point set X and the point set P to be fused are:
  • E(R, t) represents the error function
  • R represents the rotation matrix
  • t represents the translation matrix
  • N_p represents the number of elements in the point set P.
  • the centroids of the reference point set X and the point set P to be fused are:
  • μ_x represents the centroid of the reference point set X
  • μ_p represents the centroid of the point set P to be fused
  • N_p represents the number of elements in the point set P to be fused.
  • X′ represents the set composed of the deviation of each element in the reference point set X from the centroid
  • P' represents the set consisting of the deviation of each element in the point set P to be fused from the centroid.
  • W represents the real matrix to be decomposed
  • p_i′^T represents the transpose of p_i′
  • U and V are unit orthogonal matrices, called the left and right singular matrices respectively;
  • V^T represents the transpose of V; σ_1, σ_2, σ_3 represent the singular values.
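The SVD-based least-squares solution sketched above (centroids, deviation sets X′ and P′, the matrix W, and the rotation from the singular matrices) can be illustrated as follows; the example data and the omission of the reflection (determinant) check are simplifications for the sketch:

```python
import numpy as np

def fit_rigid(X, P):
    """Least-squares rotation R and translation t aligning point set P to
    reference set X, minimising
    E(R, t) = (1/N_p) * sum ||x_i - (R p_i + t)||^2.
    A full Kabsch solution would also check det(R) to rule out
    reflections; that is omitted here for brevity."""
    mu_x, mu_p = X.mean(axis=0), P.mean(axis=0)      # centroids
    Xc, Pc = X - mu_x, P - mu_p                      # deviations X', P'
    W = Xc.T @ Pc                                    # matrix to decompose
    U, _, Vt = np.linalg.svd(W)
    R = U @ Vt                                       # rotation
    t = mu_x - R @ mu_p                              # translation
    return R, t

# A trajectory rotated by 90 degrees is recovered exactly.
P = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
R_true = np.array([[0.0, -1.0], [1.0, 0.0]])
X = P @ R_true.T
R, t = fit_rigid(X, P)
assert np.allclose(R, R_true) and np.allclose(t, [0.0, 0.0])
```

With R and t in hand, the point set P to be fused can be mapped onto the reference trajectory before the two trajectories are merged.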
  • before performing multi-trajectory fusion on the current path trajectory and the previously obtained path trajectory and reconstructing the underground garage map, the method further includes:
  • the preset first threshold may be 95%
  • the current path trajectory and the previously obtained path trajectory may be fused.
  • otherwise, of the current path trajectory and the previously obtained path trajectory, the trajectory whose generation process obtained a smaller number of target top views may be abandoned;
  • the matching degree between the current path trajectory and the preset driving path can be used to determine which of the current path trajectory and the previously obtained path trajectory was generated from a smaller number of target top views;
  • if so, the current path trajectory is abandoned, and the vehicle drives on the preset driving path again to obtain a new current path trajectory, so that the new current path trajectory and the previously obtained path trajectory can subsequently be used for multi-trajectory fusion to reconstruct the path trajectory;
  • the map corresponding to the preset driving path may be reconstructed subsequently according to the reconstructed path trajectory.
  • otherwise, the previously obtained path trajectory is discarded, and the vehicle drives on the preset driving path again to obtain a new current path trajectory, so as to facilitate subsequent multi-trajectory fusion of the new current path trajectory and the previously obtained current path trajectory, and reconstruction of the path trajectory;
  • the map corresponding to the preset driving path may be reconstructed subsequently according to the reconstructed path trajectory.
  • the path trajectory obtained by the above method in the present application is closer to the actual driving trajectory; it can not only improve the smoothness of vehicle control in automatic driving, but also reduce the risk of the vehicle deviating from the predetermined trajectory.
  • the embodiment of the present invention acquires, in real time, the vehicle driving state information and an initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information; according to the initial image, a target top view corresponding to the initial image is calculated by a nonlinear difference correction algorithm; the target top view is input into a preset deep learning model, and the pixel points of the target top view input into the preset deep learning model are classified to obtain a partition image, the partition image including a drivable area image and a non-drivable area image; the partition image is scanned to identify the drivable area of the vehicle; a path trajectory corresponding to the vehicle driving state information is generated based on the drivable area, where the path trajectory is one of the preset driving paths; using the technical solutions provided in the embodiments of this specification, the vehicle learns automatically while driving on the preset driving path to obtain the path trajectory, so that subsequent vehicles can automatically plan the path during automatic driving.
  • FIG. 12 is a schematic structural diagram of a path construction apparatus provided by an embodiment of the present invention. Specifically, the apparatus includes:
  • the first acquisition module 110 is configured to acquire, in real time, vehicle driving state information and an initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information when driving on a preset driving path;
  • the target top view acquisition module 120 is configured to obtain, according to the initial image, a target top view corresponding to the initial image through a nonlinear difference correction algorithm;
  • the partition image acquisition module 130 is configured to input the top view of the target into a preset deep learning model, classify the pixels of the top view of the target input into the preset deep learning model, and obtain a partition image, where the partition image includes a Driving area images and non-driving area images;
  • an identification module 140 configured to scan the partition image to identify the drivable area of the vehicle
  • the path trajectory generation module 150 is configured to generate a path trajectory corresponding to the vehicle driving state information based on the drivable area, where the path trajectory is one of the preset driving paths.
  • a map construction module is further included, configured to construct a map corresponding to the preset driving path based on the path trajectory.
  • the first obtaining module 110 includes:
  • a first acquiring unit configured to acquire, in real time, vehicle driving state information while the vehicle travels on the preset driving path, where the vehicle driving state information includes the driving strategy of the vehicle and the driving habits of the driver while traveling on the preset driving path;
  • the second acquiring unit is configured to acquire, in real time, an initial image of the surrounding environment of the preset driving path during the process of the vehicle traveling on the preset driving path according to the driving strategy and the driving habit of the driver.
  • the first obtaining unit includes:
  • a first acquisition subunit used to acquire the vehicle speed and steering wheel angle in real time
  • a first determination subunit configured to determine the forward mileage and heading angle of the vehicle according to the speed of the vehicle and the steering wheel angle
  • a second determination subunit configured to determine a driving strategy of the vehicle according to the mileage of the vehicle and the heading angle, where the driving strategy of the vehicle includes the mileage on the drivable road and whether to turn at the drivable intersection.
  • the first obtaining unit further includes:
  • a second acquisition subunit configured to acquire in real time the running data of the vehicle during the driving process of the preset driving route
  • a third acquiring subunit configured to preprocess the running data of the vehicle to acquire target running data
  • a feature extraction subunit used for inputting the target operating data into a recurrent neural network, and extracting features of the target operating data from the recurrent neural network;
  • a driver driving habit determination subunit used to input the features into a fully connected network and predict the driving habits of the driver while the vehicle travels on the preset driving path; the driving habits include the driving speed on the drivable road and the steering angle at the drivable intersection.
  • the second acquisition module is configured to acquire, in real time, the vehicle driving state information and the initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information when driving on the preset driving path again;
  • a target top view acquisition module configured to calculate and obtain a target top view corresponding to the initial image through a nonlinear difference correction algorithm according to the initial image
  • a partition image acquisition module configured to input the top view of the target into a preset deep learning model, classify the pixel points of the top view of the target input into the preset deep learning model, and obtain a partition image, where the partition image includes drivable Area images and non-drivable area images;
  • an identification module for scanning the partition image to identify the drivable area of the vehicle
  • a current path trajectory generation module configured to generate a current path trajectory corresponding to the vehicle driving state information based on the drivable area, where the path trajectory is one of the preset driving paths;
  • a map reconstruction module configured to perform multi-trajectory fusion on the current path trajectory and the previously obtained path trajectory, and reconstruct the underground garage map.
  • a coincidence degree judgment module used for judging whether the coincidence degree of the current path trajectory and the previously obtained path trajectory is greater than or equal to a preset first threshold
  • a trajectory fusion module configured to fuse the current path trajectory with the previously obtained path trajectory if the coincidence degree of the current path trajectory and the previously obtained path trajectory is greater than or equal to a preset first threshold .
  • a matching degree judgment module used for judging, if the coincidence degree of the current path trajectory and the previously obtained path trajectory is less than the preset first threshold, whether the matching degree of the current path trajectory and the preset driving path is less than the matching degree of the previously obtained path trajectory and the preset driving path;
  • a current path trajectory reconstruction module configured to regenerate the current path trajectory if so.
  • the target top view acquisition module 120 includes:
  • a target image acquisition unit configured to obtain a target image based on the initial image, where the target image includes a top view of a region image that overlaps with the region where the target top view is located;
  • a number acquisition unit used to acquire the number of times the target image appears
  • a judgment unit configured to judge whether the number of times the target image appears is greater than or equal to a preset second threshold
  • a feature point extraction unit configured to extract the feature points of the region images in each of the target images if so;
  • the feature point matching unit is used for matching the feature points of each of the regional images to reconstruct the top view of the target.
  • the target top view acquisition module 120 further includes:
  • a corresponding relationship obtaining unit configured to obtain a corresponding relationship between the top view of the initial image and the initial image based on the nonlinear difference correction algorithm, where the corresponding relationship includes the corresponding coordinate points between the top view of the initial image and the initial image;
  • a target coordinate point acquiring unit configured to acquire a target coordinate point from the initial image based on the corresponding relationship
  • a target top view construction unit configured to construct a target top view corresponding to the initial image based on the target coordinate points.
  • the path trajectory generation module 150 includes:
  • a first determining unit configured to determine, based on the drivable area, drivable roads, drivable intersections in the drivable area, and distribution of the drivable roads and the drivable intersections;
  • a path trajectory generating unit configured to generate a path trajectory corresponding to the vehicle driving state information based on the distribution of the drivable roads and the drivable intersections.
  • a scanning unit configured to scan the partition image by using a grid of preset size to obtain the drivable area and the scanning area of the vehicle;
  • An adjustment unit configured to adjust the drivable area based on the scan area, and reconstruct the drivable area.
  • the adjustment unit includes:
  • a first adjustment subunit configured to perform an expansion operation on the drivable area based on the scanning area to obtain an expanded area
  • the second adjustment sub-unit is configured to perform an erosion operation on the expansion area based on the scanning area to reconstruct the drivable area.
  • the first determining unit includes:
  • the first identification subunit is used to input the drivable area into a road recognition model, and identify the drivable road in the drivable area and the information of the drivable road, and the information of the drivable road includes the information of the drivable road. width and length;
  • a second identification subunit configured to input the drivable area into an intersection identification model, and identify the drivable intersection in the drivable area and the type of the drivable intersection;
  • a third determination subunit configured to determine the drivable road and the drivable road in the drivable area based on the information of the drivable area and the drivable road in the drivable area and the type of the drivable intersection. The distribution of driving intersections.
  • An embodiment of the present invention provides a path construction terminal, the terminal includes a processor and a memory, and the memory stores at least one instruction or at least one piece of program, and the at least one instruction or the at least one piece of program is loaded and executed by the processor to implement the path construction method described in the above method embodiment.
  • the memory can be used to store software programs and modules, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory.
  • the memory may mainly include a stored program area and a stored data area, wherein the stored program area may store an operating system, application programs required for functions, etc.; the stored data area may store data created according to the use of the device, and the like.
  • the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide processor access to the memory.
  • FIG. 13 is a schematic structural diagram of a path construction terminal according to an embodiment of the present invention.
  • the internal structure of the path construction terminal may include, but is not limited to, a processor, a network interface, and a memory, where the processor, the network interface and the memory in the path construction terminal can be connected through a bus or other means; in FIG. 13 of the embodiment of this specification, connection through a bus is taken as an example.
  • the processor (or called CPU (Central Processing Unit, central processing unit)) is the computing core and the control core of the path construction terminal.
  • Optional network interfaces may include standard wired interfaces, wireless interfaces (such as WI-FI, mobile communication interfaces, etc.).
  • Memory is a storage device in the path construction terminal, used to store programs and data. It can be understood that the memory here can be a high-speed RAM storage device or a non-volatile storage device (non-volatile memory), such as at least one disk storage device; optionally, it can also be at least one storage device located far away from the aforementioned processor.
  • the memory provides storage space, and the storage space stores the operating system of the path construction terminal, which may include but is not limited to: a Windows system (an operating system), Linux (an operating system), etc., which is not limited in the present invention;
  • one or more instructions suitable for being loaded and executed by the processor are also stored in the storage space, and these instructions may be one or more computer programs (including program codes).
  • the processor loads and executes one or more instructions stored in the memory to implement the path construction method provided by the above method embodiments.
  • Embodiments of the present invention further provide a computer-readable storage medium, where the storage medium can be set in a path construction terminal to store at least one instruction, at least one piece of program, code set or instruction set related to implementing the path construction method in the method embodiments; the at least one instruction, the at least one piece of program, the code set or the instruction set can be loaded and executed by the processor of the electronic device to implement the path construction method provided by the above method embodiments.
  • the above-mentioned storage medium may include but is not limited to: a USB flash drive, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), a removable hard disk, a magnetic disk, an optical disc, and various other media that can store program code.

Abstract

A path construction method, apparatus, terminal and storage medium, the method comprising: when driving on a preset driving path, acquiring in real time vehicle driving state information and an initial image of the environment surrounding the preset driving path corresponding to the vehicle driving state information; according to the initial image, calculating through a nonlinear difference correction algorithm a target top view corresponding to the initial image; inputting the target top view into a preset deep learning model, and classifying the pixel points of the target top view input into the preset deep learning model to obtain a partition image, the partition image being an image obtained by partitioning the target top view; scanning the partition image to identify the drivable area of the vehicle; generating, based on the drivable area, a path trajectory corresponding to the vehicle driving state information, the path trajectory being one path in the preset driving path; the vehicle obtains the path trajectory through automatic learning, which facilitates subsequent automatic path planning by vehicles during automatic driving.

Description

A path construction method, apparatus, terminal and storage medium — Technical Field
The present invention relates to the technical field of self-learning for automatic driving vehicles, and in particular to a path construction method, apparatus, terminal and storage medium.
Background Art
With the development of automotive intelligence, automatic driving is drawing ever closer. The "last mile", as the final link of automatic driving, takes place in low-speed and relatively closed scenarios, carries relatively low driving risk, and brings users considerable convenience and comfort, so it is very likely to arrive ahead of schedule.
At present, many solutions that automatically find a parking space and park in an underground garage load the garage map in advance. Such solutions require the garage map to be recorded beforehand and can only be used on a large scale once enough garage maps have been produced, which entails a high up-front cost. Another solution provides parking path planning through a garage parking guidance system; it likewise requires recognition and communication equipment to be installed in a large number of garages, and the equipment must be maintained and upgraded, requiring considerable expenditure of manpower and material resources.
Summary of the Invention
To solve the above technical problems, the present invention discloses a path construction method: while a vehicle drives on a preset driving path in an underground garage, it learns automatically to obtain a path trajectory, so that subsequent vehicles can automatically plan paths during automatic driving.
To achieve the above object, the present invention provides a path construction method, the method comprising:
when driving on a preset driving path, acquiring in real time vehicle driving state information and an initial image of the environment surrounding the preset driving path corresponding to the vehicle driving state information;
according to the initial image, calculating through a nonlinear difference correction algorithm a target top view corresponding to the initial image;
inputting the target top view into a preset deep learning model, and classifying the pixel points of the target top view to obtain a partition image, the partition image including a drivable area image and a non-drivable area image;
scanning the partition image to identify the drivable area of the vehicle;
generating, based on the drivable area, a path trajectory corresponding to the vehicle driving state information, the path trajectory being one path in the preset driving path.
In one embodiment, after generating, based on the drivable area, the path trajectory corresponding to the vehicle driving state information, the method further includes:
constructing, based on the path trajectory, a map corresponding to the preset driving path.
In one embodiment, acquiring in real time the vehicle driving state information and the initial image of the environment surrounding the preset driving path corresponding to the vehicle driving state information includes:
acquiring in real time the vehicle driving state information while the vehicle travels on the preset driving path, the vehicle driving state information including the driving strategy of the vehicle and the driving habits of the driver while the vehicle travels on the preset driving path;
acquiring in real time, according to the driving strategy and the driving habits of the driver, an initial image of the environment surrounding the preset driving path while the vehicle travels on the preset driving path.
In one embodiment, the drivable area includes drivable roads and drivable intersections, and acquiring the driving strategy of the vehicle while it travels on the preset driving path includes:
acquiring in real time the speed and steering wheel angle of the vehicle;
determining the forward mileage and heading angle of the vehicle according to the speed and steering wheel angle of the vehicle;
determining the driving strategy of the vehicle according to the forward mileage and heading angle of the vehicle, the driving strategy of the vehicle including the forward mileage on the drivable road and whether to turn at the drivable intersection.
In one embodiment, the drivable area includes drivable roads and drivable intersections; acquiring the driving habits of the driver while the vehicle travels on the preset driving path includes:
acquiring in real time the running data of the vehicle while it travels on the preset driving path;
preprocessing the running data of the vehicle to acquire target running data;
inputting the target running data into a recurrent neural network, and extracting features of the target running data from the recurrent neural network;
inputting the features into a fully connected network to predict the driving habits of the driver while the vehicle travels on the preset driving path; the driving habits include the driving speed on the drivable road and the steering angle at the drivable intersection.
In one embodiment, after generating, based on the drivable area, the path trajectory corresponding to the vehicle driving state information, the method further includes: when driving on the preset driving path again,
acquiring in real time vehicle driving state information and an initial image of the environment surrounding the preset driving path corresponding to the vehicle driving state information;
according to the initial image, calculating through a nonlinear difference correction algorithm a target top view corresponding to the initial image;
inputting the target top view into a preset deep learning model, and classifying the pixel points of the target top view input into the preset deep learning model to obtain a partition image, the partition image including a drivable area image and a non-drivable area image;
scanning the partition image to identify the drivable area of the vehicle;
generating, based on the drivable area, a current path trajectory corresponding to the vehicle driving state information, the path trajectory being one path in the preset driving path.
In one embodiment, after generating, based on the drivable area, the current path trajectory corresponding to the vehicle driving state information, the method further includes:
performing multi-trajectory fusion on the current path trajectory and the previously obtained path trajectory, and reconstructing the map corresponding to the preset driving path.
In one embodiment, before performing multi-trajectory fusion on the current path trajectory and the previously obtained path trajectory and reconstructing the map corresponding to the preset driving path, the method further includes:
judging whether the coincidence degree of the current path trajectory and the previously obtained path trajectory is greater than or equal to a preset first threshold;
if the coincidence degree of the current path trajectory and the previously obtained path trajectory is greater than or equal to the preset first threshold, fusing the current path trajectory with the previously obtained path trajectory.
In one embodiment, the method further includes:
if the coincidence degree of the current path trajectory and the previously obtained path trajectory is less than the preset first threshold, judging whether the matching degree of the current path trajectory with the preset driving path is less than the matching degree of the previously obtained path trajectory with the preset driving path;
if so, regenerating the current path trajectory.
In one embodiment, calculating, according to the initial image, the target top view corresponding to the initial image through the nonlinear difference correction algorithm includes:
obtaining a target image based on the initial image, the target image containing a top view of a region image that overlaps with the region where the target top view is located;
acquiring the number of times the target image appears;
judging whether the number of times the target image appears is greater than or equal to a preset second threshold;
if so, extracting the feature points of the region image in each target image;
matching the feature points of each region image to reconstruct the target top view.
In one embodiment, calculating, according to the initial image, the target top view corresponding to the initial image through the nonlinear difference correction algorithm further includes:
obtaining, based on the nonlinear difference correction algorithm, the correspondence between the top view of the initial image and the initial image, the correspondence including the corresponding coordinate points between the top view of the initial image and the initial image;
acquiring target coordinate points from the initial image based on the correspondence;
constructing the target top view corresponding to the initial image based on the target coordinate points.
In one embodiment, the drivable area includes drivable roads and drivable intersections, and generating, based on the drivable area, the path trajectory corresponding to the vehicle driving state information includes:
determining, based on the drivable area, the drivable roads and drivable intersections in the drivable area and the distribution of the drivable roads and the drivable intersections;
generating, based on the distribution of the drivable roads and the drivable intersections, the path trajectory corresponding to the vehicle driving state information.
In one embodiment, before identifying the drivable roads and drivable intersections in the drivable area and the distribution of the drivable roads and the drivable intersections, the method further includes:
scanning the partition image with a grid of preset size to obtain the drivable area and the scanning area of the vehicle;
adjusting the drivable area based on the scanning area, and reconstructing the drivable area.
In one embodiment, adjusting the drivable area based on the scanning area and reconstructing the drivable area includes:
performing a dilation operation on the drivable area based on the scanning area to obtain a dilated area;
performing an erosion operation on the dilated area based on the scanning area to reconstruct the drivable area.
In one embodiment, identifying the drivable roads and drivable intersections in the drivable area and the distribution of the drivable roads and the drivable intersections includes:
inputting the drivable area into a road recognition model, and identifying the drivable roads in the drivable area and the information of the drivable roads, the information of the drivable roads including the width and length of the drivable roads;
inputting the drivable area into an intersection recognition model, and identifying the drivable intersections in the drivable area and the types of the drivable intersections;
determining the distribution of the drivable roads and the drivable intersections in the drivable area based on the drivable area, the information of the drivable roads in the drivable area, and the types of the drivable intersections.
The present invention also provides a path construction apparatus, the apparatus including:
a first acquisition module, configured to acquire in real time, when driving on a preset driving path, vehicle driving state information and an initial image of the environment surrounding the preset driving path corresponding to the vehicle driving state information;
a target top view acquisition module, configured to calculate, according to the initial image, a target top view corresponding to the initial image through a nonlinear difference correction algorithm;
a partition image acquisition module, configured to input the target top view into a preset deep learning model and classify the pixel points of the target top view input into the preset deep learning model to obtain a partition image, the partition image including a drivable area image and a non-drivable area image;
an identification module, configured to scan the partition image and identify the drivable area of the vehicle;
a path trajectory generation module, configured to generate, based on the drivable area, a path trajectory corresponding to the vehicle driving state information, the path trajectory being one path in the preset driving path.
The present invention also provides a path construction terminal, the terminal including a processor and a memory, the memory storing at least one instruction or at least one piece of program, the at least one instruction or the at least one piece of program being loaded and executed by the processor to implement the path construction method described above.
The present invention also provides a computer-readable storage medium, the storage medium storing at least one instruction or at least one piece of program, the at least one instruction or the at least one piece of program being loaded by a processor to execute the path construction method described above.
Implementing the embodiments of the present invention has the following beneficial effects:
In the path construction method disclosed by the present invention, the vehicle learns automatically while driving on a preset driving path in an underground garage to obtain a path trajectory, so that subsequent vehicles can automatically plan paths during automatic driving.
Brief Description of the Drawings
To explain the path construction method, apparatus, system and terminal of the present invention more clearly, the drawings required for the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a path construction method provided by an embodiment of the present invention;
FIG. 2 is a schematic flowchart of acquiring a vehicle driving strategy provided by an embodiment of the present invention;
FIG. 3 is a schematic flowchart of acquiring the driving habits of a driver provided by an embodiment of the present invention;
FIG. 4 is a schematic flowchart of acquiring a target top view provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of acquiring a top view of an initial image provided by an embodiment of the present invention;
FIG. 6 is a schematic flowchart of another method of acquiring a target top view provided by an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of acquiring the positions of extreme points provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of classifying the pixels of an image provided by an embodiment of the present invention;
FIG. 9 is a schematic flowchart of a method for identifying drivable roads and drivable intersections in a drivable area provided by an embodiment of the present invention;
FIG. 10 is a schematic diagram of the identification results of drivable roads and drivable intersections in a drivable area provided by an embodiment of the present invention;
FIG. 11 is a diagram of the effect of path trajectory fusion provided by an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of a path construction apparatus provided by an embodiment of the present invention;
FIG. 13 is a schematic structural diagram of a path construction terminal provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", etc. in the specification, claims and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present invention described here can be implemented in an order other than those illustrated or described here. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or server that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or are inherent to the process, method, product or device.
The path construction method of the present application is applied to the field of automatic driving. Specifically, a vehicle is driven manually in an underground garage at least once, so that the vehicle automatically learns the paths of the underground garage and then builds an abstract path map, enabling subsequent vehicles to drive automatically according to the path map.
The path construction method of the present invention based on the above system is introduced below with reference to FIG. 1. It can be applied to the path construction of automatic driving vehicles. The present invention can be, but is not limited to being, applied to closed scenarios such as underground garages, and is a method of building a virtual map of an underground garage based on automatic path learning.
Please refer to FIG. 1, which is a schematic flowchart of a path construction method provided by an embodiment of the present invention. This specification provides the method operation steps as described in the embodiments or flowcharts, but more or fewer operation steps may be included based on routine or non-creative work. The order of steps listed in the embodiments is only one of many possible execution orders and does not represent the only execution order; the path construction method may be executed in the order shown in the embodiments or drawings. Specifically, as shown in FIG. 1, the method includes:
S101: when driving on a preset driving path, acquiring in real time vehicle driving state information and an initial image of the environment surrounding the preset driving path corresponding to the vehicle driving state information;
It should be noted that, in the embodiments of this specification, a driver may manually drive the automatic driving vehicle on the preset driving path;
The preset driving path may be an existing drivable path in a preset driving area, for example, at least one existing drivable road and drivable intersection in an underground garage;
In the embodiments of this specification, the initial image of the environment surrounding the preset driving path corresponding to the vehicle driving state information may be acquired in real time through the front-view camera of the vehicle;
Specifically, the initial image may be a two-dimensional image;
Specifically, when the driver drives the vehicle on the preset driving path, the vehicle automatically acquires in real time the vehicle driving state information and the initial image of the environment surrounding the preset driving path corresponding to the vehicle driving state information;
In the embodiments of this specification, acquiring in real time the vehicle driving state information and the initial image of the environment surrounding the preset driving path corresponding to the vehicle driving state information includes the following steps:
Step 1: acquiring in real time the vehicle driving state information while the vehicle travels on the preset driving path, the vehicle driving state information including the driving strategy of the vehicle and the driving habits of the driver while the vehicle travels on the preset driving path;
具体的,在本说明书实施例中,参考图2,其所示为本发明实施例提供的一种获取车辆行驶策略的流程示意图;
在本说明书实施例中,具体的,在获取车辆行驶策略时,所述可行驶区域可以包括可行驶道路和可行驶路口;
S201,实时获取车辆的车速和方向盘转角;
在本说明书实施例中,可以根据车辆控制器局域网络(Controller Area Network,CAN)信号获取车辆的车速和方向盘转角;
S203,根据所述车辆的车速和所述方向盘转角,确定车辆的前进里程和航向角;
在本说明书实施例中,还可以获取车辆的运行时间,具体的,可以根 据车辆的车速和车辆的运行时间计算得到车辆的前进里程;可以根据车辆的方向盘转角计算得到车辆的航向角。
S205,根据所述车辆的前进里程和所述航向角确定车辆的行驶策略,所述车辆的行驶策略包括在所述可行驶道路上的前进里程以及在所述可行驶路口是否转弯。
在本说明书实施例中,可以根据所述车辆的前进里程和所述航向角确定车辆的行驶趋势,也即是车辆的行驶策略;
具体的,车辆行驶策略可以包括在所述可行驶道路上的前进里程以及在所述可行驶路口是否转弯等车辆的行驶数据及行驶需求。
本申请中车辆行驶策略的获取方式可以精确的获取到车辆在预设驾驶路径上行驶时的行驶策略,以便于后续根据车辆的行驶策略获取车辆在所述预设驾驶路径上行驶过程中预设驾驶路径周围环境的初始图像。
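上述由车速与方向盘转角推算前进里程与航向角的过程,可用如下示意性Python代码粗略表示(基于自行车模型;其中轴距wheelbase与转向传动比steer_ratio均为假设参数,并非本申请限定的数值):

```python
import math

def dead_reckon(samples, wheelbase=2.7, steer_ratio=16.0):
    """按自行车模型粗略积分前进里程与航向角(参数仅作示意)。

    samples: [(车速 m/s, 方向盘转角 rad, 采样间隔 s), ...]
    """
    dist, heading = 0.0, 0.0
    for v, steer_wheel, dt in samples:
        dist += v * dt                            # 前进里程 = 车速 × 时间 的累加
        wheel_angle = steer_wheel / steer_ratio   # 方向盘转角换算为前轮转角
        heading += v / wheelbase * math.tan(wheel_angle) * dt
    return dist, heading
```

例如匀速直行5秒时,前进里程为车速与时间的乘积,航向角保持不变。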
具体的,在本说明书实施例中,参考图3,其所示为本发明实施例提供的一种获取驾驶员的驾驶习惯的流程示意图;
在本说明书实施例中,具体的,在获取驾驶员的驾驶习惯时,所述可行驶区域可以包括可行驶道路和可行驶路口;
S301,实时获取车辆在预设驾驶路径的行驶过程中的运行数据;
在本说明书实施例中,车辆在预设驾驶路径的行驶过程中的运行数据可以包括车辆的转向角、转向加速度、车辆的速度、车辆的加速度、油门踏板以及刹车等车辆的运行数据;
车辆在预设驾驶路径的行驶过程中的运行数据还可以包括行车视频,具体的,可以根据行车视频判断车辆的行驶轨迹。
具体的,在获取车辆的运行数据时,需要建立一个时间窗口,在时间窗口内获取车辆行驶轨迹变化前后车辆的运行数据;
具体的,在不同的时间窗口内,车辆的运行数据也不相同;
S303,对所述车辆的运行数据进行预处理,获取目标运行数据;
在本说明书实施例中,对所述车辆的运行数据进行预处理可以是对时间窗口内获得的车辆运行数据进行预处理,具体的,可以是对车辆的速度、加速度、转向角以及转向加速度等数据进行预处理;
具体的,可以分别取车辆的速度、加速度、转向角以及转向加速度等数据的最大值、最小值和平均值;具体的,获得的各个运行数据的最大值、最小值和平均值即为目标运行数据。
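上述对时间窗口内各运行数据取最大值、最小值和平均值的预处理,可用如下Python示意代码表示(函数名与数据的组织方式为示例假设):

```python
def window_features(window):
    """对时间窗口内的各运行数据取最大/最小/平均值,作为目标运行数据。

    window: {数据名: [该窗口内的采样序列]}
    """
    feats = {}
    for name, series in window.items():
        feats[name + "_max"] = max(series)
        feats[name + "_min"] = min(series)
        feats[name + "_mean"] = sum(series) / len(series)
    return feats
```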
S305,将所述目标运行数据输入循环神经网络,并从所述循环神经网络提取所述目标运行数据的特征;
在本说明书实施例中,在获取到目标运行数据之后可以提取目标运行数据的特征;
具体的,通过将所述目标运行数据输入循环神经网络,并从所述循环神经网络中通过序列到序列模型结构(seq2seq),提取所述目标运行数据的特征;
S307,将所述特征输入全连接网络,预测得到车辆在预设驾驶路径的行驶过程中驾驶员的驾驶习惯;所述驾驶习惯包括可行驶道路的行驶速度和可行驶路口的转向角度。
在本说明书实施例中,在提取到目标运行数据的特征之后,根据该特征预设车辆的控制特征,所述控制特征可以包括可行驶道路的行驶速度和可行驶路口的转向角度;
具体的,根据车辆在行驶过程中的控制特征可以获得驾驶员的驾驶习惯。
本申请中驾驶员的驾驶习惯的获取方式可以根据车辆在预设驾驶路径上行驶时的运行数据,有效地预测到驾驶员的驾驶习惯,以便于后续根据驾驶员的驾驶习惯获取车辆在所述预设驾驶路径上行驶过程中预设驾驶路径周围环境的初始图像。
步骤2,根据所述行驶策略和所述驾驶员的驾驶习惯,实时获取车辆在所述预设驾驶路径上行驶过程中预设驾驶路径周围环境的初始图像;
在本说明书实施例中,车辆的前视摄像头在获取预设驾驶路径周围环境的初始图像的过程中,得到的初始图像与车辆的行驶策略和驾驶员的驾驶习惯相对应;
当车辆的行驶策略和/或驾驶员的驾驶习惯发生变化时,其获得的预设驾驶路径周围环境的初始图像的数量以及图像的视角及像素均不相同。
S103,根据所述初始图像,通过非线性差值校正算法计算得到与所述初始图像对应的目标俯视图;
在本说明书实施例中,参考图4,其所示为本发明实施例提供的一种获取目标俯视图的流程示意图;具体的如下:
所述根据所述初始图像,通过非线性差值校正算法计算得到与所述初始图像对应的目标俯视图包括:
S401,基于所述非线性差值校正算法获取所述初始图像的俯视图与所述初始图像之间的对应关系,所述对应关系包括所述初始图像的俯视图和所述初始图像之间对应的坐标点;
在本说明书实施例中,在获取初始图像的俯视图与初始图像之间的对应关系之前还包括:获取初始图像的俯视图;
如图5,其所示为一种获取初始图像的俯视图的示意图;
具体的获取初始图像的俯视图与所述初始图像之间的对应关系的步骤如下:
对所述初始图像进行去畸变操作,以获得去畸变图像;
在所述去畸变图像中选取四个点,确定与上述四个点对应的初始图像的俯视视角的俯视图。
识别所述俯视图中每个点的坐标,通过透视矩阵求出初始图像中对应的坐标,进而获得初始图像的俯视图与所述初始图像之间的对应关系;
具体的,对应关系可以为h(m,n)=f(i,j)。
具体的,通过设置透视矩阵获得俯视图与所述初始图像之间的对应关系的具体算法如下:
设透视矩阵为M,则透视变换方程为:P=M·p
其中,
M=[[m_00, m_01, m_02], [m_10, m_11, m_12], [m_20, m_21, m_22]],p=(x, y, 1)^T,
(x,y)为俯视图中点的坐标,
P=(X, Y, k)^T,
(X/k,Y/k)为对应的去畸变后图像中点的坐标,
k=m_20·x+m_21·y+m_22,将四组点代入,求出透视矩阵M。遍历识别俯视图中每个点的坐标,通过透视矩阵M求出对应的点的坐标,再根据h(m,n)=f(i,j),得到初始图像坐标的像素信息。
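上述由四组对应点求透视矩阵M并映射坐标的过程,可用如下纯Python示意代码表示(采用约定m_22=1的常规线性求解方式,仅为示意,并非本申请的具体实现):

```python
def solve_perspective(src, dst):
    """由四组对应点 (x,y)->(X,Y) 求透视矩阵 M(约定 m_22=1)。"""
    A, b = [], []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); b.append(Y)
    n = 8
    for col in range(n):                       # 前向消元(列主元高斯消元)
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    m = [0.0] * n
    for r in range(n - 1, -1, -1):             # 回代求解 8 个未知数
        s = b[r] - sum(A[r][c] * m[c] for c in range(r + 1, n))
        m[r] = s / A[r][r]
    m.append(1.0)
    return [m[0:3], m[3:6], m[6:9]]

def warp_point(M, x, y):
    """P = M·p,再除以 k 得到去畸变图像中的坐标 (X/k, Y/k)。"""
    X = M[0][0] * x + M[0][1] * y + M[0][2]
    Y = M[1][0] * x + M[1][1] * y + M[1][2]
    k = M[2][0] * x + M[2][1] * y + M[2][2]
    return X / k, Y / k
```

例如取俯视图的四个角点与其平移后的对应点,求得的M即为平移变换,其余点按同一矩阵映射。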
在本说明书实施例中,所述基于所述非线性差值校正算法获取所述初始图像的俯视图与所述初始图像之间的对应关系,包括:
获取上述获得的俯视图的第一像素点,在初始图像的像素点中查找与第一像素点对应的第二像素点;第一像素点与第二像素点之间的对应的关系即可以是初始图像的俯视图与所述初始图像之间的对应关系;
具体的,在本说明书实施例中,后续在使用时,可以基于非线性差值校正算法直接获取到所述初始图像的俯视图与所述初始图像之间的对应关系,所述对应关系可以包括所述俯视图与所述初始图像对应的坐标点;本申请中采用这种方式可以快速获取初始图像的目标俯视图。
S403,基于所述对应关系从所述初始图像中获取目标坐标点;
在本说明书实施例中,根据获得的初始图像,基于上述获得的对应关系可以直接在初始图像中查找到目标俯视图对应的目标坐标点。
S405,基于所述目标坐标点构建与所述初始图像对应的目标俯视图。
在本说明书实施例中,基于获得的目标坐标点即可直接构建出与初始图像对应的目标俯视图。
在本说明书实施例中,参考图6,其所示为本发明实施例提供的另一种获取目标俯视图的流程示意图;具体的如下:
S601,基于所述初始图像获得目标图像,所述目标图像包含与所述目标俯视图所在区域重合的区域图像的俯视图;
在本说明书实施例中,车辆在行驶过程中可以获得若干初始图像对应的俯视图像,车辆在行驶过程中,同一区域在前视摄像头不同视角下可以包括多个俯视图像;目标图像可以是多个俯视图像中同时涵盖同一物体或同一区域的俯视图像(具体的可以为包括与目标俯视图所在区域重合的区域图像);
具体的,所述目标图像的区域图像与所述目标俯视图重合。
S603,获取所述目标图像出现的次数;
在本说明书实施例中,因车辆在行驶过程中会对某一区域或某一图像获取多次图像,因此目标图像也可以出现多次;
S605,判断所述目标图像出现的次数是否大于等于预设第二阈值;
在本说明书实施例中,目标图像出现的次数可以为大于等于预设第二阈值;预设第二阈值可以是50次;
当目标图像出现的次数小于预设第二阈值时,可以视为该目标图像无效;可以放弃使用或者重新获取直至目标俯视图像出现的次数超过预设第二阈值;
S607,若是,则提取每个所述目标图像中所述区域图像的特征点;
在本说明书实施例中,可以采用高斯算法提取区域图像的特征点;
具体的,可以先对目标图像进行高斯模糊,再将不同的高斯模糊结果相减得到差分算子(DoG):
具体的,提取特征点的具体算法如下:L(x,y,σ)=G(x,y,σ)·I(x,y)
G(x,y,σ)=1/(2πσ²)·exp(−(x²+y²)/(2σ²))
D(x,y,σ)=(G(x,y,kσ)−G(x,y,σ))·I(x,y)=L(x,y,kσ)−L(x,y,σ)
其中,(x,y),表示空间坐标;I(x,y),表示(x,y)处像素值;
L(x,y,σ),表示二维图像的尺寸空间定义;
G(x,y,σ),表示尺度可变高斯函数;
σ,表示图像的平滑程度参数;
D(x,y,σ),表示高斯差分尺度空间;k,表示尺度系数。
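上述高斯模糊与差分算子DoG的计算,可用如下纯Python示意代码表示(卷积核半径与边界取最近像素的处理方式均为示例假设):

```python
import math

def gauss_kernel(sigma, radius=3):
    """归一化的一维高斯核,用于可分离卷积。"""
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(img, sigma, radius=3):
    """可分离高斯模糊,边界取最近像素(clamp)。img 为二维列表。"""
    k = gauss_kernel(sigma, radius)
    h, w = len(img), len(img[0])
    clamp = lambda v, hi: max(0, min(hi, v))
    tmp = [[sum(k[r + radius] * row[clamp(x + r, w - 1)]
                for r in range(-radius, radius + 1))
            for x in range(w)] for row in img]                 # 水平方向
    return [[sum(k[r + radius] * tmp[clamp(y + r, h - 1)][x]
                 for r in range(-radius, radius + 1))
             for x in range(w)] for y in range(h)]             # 垂直方向

def dog(img, sigma, k=1.6):
    """高斯差分:D = L(kσ) − L(σ),即两次不同尺度模糊结果相减。"""
    a, b = blur(img, sigma), blur(img, k * sigma)
    return [[pb - pa for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]
```

对常数图像,任意尺度的高斯模糊结果不变,因此其DoG响应处处接近于零。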
将每层的DoG结果的像素与邻域像素进行比较,如果为极值点则为要找的特征点,但是由于这些极值点是离散的,且这些极值点中的部分点属于奇异点,因此,需要重新定位以确定特征点的位置;
在本说明书实施例中,重新进行定位确定特征点位置的方式可以包括:将DoG函数进行曲线拟合,使用泰勒级数展开来找到精确位置;
具体的,采用泰勒级数获取特征点位置的算法如下:
f(x)≈f(0)+f′(0)*x+(f″(0)/2)*x²
其中,x表示位置变量,
f′(0),表示f(x)在x=0的一阶导数值,
f″(0),表示f(x)在x=0的二阶导数值;
确定位置后,即可得到匹配目标的尺寸和方向信息,进而确定真正的极值点的位置。
m(x,y)=√[(L(x+1,y)−L(x−1,y))²+(L(x,y+1)−L(x,y−1))²]
θ(x,y)=arctan[(L(x,y+1)−L(x,y−1))/(L(x+1,y)−L(x−1,y))]
其中,m(x,y),表示(x,y)处的梯度值,
θ(x,y),表示(x,y)处的梯度方向,
L,表示关键点坐标位置的尺度空间值。
具体的,参见图7,其所示为一种获取极值点位置的结构示意图,
从图中可以看出真正的极值点和检测到的极值点。
S609,将每个所述区域图像的特征点进行匹配,重构所述目标俯视图;
在本说明书实施例中,根据图像特征点共坐标的方法,将每个区域图像的特征点进行匹配,可得到一个新的目标俯视图;本申请中采用这种方式获得的目标俯视图更加精确。
S105,将所述目标俯视图输入预设的深度学习模型,对输入预设的深度学习模型的所述目标俯视图的像素点进行分类,得到分区图像,所述分区图像包括可行驶区域图像和非可行驶区域图像;
在本说明书中预设的深度学习模型可以是全卷积网络模型;
在本说明书实施例中,预设的深度学习模型,如全卷积网络模型可以接受任意大小的图像的输入,再通过反卷积得到与输入相同尺寸的上采样结果,也即是对每个像素都进行分类;
且本申请中将目标俯视图输入预设的深度学习模型之后,得到的输出结果仍然为图像;也即是预设的深度学习模型只是对输入的图像进行语义分割,实现像素级的分类,得到分区图像;
具体的,如图8,其所示为一种对图像的像素进行分类的示意图;
具体的为:将一张尺寸为H×W(其中,H代表图像的高,W代表图像的宽)的图像输入预设的深度学习模型之后,经过卷积(conv)、池化(pool)、非线性(nonlinearity)等操作,第一层结果大小可以变为输入的1/4²,第二层结果大小可以变为输入的1/8²,……,第五层结果大小可以变为输入的1/16²,……,第八层结果大小可以变为输入的1/32²,随着conv和pooling次数越来越多,图像尺寸越来越小,最小的一层为原图的1/32²,此时需要进行上采样(upsampling),将结果放大至原图大小H×W并输出图像(pixelwise output+loss),最终得到的图像会根据训练的图像对每个目标俯视图的像素点进行分类;
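上述特征图尺寸随池化逐层缩小的过程,可用如下示意代码粗略表示(池化次数与每次2×2的缩小倍率为常见全卷积网络设置的示例假设):

```python
def pool_sizes(h, w, n_pools=5):
    """连续 n_pools 次 2×2 池化后各层特征图的尺寸。

    5 次池化后,最小一层的边长为原图的 1/32。
    """
    sizes = []
    for _ in range(n_pools):
        h, w = h // 2, w // 2
        sizes.append((h, w))
    return sizes
```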
在本说明书实施例中,得到分区图像可以包括可行驶区域图像和非可行驶区域图像,分区图像可以为对所述目标俯视图进行分区后的图像,分区图像可以包括车辆可行驶区域,例如:可行驶道路和可行驶路口等信息;非可行驶区域图像可以包括车位线、停车位区域等信息。
S107,扫描所述分区图像,识别出车辆的可行驶区域;
在本说明书实施例中,扫描分区图像中的各个区域信息,确定出车辆的可行驶区域;具体的,所述可行驶区域包括可行驶道路和可行驶路口;
具体的,可以采用直道趋势识别模块识别出可行驶区域中的可行驶道路,采用路口趋势识别模块识别出可行驶区域中的可行驶路口;
在本说明书实施例中,在识别出所述可行驶区域中的可行驶的道路和可行驶路口之前,还包括步骤:
采用预设尺寸的方格对所述分区图像进行扫描,得到车辆的可行驶区域和扫描区域;
基于所述扫描区域,对所述可行驶区域进行调整,重构所述可行驶区域;
在本说明书实施例中,所述基于所述扫描区域,对所述可行驶区域进行调整,重构所述可行驶区域包括步骤:
基于所述扫描区域,对所述可行驶区域进行膨胀操作,得到膨胀区域;
基于所述扫描区域,对所述膨胀区域进行腐蚀操作,重构所述可行驶区域;
在本说明书实施例中,具体的,在设计时可以根据实际情况选择方格的大小,且本申请对图像采用先膨胀再腐蚀的操作,可以有效去除识别结果中像素缺失或未与主体连接的情况;使获得的可行驶区域更加精确;
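上述“先膨胀再腐蚀”(即形态学闭运算)去除像素缺失的操作,可用如下纯Python示意代码表示(采用3×3邻域,边界按裁剪窗口处理,均为示例假设):

```python
def _window(mask, y, x):
    """取 (y, x) 处的 3×3 邻域像素,边界处按裁剪后的窗口处理。"""
    h, w = len(mask), len(mask[0])
    return [mask[yy][xx]
            for yy in range(max(0, y - 1), min(h, y + 2))
            for xx in range(max(0, x - 1), min(w, x + 2))]

def dilate(mask):
    """3×3 膨胀:窗口内存在可行驶像素即置 1。"""
    return [[1 if any(_window(mask, y, x)) else 0
             for x in range(len(mask[0]))] for y in range(len(mask))]

def erode(mask):
    """3×3 腐蚀:窗口内全部为可行驶像素才置 1。"""
    return [[1 if all(_window(mask, y, x)) else 0
             for x in range(len(mask[0]))] for y in range(len(mask))]

def close_region(mask):
    """先膨胀再腐蚀(闭运算),填补识别结果中的像素缺失。"""
    return erode(dilate(mask))
```

例如可行驶区域内部孤立的一个缺失像素,经闭运算后会被填补,而区域整体轮廓不变。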
S109,基于所述可行驶区域生成与所述车辆行驶状态信息对应的路径轨迹,所述路径轨迹为所述预设驾驶路径中的一条路径。
在本说明书实施例中,预设驾驶路径中包括至少一条可行驶路径,具体的,可以包括多条可行驶路径;上述生成的路径轨迹可以为多条可行驶路径中的一条。
在本说明书实施例中,所述基于所述可行驶区域生成与所述车辆行驶状态信息对应的路径轨迹;可以包括以下步骤:
步骤1,基于所述可行驶区域,确定所述可行驶区域中的可行驶道路、可行驶路口以及所述可行驶道路和所述可行驶路口的分布;
在本说明书实施例中,如图9,其所示为一种对可行驶区域的可行驶道路和可行驶路口的识别方法的流程示意图;具体的,如下:
S901,将所述可行驶区域输入道路识别模型,识别出所述可行驶区域中的可行驶道路以及可行驶道路的信息,所述可行驶道路的信息包括可行驶道路的宽度和长度;
在本说明书实施例中,道路识别模型可以是一种识别出道路上的道路线,并给出道路标线的位置信息的道路识别算法;
具体的,在本申请中可以是一种道路的直道趋势识别算法,以识别出可行驶区域的可行驶直道等信息。
具体的,在本说明书实施例中,识别可行驶区域中的可行驶道路的具体方法可以包括以下步骤:
将m×n大小的道路识别结果进行纵向投影,得到每列像素中道路像素的个数h_i:
h_i=h(i),i=0,1,2,……,n
其中,m表示图像的高,n表示图像的宽;
h_i,表示第i列像素中道路像素的个数。
h_i值域为[0,m],然后对h_i的值h进行统计,不同值的h出现的次数w_h为:w_h=w(h)
其中,w_h,表示不同h值出现的次数。
w_h值域为[0,n],当w_h取得最大值时,记录此时h的大小为h_max,即是满足“列”成为道路的阈值,令
h_i=h(i)>h_max
求出i的最大值i_max和最小值i_min,即为道路两边在图像中的列位置,也即是道路宽度。
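上述纵向投影并按阈值h_max求道路左右列位置的方法,可用如下示意代码表示(出现平票时取首个众数,为实现上的示例假设):

```python
from collections import Counter

def road_span(mask):
    """纵向投影得到 h_i,取出现次数最多的列高度 h_max 作为阈值,
    返回满足 h_i > h_max 的最小列 i_min 与最大列 i_max(即道路两边)。"""
    m, n = len(mask), len(mask[0])
    h = [sum(mask[y][i] for y in range(m)) for i in range(n)]  # 每列道路像素数
    counts = Counter(h)
    h_max = max(counts, key=counts.get)     # w_h 取最大值时对应的 h
    cols = [i for i in range(n) if h[i] > h_max]
    return (min(cols), max(cols)) if cols else None
```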
S903,将所述可行驶区域输入路口识别模型,识别出所述可行驶区域中的可行驶路口以及可行驶路口的类型;
在本说明书实施例中,路口识别模型可以是对道路中的路口进行识别的算法,具体的,可以识别出可行驶道路上是否出现可行驶的路口以及是何种路口等信息。
具体的,在本说明书实施例中,识别可行驶区域中的可行驶路口时,基于上述识别出的可行驶道路,在车辆行驶途中,并不是每个路口都需要转弯,当行驶到不需要转弯的路口时,按照直道模式进行识别;若车辆需要转弯则表示为可行驶路口。
S905,基于所述可行驶区域以及所述可行驶区域中的可行驶道路的信息和可行驶路口的类型,确定所述可行驶区域中所述可行驶道路和所述可行驶路口的分布。
在本说明书实施例中,基于上述可行驶道路和可行驶路口的识别方法,可以精确判断可行驶区域中可行驶道路和可行驶路口的分布情况。
在本说明书的另一实施例中,还可以通过车辆的行驶里程和航向角来确定可行驶区域中的可行驶路口和可行驶道路;
具体的,识别可行驶区域中可行驶路口和可行驶道路的方法,包括以下步骤:
实时获取车辆的车速和方向盘转角;
在本说明书实施例中,可以根据CAN信号获取车辆的车速和方向盘转角;
根据所述车辆的车速和所述方向盘转角,确定车辆的前进里程和航向角;
在本说明书实施例中,还可以获取车辆的运行时间,具体的,可以根据车辆的车速和车辆的运行时间计算得到车辆的前进里程;可以根据车辆的方向盘转角计算得到车辆的航向角。
根据所述车辆的前进里程和所述航向角确定可行驶区域中的可行驶路口和可行驶道路;
在本说明书实施例中,根据车辆的前进里程,当车辆行驶到路口区域时,根据左转或右转选取第5列或第(n-6)列像素:
p_i=p(i),i=0,1,2,……,m
p(i)为0或1时,即表示是否为道路,判断道路的宽度;
P_i=p_i-p_{i-1},i=1,2,……,m
其中,p_i,表示第5列或第(n-6)列中的第i行是否为道路像素,
p(i)为0或1,即表示是否为道路,
P_i表示第i行和第i-1行之间的关系;
当P_i>=0时则表示像素道路连续(也即是为可行驶道路),当P_i=-1时表示像素道路间断,连续出现P_i=-1的次数为t_1,当t_1<T_{-1}时忽略间断(其中,T_{-1}为像素间断阈值);按照连续处理,P_i置为0,连续出现P_i=0的次数为t_0,当t_0>T_0时(其中,T_0为路口宽度阈值),该帧图像中出现路口;连续检测到路口的帧数为t,当t>T_frame(其中,T_frame表示连续检测到路口的帧数的阈值)时则判定路口出现,记录下此时车辆的方向θ_s,并开始进行转弯,车辆偏转角变化Δθ为:
Δθ=abs(θ_s-θ(t))
其中,θ(t)表示t时刻车辆的偏转角,
abs,表示取绝对值函数;
当Δθ>0.8*θ_i时,车辆转弯完成,进入直道趋势识别模块,θ_i为自学习时记录的路口转角大小。
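上述按零像素段长度忽略短间断、判定路口的思路,可用如下简化的示意代码表示(阈值t_ignore、t_junction为假设参数,且未包含多帧确认与转角判断的逻辑):

```python
def is_junction(col, t_ignore=2, t_junction=4):
    """col 为某一列的 0/1 道路像素序列。

    短于 t_ignore 的间断按连续道路处理(忽略);
    若存在长度超过 t_junction 的间断,则判定该帧出现路口。
    """
    gaps, cur = [], 0
    for v in col:
        if v == 0:
            cur += 1
        else:
            if cur > t_ignore:      # 仅保留不可忽略的间断
                gaps.append(cur)
            cur = 0
    if cur > t_ignore:
        gaps.append(cur)
    return any(g > t_junction for g in gaps)
```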
基于上述方法同样可以识别出可行驶区域中的可行驶道路和可行驶路口,具体的示意图如图10所示。
步骤2,基于所述可行驶道路和所述可行驶路口的分布生成与所述车辆行驶状态信息对应的路径轨迹;
在本说明书实施例中,可以基于可行驶区域中所述可行驶道路和所述可行驶路口的分布,确定车辆的行驶路线;
根据车辆的行驶路线以及车辆行驶状态信息,生成与所述车辆行驶状态信息对应的路径轨迹;本申请采用该方法可以精确地获得与所述车辆行驶状态信息对应的路径轨迹。
在本说明书实施例中,所述基于所述可行驶区域生成与所述车辆行驶状态信息对应的路径轨迹之后还可以包括:
基于所述路径轨迹构建与所述预设驾驶路径对应的地图。
在本说明书实施例中,地图可以是地下车库地图;具体的,可以根据生成的路径轨迹,采用轨迹抽象化算法对路径轨迹进行处理,进而构建出地下车库地图;基于该方法构建的地下车库地图为抽象的路径地图;本申请中该地图可以适用于任何场景,支持无需场端设备的地下车库自动驾驶;且该地下车库地图是一种符合驾驶员行驶习惯的路径规划地图。
在本说明书一个具体的实施例中,可以采用如下方法对路径轨迹进行处理:具体的包括:
道路通常由以下5种类型组成:起点,直道,路口,死路和终点,其中,路口分为十字路口和丁字路口。当车辆行驶至路口时,需要进行转弯决策来判断车辆的前进路径,默认约定当前的行驶方向为参考方向。定义路口结构体,包括四个参数:分别为路口编号Node,里程Dist,路口转向信息TurnINF和转角Angle,其中,里程Dist为所在位置与起点的距离。另外,还单独设立行驶标志位PassFlag,其中,“0”为继续行驶,“1”为死路,禁止前行。
具体的下表中所示为一个具体实施例中的行驶情况:
(原文此处为表格图片:示出十字路口各进入方向对应的路口编号Node、里程Dist、转向信息TurnINF、转角Angle及行驶标志位PassFlag的取值。)
表所示为一个十字路口,转角Angle默认为左转90度。当从①方向进入时,PassFlag=0表示可以继续前进,进入后TurnINF置1,路口编号Node更新为上一路口编号加1,由于左转为死路,所以该处PassFlag置为1;从②方向进入,PassFlag置1,TurnINF置2,因为是同一路口,Node不变, 由于左转为死路,所以该处PassFlag再次置为1;从③方向进入同②,从④方向进入,即该十字路口为死路,返回出该十字路口回到上一路口,最终PassFlag=1,路口编号Node为上一路口编号。
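上述路口结构体及遇死路时的标志位更新,可用如下Python示意代码表示(字段与更新逻辑为对文中描述的示例化假设):

```python
from dataclasses import dataclass

@dataclass
class Junction:
    node: int             # 路口编号 Node
    dist: float           # 里程 Dist:所在位置与起点的距离
    turn_inf: int = 0     # 路口转向信息 TurnINF
    angle: float = 90.0   # 转角 Angle,默认左转 90 度
    pass_flag: int = 0    # 行驶标志位:0 继续行驶,1 死路禁止前行

def enter_dead_end(j):
    """从某方向进入后发现死路:TurnINF 递增,PassFlag 置 1(示意)。"""
    j.turn_inf += 1
    j.pass_flag = 1
    return j
```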
在本说明书实施例中,所述基于所述可行驶区域生成与所述车辆行驶状态信息对应的路径轨迹之后还包括:再次在预设驾驶路径上行驶时,包括以下步骤:
实时获取车辆行驶状态信息以及所述车辆行驶状态信息对应的预设驾驶路径周围环境的初始图像;
在本说明书实施例中,具体的获取方法与上述相同;
根据所述初始图像,通过非线性差值校正算法计算得到与所述初始图像对应的目标俯视图;
将所述目标俯视图输入预设的深度学习模型,对输入预设的深度学习模型的所述目标俯视图的像素点进行分类,得到分区图像,所述分区图像包括可行驶区域图像和非可行驶区域图像;
扫描所述分区图像,识别出车辆的可行驶区域;
基于所述可行驶区域生成与所述车辆行驶状态信息对应的当前路径轨迹,所述路径轨迹为所述预设驾驶路径中的一条路径;
在本说明书实施例中,当驾驶员驾驶车辆在预设驾驶路径上行驶时,每次得到的路径轨迹均不相同;
具体的,当车辆再次在预设驾驶路径上行驶时,采用同样的获取路径轨迹的方法可以获得与所述车辆行驶状态信息对应的当前路径轨迹;
在本说明书实施例中,所述基于所述可行驶区域生成与所述车辆行驶状态信息对应的当前路径轨迹之后还包括:
对所述当前路径轨迹和前次获得的所述路径轨迹进行多轨迹融合,重构与所述预设驾驶路径对应的地图;
在本说明书实施例中,将当前路径轨迹和前次获得的所述路径轨迹中同一点的空间位置对应起来,以便进行信息融合;得到新的路径轨迹,采用此方法对路径轨迹进行校验,以确保获得更加精确的抽象地图,例如可以为地下车库地图。
优选地,在本说明书实施例中,可以重复行驶该预设驾驶路径;得到至少三条路径轨迹;
将每次获得的当前路径轨迹与前次获得的路径轨迹或新的路径轨迹进行多轨迹融合,以获得更加精确的地图。
在本说明书一个具体的实施例中,可以采用最小二乘法求解,以实现多轨迹融合;具体的如下:
例如,第一次学习的行驶轨迹作为基准点集X,第二次学习的行驶轨迹作为待融合点集P。基准点集X和待融合点集P分别为:
X={x 1,x 2,......,x n}
P={p 1,p 2,......,p n}
将点集P旋转和平移,得到目标误差函数:
E(R,t)=(1/N_p)·Σ_{i=1}^{N_p}‖x_i−(R·p_i+t)‖²
其中,E(R,t),表示误差函数,R表示旋转矩阵,
t表示平移矩阵,N_p表示点集P中的元素的个数。
具体的,基准点集X和待融合点集P的质心为:
μ_x=(1/N_x)·Σ_{i=1}^{N_x}x_i
μ_p=(1/N_p)·Σ_{i=1}^{N_p}p_i
其中,μ_x表示基准点集X的质心,N_x表示基准点集X中元素的个数,
μ_p表示待融合点集P的质心,
N_p表示待融合点集P中元素的个数。
所以得:
X′={x ix}={x 1x,x 2x,……,x nx}={x i′}
P′={p ip}={p 1p,p 2p,……,p np}={p i′}
其中,X′表示基准点集X中每个元素与质心的偏差组成的集合,
P′表示待融合点集P中每个元素与质心的偏差组成的集合。
利用奇异值分解(SVD)分解求最优变换:
W=Σ_{i=1}^{n}x_i′·p_i′^T,对W作奇异值分解:W=U·diag(σ_1,σ_2,σ_3)·V^T
其中,W表示待分解实数矩阵,p_i′^T表示p_i′的转置,
U和V是单位正交矩阵,分别称为左右奇异矩阵;
V^T表示V的转置,σ_1,σ_2,σ_3表示奇异值。
当rank(W)=3时,E(R,t)的最优解唯一,可解出U,V的值。
所以,旋转矩阵R和平移矩阵t分别为:
R=UV T
t=μ x-Rμ P
将旋转矩阵R和平移矩阵t代入上述目标误差函数E(R,t),当求得的目标误差函数E(R,t)足够收敛时,两个点集的融合效果如图11所示。
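专利中对点集采用SVD求最优旋转与平移;对于二维轨迹点,存在等价的闭式解,可用如下纯Python示意代码表示(此处为二维简化示意,并非本申请所述的三维SVD实现):

```python
import math

def align_2d(X, P):
    """二维刚体配准闭式解:求旋转角 θ 与平移 t,
    使 Σ‖x_i − (R·p_i + t)‖² 最小(X 为基准点集,P 为待融合点集)。"""
    n = len(X)
    mx = (sum(p[0] for p in X) / n, sum(p[1] for p in X) / n)   # 质心 μ_x
    mp = (sum(p[0] for p in P) / n, sum(p[1] for p in P) / n)   # 质心 μ_p
    sc = ss = 0.0
    for (xx, xy), (px, py) in zip(X, P):
        ax, ay = xx - mx[0], xy - mx[1]        # x_i′:去质心后的基准点
        bx, by = px - mp[0], py - mp[1]        # p_i′:去质心后的待融合点
        sc += ax * bx + ay * by
        ss += ay * bx - ax * by
    theta = math.atan2(ss, sc)                 # 最优旋转角
    c, s = math.cos(theta), math.sin(theta)
    t = (mx[0] - (c * mp[0] - s * mp[1]),      # t = μ_x − R·μ_p
         mx[1] - (s * mp[0] + c * mp[1]))
    return theta, t
```

对由已知旋转和平移生成的无噪声点对,该闭式解可精确恢复出原始变换。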
在本说明书实施例中,所述对所述当前路径轨迹和前次获得的所述路径轨迹进行多轨迹融合,重构所述地下车库地图之前,还包括:
H1,判断所述当前路径轨迹与前次获得的所述路径轨迹的重合度是否大于等于预设第一阈值;
在本说明书实施例中,预设第一阈值可以为95%;
H2,若所述当前路径轨迹与前次获得的所述路径轨迹的重合度大于等于预设第一阈值,则将所述当前路径轨迹与前次获得的所述路径轨迹进行融合。
在本说明书实施例中,可以在当前路径轨迹与前次获得的所述路径轨迹的重合度大于等于预设第一阈值时,将所述当前路径轨迹与前次获得的所述路径轨迹进行融合。
H3,若所述当前路径轨迹与前次获得的所述路径轨迹的重合度小于预设第一阈值,则判断所述当前路径轨迹与所述预设驾驶路径的匹配度是否小于或大于前次获得的所述路径轨迹与所述预设驾驶路径的匹配度;
在本说明书实施例中,若所述当前路径轨迹与前次获得的所述路径轨迹的重合度小于预设第一阈值,则可以在当前路径轨迹和前次获得的所述路径轨迹中,放弃生成过程中获得的目标俯视图数量较少的一个路径轨迹;
具体的,可以通过比较当前路径轨迹与所述预设驾驶路径的匹配度,确定当前路径轨迹和前次获得的所述路径轨迹中,生成过程中获得的目标俯视图数量较少的一个路径轨迹;
H4,若是,则重新生成所述当前路径轨迹。
具体的,在本说明书实施例中,若所述当前路径轨迹与所述预设驾驶路径的匹配度小于前次获得的所述路径轨迹与所述预设驾驶路径的匹配度,则放弃当前路径轨迹,再次在预设驾驶路径上行驶,以再次获得新的当前路径轨迹;以便于后续采用该新的当前路径轨迹与前次获得的所述路径轨迹进行多轨迹融合,重构路径轨迹;
具体的,后续还可以根据重构的路径轨迹,重构与所述预设驾驶路径对应的地图。
在本说明书另一实施例中,若所述当前路径轨迹与所述预设驾驶路径的匹配度大于前次获得的所述路径轨迹与所述预设驾驶路径的匹配度,则放弃前次获得的所述路径轨迹,再次在预设驾驶路径上行驶,以再次获得新的当前路径轨迹;以便于后续采用该新的当前路径轨迹与前次获得的所述当前路径轨迹进行多轨迹融合,重构路径轨迹;
具体的,后续还可以根据重构的路径轨迹,重构与所述预设驾驶路径对应的地图。
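上述重合度与匹配度的判断流程,可用如下示意代码概括(阈值0.95及返回值命名均为示例假设):

```python
def fuse_decision(overlap, cur_match, prev_match, threshold=0.95):
    """重合度 ≥ 阈值则融合两条轨迹;否则比较与预设驾驶路径的匹配度,
    放弃匹配度较低的一条(匹配度低意味着需重新采集该轨迹)。"""
    if overlap >= threshold:
        return "fuse"
    if cur_match < prev_match:
        return "regenerate_current"   # 放弃当前路径轨迹,重新行驶采集
    return "discard_previous"         # 放弃前次获得的路径轨迹
```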
本申请中采用上述方法得到的路径轨迹与实际的行车轨迹更接近;不仅能提高自动驾驶中车辆控制的平顺性,而且能降低车辆偏离预定轨迹的风险。
由上述本发明提供的路径构建方法、装置、终端及存储介质的实施例可见,本发明实施例在预设驾驶路径上行驶时,实时获取车辆行驶状态信息以及所述车辆行驶状态信息对应的预设驾驶路径周围环境的初始图像;根据所述初始图像,通过非线性差值校正算法计算得到与所述初始图像对应的目标俯视图;将所述目标俯视图输入预设的深度学习模型,对输入预设的深度学习模型的所述目标俯视图的像素点进行分类,得到分区图像,所述分区图像包括可行驶区域图像和非可行驶区域图像;扫描所述分区图像,识别出车辆的可行驶区域;基于所述可行驶区域生成与所述车辆行驶状态信息对应的路径轨迹,所述路径轨迹为所述预设驾驶路径中的一条路径;利用本说明书实施例提供的技术方案,车辆在预设驾驶路径上行驶的过程中,自动学习以获得路径轨迹,以便于后续车辆在自动驾驶过程中自动规划路径。
本发明实施例还提供了一种路径构建装置,如图12所示,其所示为本发明实施例提供的一种路径构建装置的结构示意图;具体的,所述的装置包括:
第一获取模块110,用于在预设驾驶路径上行驶时,实时获取车辆行驶状态信息以及所述车辆行驶状态信息对应的预设驾驶路径周围环境的初始图像;
目标俯视图获取模块120,用于根据所述初始图像,通过非线性差值校正算法计算得到与所述初始图像对应的目标俯视图;
分区图像获取模块130,用于将所述目标俯视图输入预设的深度学习模型,对输入预设的深度学习模型的所述目标俯视图的像素点进行分类,得到分区图像,所述分区图像包括可行驶区域图像和非可行驶区域图像;
识别模块140,用于扫描所述分区图像,识别出车辆的可行驶区域;
路径轨迹生成模块150,用于基于所述可行驶区域生成与所述车辆行驶状态信息对应的路径轨迹,所述路径轨迹为所述预设驾驶路径中的一条路径。
在本说明书实施例中,还包括地图构建模块,用于基于所述路径轨迹构建与所述预设驾驶路径对应的地图。
在本说明书实施例中,所述第一获取模块110包括:
第一获取单元,用于实时获取车辆在预设驾驶路径上行驶过程中的车辆行驶状态信息,所述车辆行驶状态信息包括车辆在所述预设驾驶路径上行驶过程中的行驶策略和驾驶员的驾驶习惯;
第二获取单元,用于根据所述行驶策略和所述驾驶员的驾驶习惯,实时获取车辆在所述预设驾驶路径上行驶过程中预设驾驶路径周围环境的初始图像。
在本说明书实施例中,所述第一获取单元包括:
第一获取子单元,用于实时获取车辆的车速和方向盘转角;
第一确定子单元,用于根据所述车辆的车速和所述方向盘转角,确定车辆的前进里程和航向角;
第二确定子单元,用于根据所述车辆的前进里程和所述航向角确定车辆的行驶策略,所述车辆的行驶策略包括在所述可行驶道路上的前进里程以及在所述可行驶路口是否转弯。
在本说明书实施例中,所述第一获取单元还包括:
第二获取子单元,用于实时获取车辆在预设驾驶路径的行驶过程中的运行数据;
第三获取子单元,用于对所述车辆的运行数据进行预处理,获取目标运行数据;
特征提取子单元,用于将所述目标运行数据输入循环神经网络,并从所述循环神经网络提取所述目标运行数据的特征;
驾驶员的驾驶习惯确定子单元,用于将所述特征输入全连接网络,预测得到车辆在预设驾驶路径的行驶过程中驾驶员的驾驶习惯;所述驾驶习惯包括可行驶道路的行驶速度和可行驶路口的转向角度。
在本说明书实施例中,还包括:
第二获取模块,用于再次在预设驾驶路径上行驶时,实时获取车辆行驶状态信息以及所述车辆行驶状态信息对应的预设驾驶路径周围环境的初始图像;
目标俯视图获取模块,用于根据所述初始图像,通过非线性差值校正算法计算得到与所述初始图像对应的目标俯视图;
分区图像获取模块,用于将所述目标俯视图输入预设的深度学习模型,对输入预设的深度学习模型的所述目标俯视图的像素点进行分类,得到分区图像,所述分区图像包括可行驶区域图像和非可行驶区域图像;
识别模块,用于扫描所述分区图像,识别出车辆的可行驶区域;
当前路径轨迹生成模块,用于基于所述可行驶区域生成与所述车辆行驶状态信息对应的当前路径轨迹,所述路径轨迹为所述预设驾驶路径中的一条路径;
在本说明书实施例中,还包括:地图重构模块,用于对所述当前路径轨迹和前次获得的所述路径轨迹进行多轨迹融合,重构所述地下车库地图。
在本说明书实施例中,还包括:
重合度判断模块,用于判断所述当前路径轨迹与前次获得的所述路径轨迹的重合度是否大于等于预设第一阈值;
轨迹融合模块,用于若所述当前路径轨迹与前次获得的所述路径轨迹的重合度大于等于预设第一阈值,则将所述当前路径轨迹与前次获得的所述路径轨迹进行融合。
在本说明书实施例中,还包括:
匹配度判断模块,用于若所述当前路径轨迹与前次获得的所述路径轨迹的重合度小于预设第一阈值,则判断所述当前路径轨迹与所述预设驾驶路径的匹配度是否小于前次获得的所述路径轨迹与所述预设驾驶路径的匹配度;
当前路径轨迹重构模块,用于若所述当前路径轨迹与所述预设驾驶路径的匹配度小于前次获得的所述路径轨迹与所述预设驾驶路径的匹配度,则重新生成所述当前路径轨迹。
在本说明书实施例中,所述目标俯视图获取模块120包括:
目标图像获取单元,用于基于所述初始图像获得目标图像,所述目标图像包含与所述目标俯视图所在区域重合的区域图像的俯视图;
次数获取单元,用于获取所述目标图像出现的次数;
判断单元,用于判断所述目标图像出现的次数是否大于等于预设第二阈值;
特征点提取单元,用于若是,则提取每个所述目标图像中所述区域图像的特征点;
特征点匹配单元,用于将每个所述区域图像的特征点进行匹配,重构所述目标俯视图。
在本说明书实施例中,所述目标俯视图获取模块120还包括:
对应关系获取单元,用于基于所述非线性差值校正算法获取所述初始图像的俯视图与所述初始图像之间的对应关系,所述对应关系包括所述初始图像的俯视图和所述初始图像之间对应的坐标点;
目标坐标点获取单元,用于基于所述对应关系从所述初始图像中获取目标坐标点;
目标俯视图构建单元,用于基于所述目标坐标点构建与所述初始图像对应的目标俯视图。
在本说明书实施例中,所述路径轨迹生成模块150包括:
第一确定单元,用于基于所述可行驶区域,确定所述可行驶区域中的可行驶道路、可行驶路口以及所述可行驶道路和所述可行驶路口的分布;
路径轨迹生成单元,用于基于所述可行驶道路和所述可行驶路口的分布生成与所述车辆行驶状态信息对应的路径轨迹。
在本说明书实施例中,还包括:
扫描单元,用于采用预设尺寸的方格对所述分区图像进行扫描,得到车辆的可行驶区域和扫描区域;
调整单元,用于基于所述扫描区域,对所述可行驶区域进行调整,重构所述可行驶区域。
在本说明书实施例中,所述调整单元包括:
第一调整子单元,用于基于所述扫描区域,对所述可行驶区域进行膨胀操作,得到膨胀区域;
第二调整子单元,用于基于所述扫描区域,对所述膨胀区域进行腐蚀操作,重构所述可行驶区域。
在本说明书实施例中,所述第一确定单元包括:
第一识别子单元,用于将所述可行驶区域输入道路识别模型,识别出所述可行驶区域中的可行驶道路以及可行驶道路的信息,所述可行驶道路的信息包括可行驶道路的宽度和长度;
第二识别子单元,用于将所述可行驶区域输入路口识别模型,识别出所述可行驶区域中的可行驶路口以及可行驶路口的类型;
第三确定子单元,用于基于所述可行驶区域以及所述可行驶区域中的可行驶道路的信息和可行驶路口的类型,确定所述可行驶区域中所述可行驶道路和所述可行驶路口的分布。
本发明实施例提供了一种路径构建终端,所述终端包括处理器和存储器,所述存储器中存储有至少一条指令或至少一段程序,所述至少一条指令或所述至少一段程序由所述处理器加载并执行以实现如上述方法实施例所述的路径构建方法。
存储器可用于存储软件程序以及模块,处理器通过运行存储在存储器的软件程序以及模块,从而执行各种功能应用以及数据处理。存储器可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、功能所需的应用程序等;存储数据区可存储根据所述设备的使用所创建的数据等。此外,存储器可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。相应地,存储器还可以包括存储器控制器,以提供处理器对存储器的访问。
图13为本发明实施例提供的一种路径构建终端的结构示意图,该路径构建终端的内部构造可包括但不限于:处理器、网络接口及存储器,其中路径构建终端内的处理器、网络接口及存储器可以通过总线或其他方式连接,在本说明书实施例所示图13中以通过总线连接为例。
其中,处理器(或称CPU(Central Processing Unit,中央处理器))是路径构建终端的计算核心以及控制核心。网络接口可选的可以包括标准的有线接口、无线接口(如WI-FI、移动通信接口等)。存储器(Memory)是路径构建终端中的记忆设备,用于存放程序和数据。可以理解的是,此处的存储器可以是高速RAM存储设备,也可以是非易失性的存储设备(non-volatile memory),例如至少一个磁盘存储设备;可选的还可以是至少一个位于远离前述处理器的存储装置。存储器提供存储空间,该存储空间存储了路径构建终端的操作系统,可包括但不限于:Windows系统(一种操作系统),Linux(一种操作系统)等等,本发明对此并不作限定;并且,在该存储空间中还存放了适于被处理器加载并执行的一条或一条以上的指令,这些指令可以是一个或一个以上的计算机程序(包括程序代码)。在本说明书实施例中,处理器加载并执行存储器中存放的一条或一条以上指令,以实现上述方法实施例提供的路径构建方法。
本发明的实施例还提供了一种计算机可读存储介质,所述存储介质可设置于路径构建终端之中以保存用于实现方法实施例中的一种路径构建方法相关的至少一条指令、至少一段程序、代码集或指令集,该至少一条指令、该至少一段程序、该代码集或指令集可由电子设备的处理器加载并执行以实现上述方法实施例提供的路径构建方法。
可选地,在本实施例中,上述存储介质可以包括但不限于:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
需要说明的是:上述本发明实施例先后顺序仅仅为了描述,不代表实施例的优劣。且上述对本说明书特定实施例进行了描述。其它实施例在所附权利要求书的范围内。在一些情况下,在权利要求书中记载的动作或步骤可以按照不同于实施例中的顺序来执行并且仍然可以实现期望的结果。另外,在附图中描绘的过程不一定要求示出的特定顺序或者连续顺序才能实现期望的结果。在某些实施方式中,多任务处理和并行处理也是可以的或者可能是有利的。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置和服务器实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所揭露的仅为本发明一种较佳实施例而已,当然不能以此来限定本发明之权利范围,因此依本发明权利要求所作的等同变化,仍属本发明所涵盖的范围。

Claims (18)

  1. 一种路径构建方法,其特征在于:所述的方法包括:
    在预设驾驶路径上行驶时,实时获取车辆行驶状态信息以及所述车辆行驶状态信息对应的预设驾驶路径周围环境的初始图像;
    根据所述初始图像,通过非线性差值校正算法计算得到与所述初始图像对应的目标俯视图;
    将所述目标俯视图输入预设的深度学习模型,对所述目标俯视图的像素点进行分类,得到分区图像,所述分区图像包括可行驶区域图像和非可行驶区域图像;
    扫描所述分区图像,识别出车辆的可行驶区域;
    基于所述可行驶区域生成与所述车辆行驶状态信息对应的路径轨迹,所述路径轨迹为所述预设驾驶路径中的一条路径。
  2. 根据权利要求1所述的路径构建方法,其特征在于:所述基于所述可行驶区域生成与所述车辆行驶状态信息对应的路径轨迹之后还包括:
    基于所述路径轨迹构建与所述预设驾驶路径对应的地图。
  3. 根据权利要求1所述的路径构建方法,其特征在于:所述实时获取车辆行驶状态信息以及所述车辆行驶状态信息对应的预设驾驶路径周围环境的初始图像包括:
    实时获取车辆在预设驾驶路径上行驶过程中的车辆行驶状态信息,所述车辆行驶状态信息包括车辆在所述预设驾驶路径上行驶过程中的行驶策略和驾驶员的驾驶习惯;
    根据所述行驶策略和所述驾驶员的驾驶习惯,实时获取车辆在所述预设驾驶路径上行驶过程中预设驾驶路径周围环境的初始图像。
  4. 根据权利要求3所述的路径构建方法,其特征在于:所述可行驶区域包括可行驶道路和可行驶路口,获取车辆在所述预设驾驶路径上行驶过程中的行驶策略,包括:
    实时获取车辆的车速和方向盘转角;
    根据所述车辆的车速和所述方向盘转角,确定车辆的前进里程和航向角;
    根据所述车辆的前进里程和所述航向角确定车辆的行驶策略,所述车辆的行驶策略包括在所述可行驶道路上的前进里程以及在所述可行驶路口是否转弯。
  5. 根据权利要求3所述的路径构建方法,其特征在于:所述可行驶区域包括可行驶道路和可行驶路口;获取车辆在预设驾驶路径上行驶过程中驾驶员的驾驶习惯包括:
    实时获取车辆在预设驾驶路径的行驶过程中的运行数据;
    对所述车辆的运行数据进行预处理,获取目标运行数据;
    将所述目标运行数据输入循环神经网络,并从所述循环神经网络提取所述目标运行数据的特征;
    将所述特征输入全连接网络,预测得到车辆在预设驾驶路径的行驶过程中驾驶员的驾驶习惯;所述驾驶习惯包括可行驶道路的行驶速度和可行驶路口的转向角度。
  6. 根据权利要求1所述的路径构建方法,其特征在于:所述基于所述可行驶区域生成与所述车辆行驶状态信息对应的路径轨迹之后还包括:再次在预设驾驶路径上行驶时,
    实时获取车辆行驶状态信息以及所述车辆行驶状态信息对应的预设驾驶路径周围环境的初始图像;
    根据所述初始图像,通过非线性差值校正算法计算得到与所述初始图像对应的目标俯视图;
    将所述目标俯视图输入预设的深度学习模型,对输入预设的深度学习模型的所述目标俯视图的像素点进行分类,得到分区图像,所述分区图像包括可行驶区域图像和非可行驶区域图像;
    扫描所述分区图像,识别出车辆的可行驶区域;
    基于所述可行驶区域生成与所述车辆行驶状态信息对应的当前路径轨迹,所述路径轨迹为所述预设驾驶路径中的一条路径。
  7. 根据权利要求6所述的路径构建方法,其特征在于:所述基于所述可行驶区域生成与所述车辆行驶状态信息对应的当前路径轨迹之后还包括:
    对所述当前路径轨迹和前次获得的所述路径轨迹进行多轨迹融合,重构与所述预设驾驶路径对应的地图。
  8. 根据权利要求7所述的路径构建方法,其特征在于:所述对所述当前路径轨迹和前次获得的所述路径轨迹进行多轨迹融合,重构与所述预设驾驶路径对应的地图之前,还包括:
    判断所述当前路径轨迹与前次获得的所述路径轨迹的重合度是否大于等于预设第一阈值;
    若所述当前路径轨迹与前次获得的所述路径轨迹的重合度大于等于预设第一阈值,则将所述当前路径轨迹与前次获得的所述路径轨迹进行融合。
  9. 根据权利要求8所述的路径构建方法,其特征在于:还包括:
    若所述当前路径轨迹与前次获得的所述路径轨迹的重合度小于预设第一阈值,则判断所述当前路径轨迹与所述预设驾驶路径的匹配度是否小于前次获得的所述路径轨迹与所述预设驾驶路径的匹配度;
    若是,则重新生成所述当前路径轨迹。
  10. 根据权利要求1所述的路径构建方法,其特征在于:所述根据所述初始图像,通过非线性差值校正算法计算得到与所述初始图像对应的目标俯视图包括:
    基于所述初始图像获得目标图像,所述目标图像包含与所述目标俯视图所在区域重合的区域图像的俯视图;
    获取所述目标图像出现的次数;
    判断所述目标图像出现的次数是否大于等于预设第二阈值;
    若是,则提取每个所述目标图像中所述区域图像的特征点;
    将每个所述区域图像的特征点进行匹配,重构所述目标俯视图。
  11. 根据权利要求1所述的路径构建方法,其特征在于:所述根据所述初始图像,通过非线性差值校正算法计算得到与所述初始图像对应的目标俯视图,还包括:
    基于所述非线性差值校正算法获取所述初始图像的俯视图与所述初始图像之间的对应关系,所述对应关系包括所述初始图像的俯视图和所述初始图像之间对应的坐标点;
    基于所述对应关系从所述初始图像中获取目标坐标点;
    基于所述目标坐标点构建与所述初始图像对应的目标俯视图。
  12. 根据权利要求1所述的路径构建方法,其特征在于:所述可行驶区域包括可行驶道路和可行驶路口,所述基于所述可行驶区域生成与所述车辆行驶状态信息对应的路径轨迹包括:
    基于所述可行驶区域,确定所述可行驶区域中的可行驶道路、可行驶路口以及所述可行驶道路和所述可行驶路口的分布;
    基于所述可行驶道路和所述可行驶路口的分布生成与所述车辆行驶状态信息对应的路径轨迹。
  13. 根据权利要求12所述的路径构建方法,其特征在于:所述识别所述可行驶区域中的可行驶道路、可行驶路口以及所述可行驶道路和所述可行驶路口的分布之前,还包括:
    采用预设尺寸的方格对所述分区图像进行扫描,得到车辆的可行驶区域和扫描区域;
    基于所述扫描区域,对所述可行驶区域进行调整,重构所述可行驶区域。
  14. 根据权利要求13所述的路径构建方法,其特征在于:所述基于所述扫描区域,对所述可行驶区域进行调整,重构所述可行驶区域,包括:
    基于所述扫描区域,对所述可行驶区域进行膨胀操作,得到膨胀区域;
    基于所述扫描区域,对所述膨胀区域进行腐蚀操作,重构所述可行驶区域。
  15. 根据权利要求12所述的路径构建方法,其特征在于:识别所述可行驶区域中的可行驶道路、可行驶路口以及所述可行驶道路和所述可行驶路口的分布包括:
    将所述可行驶区域输入道路识别模型,识别出所述可行驶区域中的可行驶道路以及可行驶道路的信息,所述可行驶道路的信息包括可行驶道路的宽度和长度;
    将所述可行驶区域输入路口识别模型,识别出所述可行驶区域中的可行驶路口以及可行驶路口的类型;
    基于所述可行驶区域以及所述可行驶区域中的可行驶道路的信息和可行驶路口的类型,确定所述可行驶区域中所述可行驶道路和所述可行驶路口的分布。
  16. 一种路径构建装置,其特征在于:所述的装置包括:
    第一获取模块,用于在预设驾驶路径上行驶时,实时获取车辆行驶状态信息以及所述车辆行驶状态信息对应的预设驾驶路径周围环境的初始图像;
    目标俯视图获取模块,用于根据所述初始图像,通过非线性差值校正算法计算得到与所述初始图像对应的目标俯视图;
    分区图像获取模块,用于将所述目标俯视图输入预设的深度学习模型,对输入预设的深度学习模型的所述目标俯视图的像素点进行分类,得到分区图像,所述分区图像包括可行驶区域图像和非可行驶区域图像;
    识别模块,用于扫描所述分区图像,识别出车辆的可行驶区域;
    路径轨迹生成模块,用于基于所述可行驶区域生成与所述车辆行驶状态信息对应的路径轨迹,所述路径轨迹为所述预设驾驶路径中的一条路径。
  17. 一种路径构建终端,其特征在于:所述终端包括处理器和存储器,所述存储器中存储有至少一条指令或至少一段程序,所述至少一条指令或所述至少一段程序由所述处理器加载并执行以实现如权利要求1至15任一项所述的路径构建方法。
  18. 一种计算机可读存储介质,其特征在于:所述存储介质中存储有至少一条指令或至少一段程序,所述至少一条指令或所述至少一段程序由处理器加载并执行如权利要求1至15任一项所述的路径构建方法。
PCT/CN2020/137305 2021-02-08 2021-02-08 一种路径构建方法、装置、终端及存储介质 WO2022165614A1 (zh)
