WO2022165614A1 - Path construction method, device, terminal and storage medium - Google Patents
Path construction method, device, terminal and storage medium
- Publication number
- WO2022165614A1, PCT/CN2020/137305
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- drivable
- path
- vehicle
- driving
- preset
- Prior art date
Classifications
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G01C21/28—Navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/3833—Creation or updating of map data characterised by the source of data
- G06T7/11—Region-based segmentation
- G06T7/248—Analysis of motion using feature-based methods involving reference images or patches
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G06V10/764—Recognition using classification, e.g. of video objects
- G06V10/7715—Feature extraction, e.g. by transforming the feature space
- G06V10/82—Recognition using neural networks
- G06V20/588—Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
- G07C5/02—Registering or indicating driving, working, idle, or waiting time only
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30241—Trajectory
- G06T2207/30252—Vehicle exterior; vicinity of vehicle
- G06T2207/30256—Lane; road marking
Definitions
- the invention relates to the technical field of self-learning of autonomous vehicles, and in particular, to a path construction method, device, terminal and storage medium.
- the present invention discloses a path construction method: while driving on a preset driving path in an underground garage, the vehicle automatically learns the path trajectory, so that subsequent vehicles can automatically plan a path during automatic driving.
- the present invention provides a path construction method, the method includes:
- a target top view corresponding to the initial image is computed through a nonlinear difference correction algorithm
- the partition image includes an image of a drivable area and an image of a non-drivable area
- a path trajectory corresponding to the vehicle driving state information is generated based on the drivable area, where the path trajectory is one of the preset driving paths.
- the method further includes:
- a map corresponding to the preset driving path is constructed based on the path trajectory.
- the obtaining the vehicle driving state information in real time and the initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information includes:
- the vehicle driving state information includes the driving strategy and the driver's driving habits while the vehicle travels on the preset driving path;
- an initial image of the surrounding environment of the preset driving path is acquired in real time during the process of the vehicle traveling on the preset driving path.
- the drivable area includes a drivable road and a drivable intersection
- obtaining a driving strategy of the vehicle during driving on the preset driving path includes:
- the driving strategy of the vehicle is determined according to the mileage of the vehicle and the heading angle, and the driving strategy of the vehicle includes the mileage on the drivable road and whether to turn at the drivable intersection.
- the drivable area includes a drivable road and a drivable intersection; obtaining the driving habits of the driver during the process of the vehicle traveling on the preset driving path includes:
- the features are input into the fully connected network, and the driving habit of the driver during the driving process of the vehicle on the preset driving path is predicted; the driving habit includes the driving speed of the drivable road and the steering angle of the drivable intersection.
- the method further includes: when driving on the preset driving path again,
- a top view of the target corresponding to the initial image is obtained by calculating through a nonlinear difference correction algorithm
- a current path trajectory corresponding to the vehicle driving state information is generated based on the drivable area, where the current path trajectory is one of the preset driving paths.
- the method further includes:
- Multi-track fusion is performed on the current path track and the previously obtained path track, and a map corresponding to the preset driving path is reconstructed.
- before performing multi-trajectory fusion on the current path trajectory and the previously obtained path trajectory and reconstructing the map corresponding to the preset driving path, the method further includes:
- when the degree of coincidence between the current path trajectory and the previously obtained path trajectory is not smaller than a preset first threshold, the current path trajectory and the previously obtained path trajectory are fused.
- it also includes:
- when the degree of coincidence between the current path trajectory and the previously obtained path trajectory is smaller than a preset first threshold, it is determined whether the matching degree between the current path trajectory and the preset driving path is smaller than the matching degree between the previously obtained path trajectory and the preset driving path;
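The threshold logic above can be sketched as a small decision function. This is an illustrative reading of the claim, not the patent's implementation; the threshold value and the overlap/matching scores are placeholder inputs.

```python
# Hedged sketch of the multi-trajectory decision: if the new trajectory
# overlaps the stored one enough, fuse them; otherwise keep whichever
# trajectory matches the preset driving path better. All scores are
# assumed to be normalized to [0, 1]; the threshold is a placeholder.
def choose_trajectory(coincidence, match_new, match_old, first_threshold=0.8):
    """Returns 'fuse', 'keep_new' or 'keep_old'."""
    if coincidence >= first_threshold:
        return "fuse"
    # low overlap: compare how well each trajectory matches the preset path
    return "keep_new" if match_new >= match_old else "keep_old"
```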
- the calculating and obtaining, according to the initial image, the target top view corresponding to the initial image through a nonlinear difference correction algorithm includes:
- obtaining a target image based on the initial image, the target image being a top view that includes a region image overlapping the region where the target top view is located;
- the feature points of each of the region images are matched to reconstruct the top view of the target.
- calculating the target top view corresponding to the initial image through a nonlinear difference correction algorithm according to the initial image further includes:
- a top view of the target corresponding to the initial image is constructed based on the target coordinate points.
- the drivable area includes a drivable road and a drivable intersection
- the generating a path trajectory corresponding to the vehicle driving state information based on the drivable area includes:
- a path trajectory corresponding to the vehicle driving state information is generated based on the distribution of the drivable roads and the drivable intersections.
- before identifying the drivable roads, the drivable intersections in the drivable area, and the distribution of the drivable roads and the drivable intersections, the method further includes:
- the drivable area is adjusted to reconstruct the drivable area.
- the adjustment of the drivable area based on the scan area, and the reconstruction of the drivable area includes:
- an erosion operation is performed on the expanded area to reconstruct the drivable area.
- identifying drivable roads, drivable intersections in the drivable area, and distribution of the drivable roads and the drivable intersections includes:
- the distribution of the drivable roads and the drivable intersections in the drivable area is determined based on the drivable area and the information of the drivable roads in the drivable area and the type of the drivable intersection.
- the present invention also provides a path construction device, the device includes:
- a first acquisition module configured to acquire, in real time, vehicle driving state information and an initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information when driving on a preset driving path;
- a target top view acquisition module configured to calculate and obtain a target top view corresponding to the initial image through a nonlinear difference correction algorithm according to the initial image
- a partition image acquisition module configured to input the target top view into a preset deep learning model, classify the pixels of the input target top view, and obtain a partition image, where the partition image includes an image of a drivable area and an image of a non-drivable area;
- an identification module for scanning the partition image to identify the drivable area of the vehicle
- a path trajectory generation module configured to generate a path trajectory corresponding to the vehicle driving state information based on the drivable area, where the path trajectory is one of the preset driving paths.
- the present invention also provides a path construction terminal, the terminal includes a processor and a memory, the memory stores at least one instruction or at least one piece of program, and the at least one instruction or the at least one piece of program is loaded and executed by the processor to implement the path construction method described above.
- the present invention also provides a computer-readable storage medium, where at least one instruction or at least one piece of program is stored in the storage medium, and the at least one instruction or at least one piece of program is loaded and executed by a processor to implement the path construction method described above.
- the vehicle automatically learns to obtain the path trajectory during the process of driving on the preset driving path in the underground garage, so that the subsequent vehicle can automatically plan the path during the automatic driving process.
- FIG. 1 is a schematic flowchart of a path construction method according to an embodiment of the present invention.
- FIG. 2 is a schematic flowchart of obtaining a vehicle driving strategy according to an embodiment of the present invention
- FIG. 3 is a schematic flowchart of obtaining the driving habits of a driver according to an embodiment of the present invention
- FIG. 4 is a schematic flowchart of acquiring a top view of a target according to an embodiment of the present invention
- FIG. 5 is a schematic diagram of a top view of acquiring an initial image according to an embodiment of the present invention.
- FIG. 6 is another schematic flowchart of acquiring a top view of a target according to an embodiment of the present invention.
- FIG. 7 is a schematic structural diagram of obtaining an extreme point position according to an embodiment of the present invention.
- FIG. 8 is a schematic diagram of classifying pixels of an image according to an embodiment of the present invention.
- FIG. 9 is a schematic flowchart of a method for identifying a drivable road and a drivable intersection in a drivable area according to an embodiment of the present invention.
- FIG. 10 is a schematic diagram of a recognition result of a drivable road and a drivable intersection in a drivable area provided by an embodiment of the present invention.
- FIG. 11 is an effect diagram of a path trajectory fusion provided by an embodiment of the present invention.
- FIG. 12 is a schematic structural diagram of a path construction apparatus provided by an embodiment of the present invention.
- FIG. 13 is a schematic structural diagram of a path construction terminal according to an embodiment of the present invention.
- the path construction method of the present application is applied to the field of automatic driving. Specifically, a human-driven vehicle is driven in an underground garage at least once, so that the vehicle can automatically learn the paths of the underground garage; an abstract path map is then established, so that subsequent vehicles can drive automatically according to the route map.
- the following describes the path construction method of the present invention based on the above-mentioned system with reference to FIG. 1 , which can be applied to the path construction method of autonomous vehicles.
- the present invention is applicable to, but not limited to, enclosed scenarios such as underground garages, for example as a method of building a virtual map of an underground garage.
- FIG. 1 is a schematic flowchart of a path construction method provided by an embodiment of the present invention.
- This specification provides the method operation steps as described in the embodiments or the flowchart, but more or fewer operation steps may be included based on routine or non-creative labor.
- the sequence of steps enumerated in the embodiments is only one of the execution sequences of many steps, and does not represent a unique execution sequence.
- the path construction method may be executed in the sequence shown in the embodiments or the accompanying drawings. Specifically, as shown in FIG. 1, the method includes:
- the driver can manually drive the autonomous vehicle on the preset driving path
- the preset driving path may be a drivable path that already exists in the preset driving area; for example, it may be at least one drivable road and a drivable intersection that already exist in an underground garage;
- the initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information can be acquired in real time through the front-view camera of the vehicle;
- the initial image may be a two-dimensional image
- the vehicle automatically acquires vehicle driving state information and an initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information in real time;
- the real-time acquisition of the vehicle driving state information and the initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information includes the following steps:
- Step 1 acquiring in real time the vehicle driving state information during the vehicle running on the preset driving path, where the vehicle driving state information includes the driving strategy and the driver's driving habits during the vehicle traveling on the preset driving path;
- FIG. 2 is a schematic flowchart of obtaining a vehicle driving strategy according to an embodiment of the present invention
- the drivable area may include a drivable road and a drivable intersection
- the vehicle speed and steering wheel angle can be obtained according to the vehicle controller area network (Controller Area Network, CAN) signal;
- S203 determine the forward mileage and heading angle of the vehicle according to the speed of the vehicle and the steering wheel angle
- the running time of the vehicle can also be obtained.
- the forward mileage of the vehicle can be calculated according to the speed of the vehicle and the running time of the vehicle;
- the heading angle of the vehicle can be calculated according to the steering wheel angle of the vehicle.
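The mileage and heading computation described in S203-S205 can be sketched as simple dead reckoning. The steering model, wheelbase and steering ratio below are illustrative assumptions, not values from the patent; only the relations "mileage = speed x time" and "heading from steering angle" come from the text.

```python
import math

# Assumed vehicle parameters (placeholders, not from the patent)
WHEELBASE_M = 2.7        # assumed wheelbase
STEER_RATIO = 16.0       # assumed steering-wheel-to-road-wheel ratio

def forward_mileage(speed_mps, dt_s):
    """Mileage travelled in one sample: speed x elapsed time."""
    return speed_mps * dt_s

def heading_change(speed_mps, steering_wheel_deg, dt_s):
    """Heading-angle increment inferred from the steering wheel angle,
    using a simple bicycle model (an illustrative assumption)."""
    road_wheel_rad = math.radians(steering_wheel_deg) / STEER_RATIO
    yaw_rate = speed_mps * math.tan(road_wheel_rad) / WHEELBASE_M
    return yaw_rate * dt_s

def integrate(samples):
    """Accumulate mileage and heading over (speed, steering_deg, dt) samples
    read from the CAN bus."""
    mileage, heading = 0.0, 0.0
    for speed, steer, dt in samples:
        mileage += forward_mileage(speed, dt)
        heading += heading_change(speed, steer, dt)
    return mileage, heading
```

Driving straight leaves the heading unchanged while the mileage accumulates; a sustained steering-wheel angle accumulates a heading change, which is what the driving strategy uses to decide whether the vehicle turned at an intersection.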
- S205 Determine a driving strategy of the vehicle according to the forward mileage of the vehicle and the heading angle, where the driving strategy of the vehicle includes the forward mileage on the drivable road and whether to turn at the drivable intersection.
- the driving trend of the vehicle may be determined according to the forward mileage of the vehicle and the heading angle;
- the vehicle driving strategy may include driving data and driving requirements of the vehicle, such as the mileage on the drivable road and whether to turn at the drivable intersection.
- the method of acquiring the vehicle driving strategy in the present application can accurately acquire the driving strategy of the vehicle while it drives on the preset driving path, which facilitates subsequently reproducing the driving process on the preset driving path according to that strategy.
- FIG. 3 is a schematic flowchart of obtaining the driving habit of a driver according to an embodiment of the present invention
- the drivable area may include a drivable road and a drivable intersection
- the operation data of the vehicle during driving on the preset driving path may include the steering angle, steering acceleration, speed, acceleration, accelerator pedal and brake operations of the vehicle;
- the running data of the vehicle during the driving process of the preset driving path may also include driving video.
- the driving trajectory of the vehicle may be determined according to the driving video.
- a time window needs to be established, and the running data of the vehicle before and after the change of the running track of the vehicle is acquired within the time window;
- for different changes of the running track, the running data of the vehicle are also different;
- the preprocessing of the running data of the vehicle may be the preprocessing of the running data obtained within the time window; specifically, data such as the speed, acceleration, steering angle and steering acceleration of the vehicle may be preprocessed;
- the maximum value, minimum value and average value of data such as the speed, acceleration, steering angle and steering acceleration of the vehicle can be obtained respectively; specifically, the maximum value, minimum value and average value of each obtained operation data are the target operation data.
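The preprocessing step above reduces each raw signal within the time window to its maximum, minimum and mean. A minimal sketch (the signal names are illustrative):

```python
# Reduce each operation-data signal in a time window to (max, min, mean);
# the result is the "target operation data" fed to the recurrent network.
def window_features(window):
    """window: dict mapping signal name -> list of samples in the window."""
    feats = {}
    for name, samples in window.items():
        feats[name] = (max(samples), min(samples),
                       sum(samples) / len(samples))
    return feats
```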
- the target operation data is input into a recurrent neural network, and the feature of the target operation data is extracted from the recurrent neural network;
- the features of the target operation data can be extracted
- the driving habit includes the driving speed of the drivable road and the steering angle of the drivable intersection.
- the control features of the vehicle are predicted according to the extracted features; the control features may include the driving speed on the drivable road and the steering angle at the drivable intersection;
- the driving habits of the driver can be obtained according to the control characteristics of the vehicle during driving.
- in the present application, the driving habits of the driver can be effectively predicted from the operation data of the vehicle while driving on the preset driving path, which facilitates subsequently reproducing the driving process according to those habits.
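The recurrent-network-plus-fully-connected pipeline above can be sketched as follows. This is not the patent's trained model: the Elman-style cell, the layer sizes and the random placeholder weights are all illustrative assumptions; only the structure (RNN feature extraction, then a fully connected head predicting driving speed and steering angle) comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed so the sketch is reproducible

def rnn_features(x_seq, hidden=8):
    """x_seq: (T, d) sequence of windowed target operation data.
    Runs a minimal Elman-style recurrent cell and returns the final
    hidden state as the extracted feature. Weights are placeholders."""
    d = x_seq.shape[1]
    Wx = rng.standard_normal((hidden, d)) * 0.1
    Wh = rng.standard_normal((hidden, hidden)) * 0.1
    h = np.zeros(hidden)
    for x in x_seq:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

def predict_habit(h):
    """Fully connected head mapping the feature to the two predicted
    habit values: (driving speed, intersection steering angle)."""
    Wo = rng.standard_normal((2, h.shape[0])) * 0.1
    return Wo @ h

feat = rnn_features(np.ones((5, 3)))
habit = predict_habit(feat)
```

In a real system the weights would be learned from recorded driving data; here they merely demonstrate the data flow.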
- Step 2 according to the driving strategy and the driving habit of the driver, obtain in real time an initial image of the environment around the preset driving path during the process of the vehicle traveling on the preset driving path;
- the obtained initial image corresponds to the driving strategy of the vehicle and the driving habits of the driver;
- depending on the driving strategy and the driving habits, the number of initial images of the surrounding environment of the preset driving path obtained by the vehicle, as well as the viewing angle and resolution of the images, are different.
- FIG. 4 is a schematic flowchart of acquiring a top view of a target provided by an embodiment of the present invention. the details are as follows:
- the target top view corresponding to the initial image is calculated and obtained by a nonlinear difference correction algorithm, including:
- before acquiring the correspondence between the top view of the initial image and the initial image, the method further includes: acquiring the top view of the initial image;
- FIG. 5 is a schematic diagram of a top view of acquiring an initial image
- the specific algorithm for obtaining the correspondence between the top view and the initial image by setting the perspective matrix is as follows:
- (x, y) are the coordinates of the point in the top view
- (X/k, Y/k) are the coordinates of the corresponding point in the de-distorted image
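Applying the perspective matrix described above can be sketched as a standard homography: a 3x3 matrix M sends a top-view point (x, y), in homogeneous form, to (X, Y, k), and the corresponding point in the de-distorted image is (X/k, Y/k). The matrix below is an illustrative placeholder, not the patent's calibrated matrix.

```python
import numpy as np

def top_view_to_image(M, x, y):
    """Map a top-view point (x, y) through perspective matrix M to the
    de-distorted image point (X/k, Y/k)."""
    X, Y, k = M @ np.array([x, y, 1.0])
    return X / k, Y / k

# Placeholder perspective matrix (a real one comes from camera calibration)
M = np.array([[1.0, 0.0,   10.0],
              [0.0, 1.0,   20.0],
              [0.0, 0.001,  1.0]])
```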
- the obtaining the correspondence between the top view of the initial image and the initial image based on the nonlinear difference correction algorithm includes:
- take a first pixel point of the top view obtained above, and find the second pixel point corresponding to it among the pixel points of the initial image; the correspondence between the first pixel point and the second pixel point serves as the correspondence between the top view of the initial image and the initial image;
- the correspondence between the top view of the initial image and the initial image may be directly obtained based on the nonlinear difference correction algorithm, and the correspondence may include all coordinate points corresponding between the top view and the initial image; in this way, the target top view of the initial image can be quickly acquired.
- the target coordinate point corresponding to the target top view can be directly found in the initial image based on the obtained corresponding relationship.
- a top view of the target corresponding to the initial image can be directly constructed based on the obtained target coordinate points.
- FIG. 6 is a schematic flowchart of another acquisition of a top view of a target provided by an embodiment of the present invention. the details are as follows:
- S601 obtaining a target image based on the initial image, where the target image is a top view that includes a region image overlapping the region where the target top view is located;
- the vehicle can obtain several top-view images corresponding to the initial images during the driving process.
- the same area may include multiple top-view images from different perspectives of the front-view camera;
- the target image may be an overhead image, among multiple overhead images, that covers the same object or the same area (specifically, it may include a region image that overlaps with the region where the target top view is located);
- the area image of the target image coincides with the top view of the target.
- the target image may also appear multiple times
- S605 determine whether the number of times the target image appears is greater than or equal to a preset second threshold
- the number of times the target image appears may be greater than or equal to a preset second threshold; the preset second threshold may be 50 times;
- when the number of occurrences of the target image is less than the preset second threshold, the target image can be regarded as invalid; it can be discarded or re-acquired until the number of occurrences of the overhead image exceeds the preset second threshold;
- the Gaussian algorithm can be used to extract the feature points of the regional image
- Gaussian blurring can be performed on the target image first, and then the difference-of-Gaussians (DoG) operator can be obtained by subtracting different Gaussian blurring results:
- (x, y) represents the spatial coordinates;
- I(x, y) represents the pixel value at (x, y);
- L(x, y, σ) represents the scale space of the two-dimensional image;
- σ represents the smoothness parameter of the image;
- D(x, y, σ) represents the Gaussian difference scale space; k represents the scale coefficient.
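The equation images did not survive this text extraction. Reconstructed from the symbol definitions above (and consistent with the standard DoG formulation), the scale space and the difference operator are:

```latex
L(x, y, \sigma) = G(x, y, \sigma) * I(x, y), \qquad
G(x, y, \sigma) = \frac{1}{2\pi\sigma^{2}}\, e^{-(x^{2}+y^{2})/2\sigma^{2}}
```

```latex
D(x, y, \sigma) = L(x, y, k\sigma) - L(x, y, \sigma)
               = \bigl(G(x, y, k\sigma) - G(x, y, \sigma)\bigr) * I(x, y)
```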
- the method of relocating the feature point to determine its exact position may include: performing curve fitting on the DoG function and using a Taylor series expansion to obtain the exact position;
- the algorithm for obtaining the position of the feature point by using the Taylor series is as follows:
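The formula referenced here is missing from the extraction; the standard sub-pixel refinement it describes expands D around a candidate extremum and solves for the offset where the derivative vanishes:

```latex
D(\mathbf{x}) \approx D
  + \frac{\partial D}{\partial \mathbf{x}}^{\!T}\! \mathbf{x}
  + \frac{1}{2}\,\mathbf{x}^{T}\,\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\,\mathbf{x},
\qquad
\hat{\mathbf{x}} = -\left(\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\right)^{-1}
                   \frac{\partial D}{\partial \mathbf{x}}
```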
- the size and direction information of the matching target can be obtained, and then the position of the real extreme point can be determined.
- m(x, y) represents the gradient value at (x, y);
- θ(x, y) represents the gradient direction at (x, y);
- L represents the scale space value of the coordinate position of the key point.
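The gradient formulas themselves are missing from the extraction; the standard magnitude and orientation expressions consistent with the symbols defined above are:

```latex
m(x, y) = \sqrt{\bigl(L(x{+}1, y) - L(x{-}1, y)\bigr)^{2}
              + \bigl(L(x, y{+}1) - L(x, y{-}1)\bigr)^{2}}
```

```latex
\theta(x, y) = \arctan\frac{L(x, y{+}1) - L(x, y{-}1)}{L(x{+}1, y) - L(x{-}1, y)}
```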
- FIG. 7 is a schematic structural diagram of obtaining the position of the extreme point
- a new target top view can be obtained by matching the feature points of each region image and aligning them into common coordinates; the target top view obtained by this method is more accurate.
- the deep learning model preset in this specification can be a fully convolutional network model
- a preset deep learning model, such as a fully convolutional network model, can accept input images of any size, and then recover an output of the same size as the input through deconvolution upsampling, that is, classify each pixel;
- the output result obtained is still an image; that is, the preset deep learning model segments the input image to achieve pixel-level classification and obtain a partition image;
- FIG. 8 is a schematic diagram of classifying the pixels of an image;
- the result size of the first layer can be changed to 1/4² of the input;
- the result size of the second layer can be changed to 1/8² of the input;
- the result size of the fifth layer can be changed to 1/16² of the input;
- the result size of the eighth layer can be changed to 1/32² of the input.
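The same-size-in, same-size-out behaviour described above can be illustrated without a trained network. In the sketch below, average pooling stands in for the encoder shrinking the feature maps, and nearest-neighbour upsampling stands in for the learned deconvolution; the factor, threshold, and two-class labels are illustrative assumptions, not the application's model.

```python
import numpy as np

def downsample(x, factor):
    # Average pooling: mimics the encoder shrinking the map by `factor`.
    h, w = x.shape
    return x[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(x, factor):
    # Nearest-neighbour upsampling: stands in for the learned deconvolution
    # that restores the coarse map to the input resolution.
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def toy_fcn_partition(image, factor=4, threshold=0.5):
    """Per-pixel two-class partition: the output has the same size as the
    input, with 1 for 'drivable' and 0 for 'non-drivable' pixels."""
    coarse = downsample(image, factor)   # e.g. a 1/4-size feature map
    score = upsample(coarse, factor)     # back to the input size
    return (score > threshold).astype(np.uint8)

img = np.zeros((32, 32))
img[:, 16:] = 1.0                        # right half "drivable"
mask = toy_fcn_partition(img)
assert mask.shape == img.shape
assert mask[0, 0] == 0 and mask[0, 31] == 1
```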
- the obtained partition image may include a drivable-area image and a non-drivable-area image;
- the partition image may be an image obtained by partitioning the target top view;
- the drivable-area image may include the drivable area of the vehicle, for example, information such as driving roads and drivable intersections; the non-drivable-area image may include information such as parking space lines and parking space areas.
- each piece of area information in the partition image is scanned to determine the drivable area of the vehicle; specifically, the drivable area includes drivable roads and drivable intersections;
- the straight road trend identification module may be used to identify drivable roads in the drivable area
- the intersection trend identification module may be used to identify drivable intersections in the drivable area
- adjusting the drivable area based on the scan area and reconstructing the drivable area includes the following steps:
- the size of the grid can be selected according to the actual situation during design; this application applies dilation to the image first and erosion afterwards, which can effectively repair missing pixels and fragments not connected to the main body in the recognition result, making the obtained drivable area more precise;
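The dilation-then-erosion cleanup described above is a morphological closing; it can be sketched with NumPy alone. The 3×3 structuring element is an illustrative choice, not a value from the application.

```python
import numpy as np

def dilate(mask, k=3):
    # Binary dilation with a k x k square structuring element.
    pad = k // 2
    p = np.pad(mask, pad)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask, k=3):
    # Binary erosion: a pixel survives only if its whole k x k
    # neighbourhood is set (zero padding erodes the outer border).
    pad = k // 2
    p = np.pad(mask, pad)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def close_drivable_area(mask, k=3):
    """Dilation followed by erosion (morphological closing): fills small
    holes and reattaches fragments without growing the region overall."""
    return erode(dilate(mask, k), k)

area = np.ones((7, 7), dtype=np.uint8)
area[3, 3] = 0                        # a one-pixel hole in the drivable region
closed = close_drivable_area(area)
assert closed[3, 3] == 1              # the hole is filled
```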
- the preset driving path includes at least one drivable path and, specifically, may include multiple drivable paths; the path trajectory generated above may be one of the multiple drivable paths.
- the generating a path trajectory corresponding to the vehicle driving state information based on the drivable area may include the following steps:
- Step 1: Based on the drivable area, determine the drivable roads and drivable intersections in the drivable area, and the distribution of the drivable roads and the drivable intersections;
- FIG. 9 is a schematic flowchart of a method for identifying a drivable road and a drivable intersection in a drivable area; specifically, as follows:
- S901 input the drivable area into a road recognition model, and identify the drivable road in the drivable area and the information of the drivable road, and the information of the drivable road includes the width and length of the drivable road;
- the road recognition model may be a road recognition algorithm that recognizes road lines on the road and provides location information of road markings;
- it may be a road straight trend identification algorithm to identify information such as drivable straights in the drivable area.
- a specific method for identifying a drivable road in a drivable area may include the following steps:
- the road recognition result of size m*n is projected vertically to obtain the number of road pixels h_i in each column of pixels;
- n represents the width of the image;
- w_h represents the number of occurrences of different values of h;
- the value range of w_h is [0, n];
- when w_h reaches its maximum value, the corresponding h is recorded as h_max, which is the threshold for a "column" to be counted as road;
- among the columns that satisfy the threshold, the maximum value i_max and the minimum value i_min of i are obtained, that is, the column positions of the two sides of the road in the image, from which the width of the road is determined.
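One possible reading of the column-projection procedure above, as a NumPy sketch; choosing the most frequent non-zero h as h_max is an assumption about how the threshold is picked.

```python
import numpy as np

def road_width_by_projection(road_mask):
    """Project an m x n road-recognition mask onto its columns: h[i] is the
    number of road pixels in column i; the most frequent non-zero h is taken
    as the threshold h_max, and the extreme column indices reaching it give
    the two road edges and hence the road width."""
    h = road_mask.sum(axis=0)                  # h_i for each column i
    values, counts = np.unique(h[h > 0], return_counts=True)
    h_max = values[np.argmax(counts)]          # h with the largest w_h
    cols = np.flatnonzero(h >= h_max)
    i_min, i_max = cols.min(), cols.max()      # road edges in the image
    return i_min, i_max, i_max - i_min + 1

mask = np.zeros((10, 12), dtype=int)
mask[:, 4:9] = 1                               # road occupies columns 4..8
i_min, i_max, width = road_width_by_projection(mask)
assert (i_min, i_max, width) == (4, 8, 5)
```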
- S903 input the drivable area into an intersection identification model, and identify the drivable intersection in the drivable area and the type of the drivable intersection;
- the intersection identification model may be an algorithm for identifying intersections in a road, and specifically, it may identify information such as whether a drivable intersection appears on a drivable road and what kind of intersection it is.
- the distribution of drivable roads and drivable intersections in the drivable area can be accurately determined.
- the drivable intersection and drivable road in the drivable area may also be determined by the mileage and heading angle of the vehicle;
- the method for identifying drivable intersections and drivable roads in a drivable area includes the following steps:
- the vehicle speed and steering wheel angle can be obtained according to the CAN signal
- the running time of the vehicle can also be obtained.
- the mileage of the vehicle can be calculated according to the speed of the vehicle and the running time of the vehicle;
- the heading angle of the vehicle can be calculated according to the steering wheel angle of the vehicle.
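One way to derive the mileage and heading angle from CAN speed and steering-wheel angle is a simple bicycle-model integration; the sketch below is illustrative, and the wheelbase and steering ratio are assumed values, not parameters from the application.

```python
import math

def dead_reckon(samples, wheelbase=2.7, steer_ratio=16.0):
    """Integrate CAN speed and steering-wheel angle over time to estimate
    mileage and heading angle (simple bicycle model; wheelbase and steering
    ratio are illustrative assumptions)."""
    mileage, heading = 0.0, 0.0
    for speed, steer_deg, dt in samples:       # m/s, degrees, seconds
        mileage += speed * dt
        wheel_angle = math.radians(steer_deg) / steer_ratio
        heading += speed * math.tan(wheel_angle) / wheelbase * dt
    return mileage, math.degrees(heading)

# Driving straight at 10 m/s for 5 s: 50 m of mileage, heading unchanged.
mileage, heading = dead_reckon([(10.0, 0.0, 1.0)] * 5)
assert math.isclose(mileage, 50.0) and math.isclose(heading, 0.0)
```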
- the 5th column or the (n−6)th column of pixels is selected according to whether the vehicle turns left or right;
- p(i) takes the value 0 or 1, indicating whether the pixel is a road pixel, and the width of the road is judged accordingly;
- p(i) indicates whether the i-th row in the 5th column or the (n−6)th column is a road pixel;
- P_i represents the relationship between row i and row i−1;
- θ(t) represents the deflection angle of the vehicle at time t;
- drivable roads and drivable intersections in the drivable area can also be identified, and a specific schematic diagram is shown in FIG. 10 .
- Step 2: Generate a path trajectory corresponding to the vehicle driving state information based on the distribution of the drivable roads and the drivable intersections;
- the driving route of the vehicle may be determined based on the distribution of the drivable roads and the drivable intersections in the drivable area;
- the path trajectory corresponding to the vehicle driving state information is then generated; with this method, the present application can accurately obtain the path trajectory corresponding to the vehicle driving state information.
- a map corresponding to the preset driving path is constructed based on the path trajectory.
- the map may be an underground garage map; specifically, according to the generated path trajectory, a trajectory abstraction algorithm may be used to process the path trajectory, and an underground garage map may then be constructed; the underground garage map constructed by this method is an abstract path map; in this application, the map can be applied to any scene, enabling automatic driving in an underground garage without field-side equipment; and the underground garage map is a path planning map that conforms to drivers' driving habits.
- the following method can be used to process the path trajectory, specifically including:
- roads are often composed of the following five types of elements: starting point, straight road, intersection, dead end, and end point, among which intersections are divided into crossroads and T-junctions.
- the current driving direction is the reference direction.
- an intersection structure includes four parameters: the intersection number Node, the mileage Dist, the intersection turning information TurnINF, and the corner Angle, where the mileage Dist is the distance between the location and the starting point.
- a separate driving flag PassFlag is also set up, in which "0" means to continue driving, "1" is a dead end, and it is forbidden to go forward.
- the table shows an intersection, and the default angle is 90 degrees to the left.
- TurnINF is set to 1
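The intersection structure (Node, Dist, TurnINF, Angle) and the separate PassFlag can be captured in a small record type. The field names and defaults below follow the text, but the type itself is an illustrative sketch, not the application's data format.

```python
from dataclasses import dataclass

@dataclass
class IntersectionRecord:
    """One entry of the abstract path map, mirroring the four parameters
    in the text plus the separate driving flag (names are illustrative)."""
    node: int            # intersection number Node
    dist: float          # mileage Dist: distance from the starting point
    turn_inf: int        # turning information TurnINF (e.g. 1 = turn taken)
    angle: float = 90.0  # corner Angle, default 90 degrees to the left
    pass_flag: int = 0   # PassFlag: 0 = continue driving, 1 = dead end

rec = IntersectionRecord(node=1, dist=35.2, turn_inf=1)
assert rec.angle == 90.0 and rec.pass_flag == 0
```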
- a target top view corresponding to the initial image is obtained by calculation through a nonlinear difference correction algorithm;
- the current path trajectory corresponding to the vehicle driving state information can be obtained by adopting the same method for obtaining the path trajectory;
- the method further includes:
- the spatial positions of the same points in the current path trajectory and the previously obtained path trajectory are brought into correspondence to facilitate information fusion, and a new path trajectory is obtained; this method is used to verify the path trajectory, ensuring a more accurate abstract map, such as the underground garage map.
- the preset driving path can be driven repeatedly, obtaining at least three path trajectories;
- multi-trajectory fusion is performed each time on the current path trajectory and the path trajectory (or new path trajectory) obtained previously, to obtain a more accurate map.
- the least squares method can be used to solve the multi-trajectory fusion; the details are as follows:
- the driving trajectory learned for the first time is used as the reference point set X;
- the driving trajectory learned for the second time is used as the point set P to be fused;
- with the reference point set X and the point set P to be fused in point-to-point correspondence, the error to be minimised is E(R, t) = (1/N_p) Σ_i || x_i − (R·p_i + t) ||²;
- E(R, t) represents the error function;
- R represents the rotation matrix;
- t represents the translation vector;
- N_p represents the number of elements in the point set P.
- the centroids of the reference point set X and the point set P to be fused are μ_x = (1/N_x) Σ_i x_i and μ_p = (1/N_p) Σ_i p_i;
- μ_x represents the centroid of the reference point set X;
- μ_p represents the centroid of the point set P to be fused;
- N_p represents the number of elements in the point set P to be fused.
- X′ represents the set composed of the deviations x′_i = x_i − μ_x of the elements of the reference point set X from their centroid;
- P′ represents the set composed of the deviations p′_i = p_i − μ_p of the elements of the point set P to be fused from their centroid.
- W = Σ_i x′_i·p′_iᵀ represents the real matrix to be decomposed;
- p′_iᵀ represents the transpose of p′_i;
- by singular value decomposition, W = U·Σ·Vᵀ, where U and V are unit orthogonal matrices, called the left and right singular matrices respectively;
- Vᵀ represents the transpose of V, and σ₁, σ₂, σ₃ are the singular values; when W is of full rank, the rotation is recovered as R = U·Vᵀ and the translation as t = μ_x − R·μ_p.
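The least-squares fusion above is the standard SVD-based rigid alignment step. A sketch assuming the two trajectories are already in point-to-point correspondence; the reflection-sign correction on R (needed when det(U·Vᵀ) < 0) is omitted for brevity.

```python
import numpy as np

def align_trajectories(X, P):
    """Least-squares rigid alignment of point set P onto reference set X
    (rows are corresponding points): subtract the centroids, decompose
    W = sum_i x_i' p_i'^T = U S V^T, then R = U V^T, t = mu_x - R mu_p."""
    mu_x, mu_p = X.mean(axis=0), P.mean(axis=0)
    Xp, Pp = X - mu_x, P - mu_p        # deviation sets X', P'
    W = Xp.T @ Pp                      # matrix to decompose
    U, S, Vt = np.linalg.svd(W)
    R = U @ Vt                         # rotation (sign fix omitted)
    t = mu_x - R @ mu_p                # translation
    return R, t

# Rotate a triangle by 90 degrees and shift it; alignment recovers it.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
Rot = np.array([[0.0, -1.0], [1.0, 0.0]])
P = X @ Rot.T + np.array([3.0, 4.0])
R, t = align_trajectories(X, P)
assert np.allclose(P @ R.T + t, X)
```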
- before performing multi-trajectory fusion on the current path trajectory and the previously obtained path trajectory and reconstructing the underground garage map, the method further includes:
- the preset first threshold may be 95%
- if so, the current path trajectory and the previously obtained path trajectory may be fused.
- otherwise, of the current path trajectory and the previously obtained path trajectory, the one for which the smaller number of target top views was obtained during its generation process may be chosen to be abandoned;
- the matching degree between a path trajectory and the preset driving path can be used to determine which of the current path trajectory and the previously obtained path trajectory to abandon;
- if the current path trajectory is abandoned, the vehicle drives on the preset driving path again to obtain a new current path trajectory, so that the new current path trajectory and the previously obtained path trajectory can subsequently be used for multi-trajectory fusion and path trajectory reconstruction;
- the map corresponding to the preset driving path may be reconstructed subsequently according to the reconstructed path trajectory.
- alternatively, the previously obtained path trajectory is discarded, and the vehicle drives on the preset driving path again to obtain a new current path trajectory, so as to facilitate subsequent multi-trajectory fusion of the new current path trajectory with the previously obtained current path trajectory and reconstruction of the path trajectory;
- the map corresponding to the preset driving path may be reconstructed subsequently according to the reconstructed path trajectory.
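The coincidence/matching-degree branching described above can be summarised as a small decision function; the returned labels and the 95% default threshold follow the text, but the function itself is an illustrative sketch.

```python
def decide_fusion(coincidence, match_current, match_previous, threshold=0.95):
    """Decision logic sketched in the text: fuse when the two trajectories
    mostly coincide; otherwise re-drive the path, replacing whichever
    trajectory matches the preset driving path worse (labels illustrative)."""
    if coincidence >= threshold:
        return "fuse"
    if match_current < match_previous:
        return "discard_current_and_redrive"
    return "discard_previous_and_redrive"

assert decide_fusion(0.97, 0.9, 0.8) == "fuse"
assert decide_fusion(0.80, 0.6, 0.9) == "discard_current_and_redrive"
assert decide_fusion(0.80, 0.9, 0.6) == "discard_previous_and_redrive"
```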
- the path trajectory obtained by the above method in the present application is closer to the actual driving trajectory; it can not only improve the smoothness of vehicle control in automatic driving, but also reduce the risk of the vehicle deviating from the predetermined trajectory.
- the embodiment of the present invention acquires, in real time, the vehicle driving state information and an initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information; according to the initial image, a target top view corresponding to the initial image is calculated by a nonlinear difference correction algorithm; the target top view is input into a preset deep learning model, and the pixel points of the target top view input into the preset deep learning model are classified to obtain a partition image, the partition image including a drivable-area image and a non-drivable-area image; the partition image is scanned to identify the drivable area of the vehicle; a path trajectory corresponding to the vehicle driving state information is generated based on the drivable area, where the path trajectory is one of the preset driving paths; using the technical solutions provided in the embodiments of this specification, the vehicle performs automatic learning while driving on the preset driving path to obtain the path trajectory, so that vehicles can subsequently plan paths automatically during automatic driving.
- FIG. 12 is a schematic structural diagram of a path construction apparatus provided by an embodiment of the present invention; specifically, the apparatus includes:
- the first acquisition module 110 is configured to acquire, in real time, vehicle driving state information and an initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information when driving on a preset driving path;
- the target top view acquisition module 120 is configured to obtain, according to the initial image, a target top view corresponding to the initial image through a nonlinear difference correction algorithm;
- the partition image acquisition module 130 is configured to input the target top view into a preset deep learning model, classify the pixels of the target top view input into the preset deep learning model, and obtain a partition image, where the partition image includes a drivable-area image and a non-drivable-area image;
- an identification module 140 configured to scan the partition image to identify the drivable area of the vehicle
- the path trajectory generation module 150 is configured to generate a path trajectory corresponding to the vehicle driving state information based on the drivable area, where the path trajectory is one of the preset driving paths.
- a map construction module is further included, configured to construct a map corresponding to the preset driving path based on the path trajectory.
- the first obtaining module 110 includes:
- a first acquiring unit configured to acquire, in real time, vehicle driving state information during the process of the vehicle driving on the preset driving path, where the vehicle driving state information includes the driving strategy of the vehicle on the preset driving path and the driver's driving habits;
- the second acquiring unit is configured to acquire, in real time, an initial image of the surrounding environment of the preset driving path during the process of the vehicle traveling on the preset driving path according to the driving strategy and the driving habit of the driver.
- the first obtaining unit includes:
- a first acquisition subunit used to acquire the vehicle speed and steering wheel angle in real time
- a first determination subunit configured to determine the forward mileage and heading angle of the vehicle according to the speed of the vehicle and the steering wheel angle
- a second determination subunit configured to determine a driving strategy of the vehicle according to the forward mileage and the heading angle of the vehicle, where the driving strategy of the vehicle includes the forward mileage on the drivable road and whether to turn at the drivable intersection.
- the first obtaining unit further includes:
- a second acquisition subunit configured to acquire in real time the running data of the vehicle during the driving process of the preset driving route
- a third acquiring subunit configured to preprocess the running data of the vehicle to acquire target running data
- a feature extraction subunit used for inputting the target operating data into a recurrent neural network, and extracting features of the target operating data from the recurrent neural network;
- a driver driving habit determination subunit used to input the features into a fully connected network and predict the driver's driving habits during the process of the vehicle driving on the preset driving path; the driving habits include the driving speed on drivable roads and the steering angle at drivable intersections.
- the second acquisition module is configured to acquire, in real time, the vehicle driving state information and the initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information when driving on the preset driving path again;
- a target top view acquisition module configured to calculate and obtain a target top view corresponding to the initial image through a nonlinear difference correction algorithm according to the initial image
- a partition image acquisition module configured to input the target top view into a preset deep learning model, classify the pixel points of the target top view input into the preset deep learning model, and obtain a partition image, where the partition image includes a drivable-area image and a non-drivable-area image;
- an identification module for scanning the partition image to identify the drivable area of the vehicle
- a current path trajectory generation module configured to generate a current path trajectory corresponding to the vehicle driving state information based on the drivable area, where the path trajectory is one of the preset driving paths;
- a map reconstruction module configured to perform multi-trajectory fusion on the current path trajectory and the previously obtained path trajectory, and reconstruct the underground garage map.
- a coincidence degree judgment module used for judging whether the coincidence degree of the current path trajectory and the previously obtained path trajectory is greater than or equal to a preset first threshold
- a trajectory fusion module configured to fuse the current path trajectory with the previously obtained path trajectory if the coincidence degree of the current path trajectory and the previously obtained path trajectory is greater than or equal to a preset first threshold .
- a matching degree judgment module used for judging, if the coincidence degree of the current path trajectory and the previously obtained path trajectory is less than the preset first threshold, whether the matching degree of the current path trajectory and the preset driving path is less than the matching degree of the previously obtained path trajectory and the preset driving path;
- a current path trajectory reconstruction module configured to regenerate the current path trajectory if so.
- the target top view acquisition module 120 includes:
- a target image acquisition unit configured to obtain a target image based on the initial image, where the target image includes a top view of a region image that overlaps with the region where the target top view is located;
- a number acquisition unit used to acquire the number of times the target image appears
- a judgment unit configured to judge whether the number of times the target image appears is greater than or equal to a preset second threshold
- a feature point extraction unit configured to extract the feature points of the region images in each of the target images if so;
- the feature point matching unit is used for matching the feature points of each of the regional images to reconstruct the top view of the target.
- the target top view acquisition module 120 further includes:
- a corresponding relationship obtaining unit configured to obtain, based on the nonlinear difference correction algorithm, a corresponding relationship between the top view of the initial image and the initial image, where the corresponding relationship includes the coordinate points corresponding between the top view of the initial image and the initial image;
- a target coordinate point acquiring unit configured to acquire a target coordinate point from the initial image based on the corresponding relationship
- a target top view construction unit configured to construct a target top view corresponding to the initial image based on the target coordinate points.
- the path trajectory generation module 150 includes:
- a first determining unit configured to determine, based on the drivable area, drivable roads, drivable intersections in the drivable area, and distribution of the drivable roads and the drivable intersections;
- a path trajectory generating unit configured to generate a path trajectory corresponding to the vehicle driving state information based on the distribution of the drivable roads and the drivable intersections.
- a scanning unit configured to scan the partition image by using a grid of preset size to obtain the drivable area and the scanning area of the vehicle;
- An adjustment unit configured to adjust the drivable area based on the scan area, and reconstruct the drivable area.
- the adjustment unit includes:
- a first adjustment subunit configured to perform an expansion operation on the drivable area based on the scanning area to obtain an expanded area
- the second adjustment sub-unit is configured to perform an erosion operation on the expansion area based on the scanning area to reconstruct the drivable area.
- the first determining unit includes:
- a first identification subunit used to input the drivable area into a road recognition model and identify the drivable roads in the drivable area and the information of the drivable roads, where the information of a drivable road includes the width and length of the drivable road;
- a second identification subunit configured to input the drivable area into an intersection identification model, and identify the drivable intersection in the drivable area and the type of the drivable intersection;
- a third determination subunit configured to determine the distribution of the drivable roads and the drivable intersections in the drivable area based on the drivable area, the information of the drivable roads in the drivable area, and the types of the drivable intersections.
- An embodiment of the present invention provides a path construction terminal; the terminal includes a processor and a memory, the memory stores at least one instruction or at least one piece of program, and the at least one instruction or the at least one piece of program is loaded and executed by the processor to implement the path construction method described in the above method embodiments.
- the memory can be used to store software programs and modules, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory.
- the memory may mainly include a stored program area and a stored data area, wherein the stored program area may store an operating system, application programs required for functions, etc.; the stored data area may store data created according to the use of the device, and the like.
- the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory may also include a memory controller to provide processor access to the memory.
- FIG. 13 is a schematic structural diagram of a path construction terminal according to an embodiment of the present invention.
- the internal structure of the path construction terminal may include, but is not limited to, a processor, a network interface, and a memory, where the processor, the network interface, and the memory in the path construction terminal can be connected through a bus or in other ways; in FIG. 13, as shown in this embodiment of the specification, connection through a bus is taken as an example.
- the processor (or called CPU (Central Processing Unit, central processing unit)) is the computing core and the control core of the path construction terminal.
- Optional network interfaces may include standard wired interfaces, wireless interfaces (such as WI-FI, mobile communication interfaces, etc.).
- Memory is a storage device in the path construction terminal, used to store programs and data. It can be understood that the memory here can be a high-speed RAM storage device or a non-volatile storage device (non-volatile memory), such as at least one disk storage device; optionally, it may also be at least one storage device located far away from the aforementioned processor.
- the memory provides storage space, and the storage space stores the operating system of the path construction terminal, which may include but is not limited to: a Windows system (an operating system), Linux (an operating system), etc.; this is not limited in the present invention;
- one or more instructions suitable for being loaded and executed by the processor are also stored in the storage space, and these instructions may be one or more computer programs (including program codes).
- the processor loads and executes one or more instructions stored in the memory to implement the path construction method provided by the above method embodiments.
- Embodiments of the present invention further provide a computer-readable storage medium, where the storage medium can be set in a path construction terminal to store at least one instruction, at least one piece of program, a code set, or an instruction set related to implementing the path construction method in the method embodiments; the at least one instruction, the at least one piece of program, the code set, or the instruction set can be loaded and executed by the processor of the electronic device to implement the path construction method provided by the above method embodiments.
- The above-mentioned storage medium may include but is not limited to various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Abstract
Description
Claims (18)
- A path construction method, characterized in that the method comprises: when driving on a preset driving path, acquiring in real time vehicle driving state information and an initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information; according to the initial image, calculating a target top view corresponding to the initial image through a nonlinear difference correction algorithm; inputting the target top view into a preset deep learning model, and classifying the pixel points of the target top view to obtain a partition image, the partition image comprising a drivable-area image and a non-drivable-area image; scanning the partition image to identify a drivable area of the vehicle; and generating, based on the drivable area, a path trajectory corresponding to the vehicle driving state information, the path trajectory being one path of the preset driving path.
- The path construction method according to claim 1, characterized in that after the generating, based on the drivable area, the path trajectory corresponding to the vehicle driving state information, the method further comprises: constructing, based on the path trajectory, a map corresponding to the preset driving path.
- The path construction method according to claim 1, characterized in that the acquiring in real time vehicle driving state information and the initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information comprises: acquiring in real time the vehicle driving state information during the process of the vehicle driving on the preset driving path, the vehicle driving state information comprising a driving strategy of the vehicle during driving on the preset driving path and driving habits of a driver; and acquiring in real time, according to the driving strategy and the driving habits of the driver, the initial image of the surrounding environment of the preset driving path during the process of the vehicle driving on the preset driving path.
- The path construction method according to claim 3, characterized in that the drivable area comprises drivable roads and drivable intersections, and acquiring the driving strategy of the vehicle during driving on the preset driving path comprises: acquiring in real time a vehicle speed and a steering wheel angle of the vehicle; determining a forward mileage and a heading angle of the vehicle according to the vehicle speed and the steering wheel angle; and determining the driving strategy of the vehicle according to the forward mileage and the heading angle of the vehicle, the driving strategy of the vehicle comprising the forward mileage on a drivable road and whether to turn at a drivable intersection.
- The path construction method according to claim 3, characterized in that the drivable area comprises drivable roads and drivable intersections, and acquiring the driving habits of the driver during the process of the vehicle driving on the preset driving path comprises: acquiring in real time running data of the vehicle during driving on the preset driving path; preprocessing the running data of the vehicle to acquire target running data; inputting the target running data into a recurrent neural network, and extracting features of the target running data from the recurrent neural network; and inputting the features into a fully connected network to predict the driving habits of the driver during the process of the vehicle driving on the preset driving path, the driving habits comprising a driving speed on a drivable road and a steering angle at a drivable intersection.
- The path construction method according to claim 1, characterized in that after the generating, based on the drivable area, the path trajectory corresponding to the vehicle driving state information, the method further comprises: when driving on the preset driving path again, acquiring in real time vehicle driving state information and an initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information; according to the initial image, calculating a target top view corresponding to the initial image through the nonlinear difference correction algorithm; inputting the target top view into the preset deep learning model, and classifying the pixel points of the target top view input into the preset deep learning model to obtain a partition image, the partition image comprising a drivable-area image and a non-drivable-area image; scanning the partition image to identify the drivable area of the vehicle; and generating, based on the drivable area, a current path trajectory corresponding to the vehicle driving state information, the path trajectory being one path of the preset driving path.
- The path construction method according to claim 6, characterized in that after the generating, based on the drivable area, the current path trajectory corresponding to the vehicle driving state information, the method further comprises: performing multi-trajectory fusion on the current path trajectory and the previously obtained path trajectory, and reconstructing the map corresponding to the preset driving path.
- The path construction method according to claim 7, characterized in that before the performing multi-trajectory fusion on the current path trajectory and the previously obtained path trajectory and reconstructing the map corresponding to the preset driving path, the method further comprises: judging whether a coincidence degree of the current path trajectory and the previously obtained path trajectory is greater than or equal to a preset first threshold; and if the coincidence degree of the current path trajectory and the previously obtained path trajectory is greater than or equal to the preset first threshold, fusing the current path trajectory with the previously obtained path trajectory.
- The path construction method according to claim 8, characterized in that the method further comprises: if the coincidence degree of the current path trajectory and the previously obtained path trajectory is less than the preset first threshold, judging whether a matching degree of the current path trajectory and the preset driving path is less than a matching degree of the previously obtained path trajectory and the preset driving path; and if so, regenerating the current path trajectory.
- The path construction method according to claim 1, characterized in that the calculating, according to the initial image, the target top view corresponding to the initial image through the nonlinear difference correction algorithm comprises: obtaining target images based on the initial image, a target image containing a top view of a region image that coincides with the region where the target top view is located; acquiring the number of times the target image appears; judging whether the number of times the target image appears is greater than or equal to a preset second threshold; and if so, extracting feature points of the region image in each target image, and matching the feature points of each region image to reconstruct the target top view.
- The path construction method according to claim 1, characterized in that the calculating, according to the initial image, the target top view corresponding to the initial image through the nonlinear difference correction algorithm further comprises: obtaining, based on the nonlinear difference correction algorithm, a corresponding relationship between a top view of the initial image and the initial image, the corresponding relationship comprising the coordinate points corresponding between the top view of the initial image and the initial image; acquiring target coordinate points from the initial image based on the corresponding relationship; and constructing the target top view corresponding to the initial image based on the target coordinate points.
- The path construction method according to claim 1, characterized in that the drivable area comprises drivable roads and drivable intersections, and the generating, based on the drivable area, the path trajectory corresponding to the vehicle driving state information comprises: determining, based on the drivable area, the drivable roads and drivable intersections in the drivable area and the distribution of the drivable roads and the drivable intersections; and generating, based on the distribution of the drivable roads and the drivable intersections, the path trajectory corresponding to the vehicle driving state information.
- The path construction method according to claim 12, characterized in that before the identifying the drivable roads and drivable intersections in the drivable area and the distribution of the drivable roads and the drivable intersections, the method further comprises: scanning the partition image with a grid of a preset size to obtain the drivable area of the vehicle and a scan area; and adjusting the drivable area based on the scan area to reconstruct the drivable area.
- The path construction method according to claim 13, characterized in that the adjusting the drivable area based on the scan area to reconstruct the drivable area comprises: performing an expansion operation on the drivable area based on the scan area to obtain an expanded area; and performing an erosion operation on the expanded area based on the scan area to reconstruct the drivable area.
- The path construction method according to claim 12, characterized in that identifying the drivable roads and drivable intersections in the drivable area and the distribution of the drivable roads and the drivable intersections comprises: inputting the drivable area into a road recognition model to identify the drivable roads in the drivable area and information of the drivable roads, the information of a drivable road comprising the width and length of the drivable road; inputting the drivable area into an intersection identification model to identify the drivable intersections in the drivable area and the types of the drivable intersections; and determining the distribution of the drivable roads and the drivable intersections in the drivable area based on the drivable area, the information of the drivable roads in the drivable area, and the types of the drivable intersections.
- A path construction apparatus, characterized in that the apparatus comprises: a first acquisition module, configured to acquire in real time, when driving on a preset driving path, vehicle driving state information and an initial image of the surrounding environment of the preset driving path corresponding to the vehicle driving state information; a target top view acquisition module, configured to calculate, according to the initial image, a target top view corresponding to the initial image through a nonlinear difference correction algorithm; a partition image acquisition module, configured to input the target top view into a preset deep learning model and classify the pixel points of the target top view input into the preset deep learning model to obtain a partition image, the partition image comprising a drivable-area image and a non-drivable-area image; an identification module, configured to scan the partition image to identify a drivable area of the vehicle; and a path trajectory generation module, configured to generate, based on the drivable area, a path trajectory corresponding to the vehicle driving state information, the path trajectory being one path of the preset driving path.
- A path construction terminal, characterized in that the terminal comprises a processor and a memory, the memory storing at least one instruction or at least one piece of program, and the at least one instruction or the at least one piece of program is loaded and executed by the processor to implement the path construction method according to any one of claims 1 to 15.
- A computer-readable storage medium, characterized in that the storage medium stores at least one instruction or at least one piece of program, and the at least one instruction or the at least one piece of program is loaded and executed by a processor to implement the path construction method according to any one of claims 1 to 15.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/137305 WO2022165614A1 (zh) | 2021-02-08 | 2021-02-08 | 一种路径构建方法、装置、终端及存储介质 |
CN202080108019.1A CN117015814A (zh) | 2021-02-08 | 2021-02-08 | 一种路径构建方法、装置、终端及存储介质 |
EP20968180.8A EP4296888A1 (en) | 2021-02-08 | 2021-02-08 | Path construction method and apparatus, terminal, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/137305 WO2022165614A1 (zh) | 2021-02-08 | 2021-02-08 | 一种路径构建方法、装置、终端及存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022165614A1 true WO2022165614A1 (zh) | 2022-08-11 |
Family
ID=82742572
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/137305 WO2022165614A1 (zh) | 2021-02-08 | 2021-02-08 | 一种路径构建方法、装置、终端及存储介质 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4296888A1 (zh) |
CN (1) | CN117015814A (zh) |
WO (1) | WO2022165614A1 (zh) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6025790A (en) * | 1997-08-04 | 2000-02-15 | Fuji Jukogyo Kabushiki Kaisha | Position recognizing system of autonomous running vehicle |
CN108388641A (zh) * | 2018-02-27 | 2018-08-10 | 广东方纬科技有限公司 | 一种基于深度学习的交通设施地图生成方法与系统 |
CN111325799A (zh) * | 2018-12-16 | 2020-06-23 | 北京初速度科技有限公司 | 一种大范围高精度的静态环视自动标定图案及系统 |
CN111753639A (zh) * | 2020-05-06 | 2020-10-09 | 上海欧菲智能车联科技有限公司 | 感知地图生成方法、装置、计算机设备和存储介质 |
CN112212872A (zh) * | 2020-10-19 | 2021-01-12 | 合肥工业大学 | 基于激光雷达和导航地图的端到端自动驾驶方法及系统 |
CN112270306A (zh) * | 2020-11-17 | 2021-01-26 | 中国人民解放军军事科学院国防科技创新研究院 | 一种基于拓扑路网的无人车轨迹预测与导航方法 |
- 2021-02-08 EP EP20968180.8A patent/EP4296888A1/en active Pending
- 2021-02-08 WO PCT/CN2020/137305 patent/WO2022165614A1/zh active Application Filing
- 2021-02-08 CN CN202080108019.1A patent/CN117015814A/zh active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6025790A (en) * | 1997-08-04 | 2000-02-15 | Fuji Jukogyo Kabushiki Kaisha | Position recognizing system of autonomous running vehicle |
CN108388641A (zh) * | 2018-02-27 | 2018-08-10 | 广东方纬科技有限公司 | Deep-learning-based traffic facility map generation method and system |
CN111325799A (zh) * | 2018-12-16 | 2020-06-23 | 北京初速度科技有限公司 | Large-range high-precision static surround-view automatic calibration pattern and system |
CN111753639A (zh) * | 2020-05-06 | 2020-10-09 | 上海欧菲智能车联科技有限公司 | Perception map generation method and apparatus, computer device, and storage medium |
CN112212872A (zh) * | 2020-10-19 | 2021-01-12 | 合肥工业大学 | End-to-end autonomous driving method and system based on lidar and navigation map |
CN112270306A (zh) * | 2020-11-17 | 2021-01-26 | 中国人民解放军军事科学院国防科技创新研究院 | Unmanned-vehicle trajectory prediction and navigation method based on a topological road network |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117234220A (zh) * | 2023-11-14 | 2023-12-15 | 中国市政工程西南设计研究总院有限公司 | PRT intelligent vehicle driving control method and system |
CN117234220B (zh) * | 2023-11-14 | 2024-03-01 | 中国市政工程西南设计研究总院有限公司 | PRT intelligent vehicle driving control method and system |
CN117356546A (zh) * | 2023-12-01 | 2024-01-09 | 南京禄口国际机场空港科技有限公司 | Control method and system for a spray vehicle for airport lawns, and storage medium |
CN117356546B (zh) * | 2023-12-01 | 2024-02-13 | 南京禄口国际机场空港科技有限公司 | Control method and system for a spray vehicle for airport lawns, and storage medium |
CN117495847A (zh) * | 2023-12-27 | 2024-02-02 | 安徽蔚来智驾科技有限公司 | Intersection detection method, readable storage medium, and intelligent device |
CN117495847B (zh) * | 2023-12-27 | 2024-03-19 | 安徽蔚来智驾科技有限公司 | Intersection detection method, readable storage medium, and intelligent device |
CN117870713A (zh) * | 2024-03-11 | 2024-04-12 | 武汉视普新科技有限公司 | Path planning method and system based on big-data vehicle-mounted images |
Also Published As
Publication number | Publication date |
---|---|
CN117015814A (zh) | 2023-11-07 |
EP4296888A1 (en) | 2023-12-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022165614A1 (zh) | Path construction method and apparatus, terminal, and storage medium | |
CN111874006B (zh) | Route planning processing method and apparatus | |
Cultrera et al. | Explaining autonomous driving by learning end-to-end visual attention | |
US20220165043A1 (en) | Photorealistic Image Simulation with Geometry-Aware Composition | |
DE102019119162A1 (de) | Pose estimation |
Wang et al. | End-to-end autonomous driving: An angle branched network approach | |
DE102020113848A1 (de) | Eccentricity image fusion |
CN108107897B (zh) | Real-time sensor control method and apparatus |
JP6778842B2 (ja) | Image processing method and system, storage medium, and computing device |
US20160253567A1 (en) | Situation analysis for a driver assistance system | |
DE102022114201A1 (de) | Neural network for object detection and tracking |
DE102021101270A1 (de) | Training a neural network of a vehicle |
US11465620B1 (en) | Lane generation | |
CN113479105A (zh) | Intelligent charging method and intelligent charging station based on autonomous vehicles |
CN111210411B (zh) | Method for detecting vanishing points in images, detection model training method, and electronic device |
DE102021109389A1 (de) | Virtual lane estimation using a recursive self-organizing map |
CN111830949B (zh) | Autonomous vehicle control method and apparatus, computer device, and storage medium |
Wang et al. | Bevgpt: Generative pre-trained large model for autonomous driving prediction, decision-making, and planning | |
Holder et al. | Learning to drive: End-to-end off-road path prediction | |
WO2023192397A1 (en) | Capturing and simulating radar data for autonomous driving systems | |
CN108363387B (zh) | Sensor control method and apparatus |
CN114771510A (zh) | Roadmap-based parking method, parking system, and electronic device |
US20240133696A1 (en) | Path construction method and apparatus, terminal, and storage medium | |
CN111077893A (zh) | Navigation method based on multiple vanishing points, electronic device, and storage medium |
CN114620059B (zh) | Autonomous driving method and system, and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20968180, Country of ref document: EP, Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase | Ref document number: 202080108019.1, Country of ref document: CN |
WWE | Wipo information: entry into national phase | Ref document number: 18276332, Country of ref document: US |
NENP | Non-entry into the national phase | Ref country code: DE |
ENP | Entry into the national phase | Ref document number: 2020968180, Country of ref document: EP, Effective date: 20230908 |