WO2021017211A1 - Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal


Info

Publication number
WO2021017211A1
WO2021017211A1 (PCT/CN2019/113488)
Authority
WO
WIPO (PCT)
Prior art keywords
map
point
coordinate system
pose
edge feature
Application number
PCT/CN2019/113488
Other languages
French (fr)
Chinese (zh)
Inventor
李天威
徐抗
刘一龙
童哲航
Original Assignee
魔门塔(苏州)科技有限公司
北京初速度科技有限公司
Application filed by 魔门塔(苏州)科技有限公司 and 北京初速度科技有限公司
Publication of WO2021017211A1

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00: Image analysis
                    • G06T 7/70: Determining position or orientation of objects or cameras
                        • G06T 7/73: using feature-based methods
                    • G06T 7/10: Segmentation; Edge detection
                        • G06T 7/13: Edge detection
                    • G06T 7/20: Analysis of motion
                        • G06T 7/246: using feature-based methods, e.g. the tracking of corners or segments
                • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
                    • G06T 17/05: Geographic models
                • G06T 2200/00: Indexing scheme for image data processing or generation, in general
                    • G06T 2200/08: involving all processing steps from image acquisition to 3D model generation
                • G06T 2207/00: Indexing scheme for image analysis or image enhancement
                    • G06T 2207/30: Subject of image; Context of image processing
                        • G06T 2207/30248: Vehicle exterior or interior
        • G01: MEASURING; TESTING
            • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
                • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
                    • G01C 21/20: Instruments for performing navigational calculations
                    • G01C 21/26: specially adapted for navigation in a road network
                        • G01C 21/28: with correlation of data from several navigational instruments
                            • G01C 21/30: Map- or contour-matching

Definitions

  • the present invention relates to the technical field of intelligent driving, in particular to a vision-based vehicle positioning method, device and vehicle-mounted terminal.
  • positioning the vehicle is an important part of intelligent driving.
  • the vehicle's pose can be determined according to the satellite positioning system.
  • the positioning can be performed based on the visual positioning.
  • Vision-based positioning usually relies on matching the semantic information of the road image collected by the camera device against the semantic information in a high-precision map.
  • the semantic information in the high-precision map is modeled based on common landmarks on the road. Markers generally include lane lines on the ground, ground marking lines, traffic signs, and street light poles.
  • the above-mentioned vision-based positioning method can effectively determine the vehicle positioning pose.
  • when the markers in the scene are scarce or absent, it is difficult for the high-precision map to provide enough effective information for visual positioning; likewise, when the markers cannot be fully matched against the high-precision map because of occlusion or aging, visual positioning may fail. Both cases reduce the effectiveness of visual positioning.
  • the invention provides a vision-based vehicle positioning method, device and vehicle-mounted terminal to improve the effectiveness of vehicle positioning based on vision.
  • the specific technical solution is as follows.
  • an embodiment of the present invention discloses a vision-based vehicle positioning method, including:
  • acquiring a road image collected by a camera device, and determining an initial positioning pose corresponding to the road image according to data collected by a motion detection device; wherein the initial positioning pose is a pose in the world coordinate system in which a preset map is located;
  • determining, from the preset map according to the initial positioning pose, a target map point corresponding to the road image; wherein each map point in the preset map is obtained by performing three-dimensional reconstruction and selection on points in an edge feature map of a sample road image in advance;
  • extracting an edge feature map of the road image according to a preset edge strength; determining, according to the initial positioning pose, a mapping difference between the target map point and points in the edge feature map, and determining a vehicle positioning pose according to the mapping difference.
  • the step of determining the mapping difference between the target map point and the point in the edge feature map according to the initial positioning pose, and determining the vehicle positioning pose according to the mapping difference includes:
  • taking the initial positioning pose as an initial value of an estimated pose; mapping, according to the value of the estimated pose, the target map point and the points in the edge feature map to the same coordinate system, and determining the mapping difference between the target map point and the points in the edge feature map mapped to the same coordinate system;
  • when the mapping difference is greater than a preset difference threshold, modifying the value of the estimated pose according to the mapping difference, and returning to the step of mapping, according to the value of the estimated pose, the target map point and the points in the edge feature map to the same coordinate system;
  • when the mapping difference is not greater than the preset difference threshold, determining the vehicle positioning pose according to the current value of the estimated pose.
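The iterative refinement described above can be sketched as follows. This is only a minimal illustration of the loop, not the patented implementation: the estimated pose is reduced to a 2-D translation, and the mapping difference is the mean distance between mapped target map points and their corresponding edge points; the function name and threshold are assumptions.

```python
import numpy as np

def refine_pose(map_pts, edge_pts, pose0, thresh=1e-3, max_iter=50):
    """Iteratively refine an estimated pose (here a 2-D translation for
    illustration) until the mean mapping difference drops below `thresh`."""
    pose = pose0.astype(float).copy()
    for _ in range(max_iter):
        mapped = map_pts + pose            # map target map points with the current estimate
        residuals = edge_pts - mapped      # per-point mapping difference
        err = np.linalg.norm(residuals, axis=1).mean()
        if err <= thresh:                  # difference small enough: accept the pose
            break
        pose += residuals.mean(axis=0)     # modify the estimate and re-map
    return pose, err
```

In the real method the estimated pose is a full 6-DoF transform and the update would come from a nonlinear least-squares step, but the stop/modify/re-map structure is the same.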
  • the step of mapping the target map point and the points in the edge feature map to the same coordinate system according to the value of the estimated pose, and determining the mapping difference between the target map point and the points in the edge feature map mapped to the same coordinate system, includes:
  • determining a conversion matrix between the world coordinate system and the camera coordinate system according to the value of the estimated pose;
  • mapping, according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, the target map point to the image coordinate system to obtain a first mapping position of the target map point, and calculating the projection difference between the first mapping position and the positions of the points in the edge feature map in the image coordinate system; wherein the camera coordinate system is the three-dimensional coordinate system in which the camera device is located, and the image coordinate system is the coordinate system in which the road image is located;
  • or, determining the conversion matrix between the world coordinate system and the camera coordinate system according to the value of the estimated pose, and mapping, according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, the points in the edge feature map to the world coordinate system to obtain a second mapping position of the points in the edge feature map, and calculating the projection difference between the second mapping position and the position of the target map point in the world coordinate system.
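The first option, mapping target map points into the image coordinate system, can be sketched with a standard pinhole model. The 4x4 world-to-camera conversion matrix `T_cw` and intrinsic matrix `K` are assumed inputs derived from the estimated pose and the camera calibration; this is a generic projection sketch, not the patent's exact formulation.

```python
import numpy as np

def project_to_image(pts_w, T_cw, K):
    """Project world-frame map points into the image coordinate system:
    world -> camera via the 4x4 transform T_cw, then pinhole projection
    with the 3x3 intrinsic matrix K."""
    pts_h = np.hstack([pts_w, np.ones((len(pts_w), 1))])  # homogeneous coords
    pts_c = (T_cw @ pts_h.T).T[:, :3]                     # camera frame
    uv = (K @ pts_c.T).T
    return uv[:, :2] / uv[:, 2:3]                         # perspective divide
```

The first mapping positions returned here would then be compared against edge-point positions to form the projection difference.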
  • the step of determining a target map point corresponding to the road image from the preset map according to the initial positioning pose includes:
  • determining candidate map points from the preset map according to the initial positioning pose, and filtering, from each candidate map point, the map points within the collection range of the camera device to obtain a target map point corresponding to the road image.
  • the step of filtering map points within the collection range of the camera device from each candidate map point to obtain a target map point corresponding to the road image includes:
  • mapping each candidate map point to the camera coordinate system according to the initial positioning pose; wherein the camera coordinate system is the three-dimensional coordinate system in which the camera device is located;
  • filtering, from each candidate map point, the map points that fall within the collection range of the camera device to obtain a target map point corresponding to the road image.
  • the determined candidate map points include the coordinate positions and normal vectors of the candidate map points; the step of filtering map points within the collection range of the camera device from each candidate map point to obtain a target map point corresponding to the road image includes:
  • filtering, from each candidate map point according to the angle between the viewing direction of the camera device and the normal vector of the candidate map point, the map points within the collection range of the camera device to obtain a target map point corresponding to the road image.
  • each map point in the preset map is constructed in the following manner:
  • extracting a sample edge feature map of a sample road image according to a preset edge strength; performing three-dimensional reconstruction on the points of the sample edge feature map to obtain their positions in the world coordinate system; selecting map points from the points of the sample edge feature map according to a preset point density, and adding the position information of each map point in the world coordinate system to the preset map.
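Selecting map points "according to a preset point density" can be sketched as grid-based downsampling of the reconstructed edge points: keep one representative point per grid cell. The cell size and function name are illustrative assumptions, not values from the patent.

```python
import numpy as np

def build_map_points(edge_pts_world, cell=0.5):
    """Select map points from 3-D reconstructed edge points at a preset
    density: keep the first point falling in each `cell`-sized grid cell."""
    keys = np.floor(edge_pts_world / cell).astype(int)   # grid cell index per point
    _, idx = np.unique(keys, axis=0, return_index=True)  # first point in each cell
    return edge_pts_world[np.sort(idx)]
```

A larger `cell` gives a sparser map; the patent leaves the density as a preset parameter.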
  • an embodiment of the present invention discloses a vision-based vehicle positioning device, including:
  • the road image acquisition module is configured to acquire the road image collected by the camera device;
  • the initial pose determination module is configured to determine the initial positioning pose corresponding to the road image according to the data collected by the motion detection device; wherein the initial positioning pose is the pose in the world coordinate system in which the preset map is located;
  • the map point determination module is configured to determine a target map point corresponding to the road image from the preset map according to the initial positioning pose; wherein each map point in the preset map is obtained by performing three-dimensional reconstruction and selection on points in the edge feature map of a sample road image in advance;
  • the edge feature extraction module is configured to extract the edge feature map of the road image according to a preset edge strength;
  • the vehicle pose determination module is configured to determine the mapping difference between the target map point and the points in the edge feature map according to the initial positioning pose, and to determine the vehicle positioning pose according to the mapping difference.
  • the initial pose determination module is specifically configured as:
  • taking the initial positioning pose as an initial value of an estimated pose; mapping, according to the value of the estimated pose, the target map point and the points in the edge feature map to the same coordinate system, and determining the mapping difference between the target map point and the points in the edge feature map mapped to the same coordinate system;
  • when the mapping difference is greater than a preset difference threshold, modifying the value of the estimated pose according to the mapping difference, and returning to the operation of mapping, according to the value of the estimated pose, the target map point and the points in the edge feature map to the same coordinate system;
  • when the mapping difference is not greater than the preset difference threshold, determining the vehicle positioning pose according to the current value of the estimated pose.
  • the operation in which the initial pose determination module maps the target map point and the points in the edge feature map to the same coordinate system according to the value of the estimated pose, and determines the mapping difference between the target map point and the points in the edge feature map mapped to the same coordinate system, includes:
  • determining a conversion matrix between the world coordinate system and the camera coordinate system according to the value of the estimated pose;
  • mapping, according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, the target map point to the image coordinate system to obtain a first mapping position of the target map point, and calculating the projection difference between the first mapping position and the positions of the points in the edge feature map in the image coordinate system; wherein the camera coordinate system is the three-dimensional coordinate system in which the camera device is located, and the image coordinate system is the coordinate system in which the road image is located;
  • or, determining the conversion matrix between the world coordinate system and the camera coordinate system according to the value of the estimated pose, and mapping, according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, the points in the edge feature map to the world coordinate system to obtain a second mapping position of the points in the edge feature map, and calculating the projection difference between the second mapping position and the position of the target map point in the world coordinate system.
  • the map point determination module is specifically configured to: determine candidate map points from the preset map according to the initial positioning pose, and filter, from each candidate map point, the map points within the collection range of the camera device to obtain a target map point corresponding to the road image.
  • when the map point determination module filters map points within the collection range of the camera device from each candidate map point to obtain the target map point corresponding to the road image, it is configured to: map each candidate map point to the camera coordinate system according to the initial positioning pose, wherein the camera coordinate system is the three-dimensional coordinate system in which the camera device is located; and filter, from each candidate map point, the map points that fall within the collection range of the camera device to obtain the target map point corresponding to the road image.
  • the determined candidate map points include the coordinate positions and normal vectors of the candidate map points; when the map point determination module filters map points within the collection range of the camera device from each candidate map point to obtain the target map point corresponding to the road image, it is configured to: filter, from each candidate map point according to the angle between the viewing direction of the camera device and the normal vector of the candidate map point, the map points within the collection range of the camera device to obtain the target map point corresponding to the road image.
  • the device further includes: a map point construction module configured to construct each map point in the preset map by using the following operations:
  • extracting a sample edge feature map of a sample road image according to a preset edge strength; performing three-dimensional reconstruction on the points of the sample edge feature map to obtain their positions in the world coordinate system; selecting map points from the points of the sample edge feature map according to a preset point density, and adding the position information of each map point in the world coordinate system to the preset map.
  • an embodiment of the present invention discloses a vehicle-mounted terminal, including: a processor, a camera device, and a motion detection device; the processor includes:
  • the road image acquisition module is used to acquire the road image collected by the camera device;
  • the initial pose determination module is used to determine the initial positioning pose corresponding to the road image according to the data collected by the motion detection device; wherein the initial positioning pose is the pose in the world coordinate system in which the preset map is located;
  • the map point determination module is configured to determine a target map point corresponding to the road image from a preset map according to the initial positioning pose; wherein each map point in the preset map is obtained by performing three-dimensional reconstruction and selection on points in the edge feature map of a sample road image in advance;
  • the edge feature extraction module is used to extract the edge feature map of the road image according to the preset edge strength;
  • the vehicle pose determination module is configured to determine the mapping difference between the target map point and the points in the edge feature map according to the initial positioning pose, and to determine the vehicle positioning pose according to the mapping difference.
  • the initial pose determination module is specifically used for:
  • taking the initial positioning pose as an initial value of an estimated pose; mapping, according to the value of the estimated pose, the target map point and the points in the edge feature map to the same coordinate system, and determining the mapping difference between the target map point and the points in the edge feature map mapped to the same coordinate system;
  • when the mapping difference is greater than a preset difference threshold, modifying the value of the estimated pose according to the mapping difference, and returning to the operation of mapping, according to the value of the estimated pose, the target map point and the points in the edge feature map to the same coordinate system;
  • when the mapping difference is not greater than the preset difference threshold, determining the vehicle positioning pose according to the current value of the estimated pose.
  • the operation in which the initial pose determination module maps the target map point and the points in the edge feature map to the same coordinate system according to the value of the estimated pose, and determines the mapping difference between the target map point and the points in the edge feature map mapped to the same coordinate system, includes:
  • determining a conversion matrix between the world coordinate system and the camera coordinate system according to the value of the estimated pose;
  • mapping, according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, the target map point to the image coordinate system to obtain a first mapping position of the target map point, and calculating the projection difference between the first mapping position and the positions of the points in the edge feature map in the image coordinate system; wherein the camera coordinate system is the three-dimensional coordinate system in which the camera device is located, and the image coordinate system is the coordinate system in which the road image is located;
  • or, determining the conversion matrix between the world coordinate system and the camera coordinate system according to the value of the estimated pose, and mapping, according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, the points in the edge feature map to the world coordinate system to obtain a second mapping position of the points in the edge feature map, and calculating the projection difference between the second mapping position and the position of the target map point in the world coordinate system.
  • the map point determination module is specifically used to: determine candidate map points from the preset map according to the initial positioning pose, and filter, from each candidate map point, the map points within the collection range of the camera device to obtain a target map point corresponding to the road image.
  • when the map point determination module filters map points within the collection range of the camera device from each candidate map point to obtain the target map point corresponding to the road image, it is configured to: map each candidate map point to the camera coordinate system according to the initial positioning pose, wherein the camera coordinate system is the three-dimensional coordinate system in which the camera device is located; and filter, from each candidate map point, the map points that fall within the collection range of the camera device to obtain the target map point corresponding to the road image.
  • the determined candidate map points include the coordinate positions and normal vectors of the candidate map points; when the map point determination module filters map points within the collection range of the camera device from each candidate map point to obtain the target map point corresponding to the road image, it is configured to: filter, from each candidate map point according to the angle between the viewing direction of the camera device and the normal vector of the candidate map point, the map points within the collection range of the camera device to obtain the target map point corresponding to the road image.
  • the processor further includes: a map point construction module, configured to construct each map point in the preset map by using the following operations:
  • extracting a sample edge feature map of a sample road image according to a preset edge strength; performing three-dimensional reconstruction on the points of the sample edge feature map to obtain their positions in the world coordinate system; selecting map points from the points of the sample edge feature map according to a preset point density, and adding the position information of each map point in the world coordinate system to the preset map.
  • the vision-based vehicle positioning method, device, and vehicle-mounted terminal provided by the embodiments of the present invention can determine the initial positioning pose corresponding to the road image according to the data collected by the motion detection device, and then determine the target map point corresponding to the road image from the preset map according to the initial positioning pose;
  • the vehicle positioning pose is then determined according to the initial positioning pose and the mapping difference between the target map point and the points in the edge feature map of the road image.
  • the map points are obtained by performing three-dimensional reconstruction and selection on points in the edge feature map of a sample road image in advance. Because the edge feature map captures the structured features of the road image, these features are richer and more robust to noise.
  • the embodiments of the present invention can improve the effectiveness of vehicle positioning based on vision.
  • implementing any product or method of the present invention does not necessarily need to achieve all the advantages described above at the same time.
  • the edge feature map captures the structural features in the road image, which are rich and noise-resistant; it is not easily affected by the number of lane lines, traffic signs, street light poles, and other landmarks in the scene, nor by illumination changes across morning, evening, and varying weather, and is therefore more robust. Matching the points in the edge feature map against the map points in the preset map for vehicle positioning can thus improve the effectiveness of vehicle positioning.
  • filtering the map points within the collection range of the camera device as the target map points can exclude, from the preset map, map points that cannot be observed in the road image, which improves the accuracy of the determined vehicle positioning pose.
  • FIG. 1 is a schematic flowchart of a vision-based vehicle positioning method provided by an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of the angle between the viewing direction of the camera device and the normal vector of a map point provided by an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of an architecture of a vision-based vehicle positioning method provided by an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a vision-based vehicle positioning device provided by an embodiment of the present invention.
  • Fig. 5 is a schematic structural diagram of a vehicle-mounted terminal provided by an embodiment of the present invention.
  • the embodiments of the invention disclose a vision-based vehicle positioning method, device, and vehicle-mounted terminal, which can improve the effectiveness of vision-based vehicle positioning.
  • the embodiments of the present invention will be described in detail below.
  • FIG. 1 is a schematic flowchart of a vision-based vehicle positioning method provided by an embodiment of the present invention.
  • the electronic device may be an ordinary computer, a server, or an intelligent terminal device, etc., and may also be an in-vehicle terminal such as an in-vehicle computer or an in-vehicle industrial control computer (IPC).
  • the vehicle-mounted terminal may be installed in a vehicle, and the vehicle refers to a smart vehicle.
  • sensors are installed in the vehicle, including sensors such as camera equipment and motion detection equipment. There may be one or more camera devices provided in the vehicle.
  • the motion detection device may include sensors such as an Inertial Measurement Unit (IMU) and/or a wheel speedometer.
  • the camera device can collect road images at a preset frequency.
  • the road image may include road markers or image data of any other objects within the image collection range of the camera device.
  • the above-mentioned road image may be collected by one camera device installed in the front of the vehicle, or obtained by stitching images collected by multiple cameras installed in the front of the vehicle.
  • the place where the vehicle is located can be outdoors or a parking lot.
  • the road image may be an image of the surrounding environment of the vehicle collected by the camera device when the vehicle is driving on various roads.
  • the road can be any place where vehicles can travel, such as urban roads, rural roads, mountain roads, parking lot roads, etc., and the images collected during the process of entering the parking space can also be included in the road image.
  • S120 Determine the initial positioning pose corresponding to the road image according to the data collected by the motion detection device.
  • the initial positioning pose is the pose in the world coordinate system where the preset map is located.
  • the preset map may be a high-precision map installed in the vehicle.
  • the collection time of the data from the motion detection device and that of the road image from the camera device should correspond to each other; for example, they may be the same moment, or two moments separated by only a short interval.
  • determining the initial positioning pose corresponding to the road image may specifically include: obtaining the previous positioning pose, and determining the initial positioning pose corresponding to the road image based on the previous positioning pose and the data collected by the motion detection device.
  • the previous positioning pose may be the positioning pose of the vehicle determined at the previous moment.
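Propagating the previous positioning pose with motion-detection data can be sketched as simple 2-D dead reckoning from a wheel-speed reading and an IMU yaw rate. The state layout (x, y, yaw) and the constant-velocity assumption over one interval are illustrative, not the patent's formulation.

```python
import numpy as np

def propagate_pose(prev_pose, v, yaw_rate, dt):
    """Predict the initial positioning pose from the previous positioning
    pose plus wheel-speed (v) and IMU yaw-rate increments over dt seconds."""
    x, y, yaw = prev_pose
    x_new = x + v * dt * np.cos(yaw)       # advance along the current heading
    y_new = y + v * dt * np.sin(yaw)
    yaw_new = yaw + yaw_rate * dt          # integrate the yaw rate
    return np.array([x_new, y_new, yaw_new])
```

The predicted pose is only an initial value; the subsequent map-point matching refines it.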
  • this step may further include: determining the initial positioning pose of the vehicle corresponding to the road image according to the data collected by the motion detection device and the result of matching the road features in the road image with the road features in the preset map.
  • matching the road feature in the road image with the road feature in the preset map corresponds to another vision-based positioning method.
  • the motion detection device may also include a global positioning system (Global Positioning System, GPS).
  • when the motion detection device includes GPS, the errors accumulated during positioning based on the IMU and/or the wheel speedometer can be eliminated as much as possible, improving the accuracy of the positioning pose.
  • the initial positioning pose determined in this step is the initial positioning pose of the vehicle at the current moment when the road image is collected, and it is used to determine a more accurate vehicle positioning pose at the current moment.
  • S130 Determine the target map point corresponding to the road image from the preset map according to the initial positioning pose.
  • Determining the target map point corresponding to the road image from the preset map can be understood as determining the location information of the target map point corresponding to the road image from the preset map.
  • the target map point is a map point that may be observed in the road image.
  • map points within a preset range around the position indicated by the initial positioning pose in the preset map can be used as target map points.
  • each map point in the preset map is obtained after three-dimensional reconstruction and selection of points in the edge feature map of the sample road image in advance.
  • the corresponding relationship between each map point and location information is stored in a preset map, and the location information is the location information of the map point in the world coordinate system.
  • the position information of each map point in the world coordinate system includes: the coordinate position of each map point in the world coordinate system and the normal vector information of the map point.
  • the normal vector information of the map point indicates the normal vector of the plane where the map point is located.
  • the coordinate position of the map point can be represented by three coordinates (a, b, c), and the normal vector information of the point can be represented by three parameters (A, B, C).
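A map point carrying both a coordinate position (a, b, c) and a normal vector (A, B, C), together with the normal-vector visibility test suggested by FIG. 2 (the angle between the direction toward the camera and the map point's normal), might be sketched as follows. The 70-degree threshold and the names are assumed values for illustration only.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class MapPoint:
    position: np.ndarray   # (a, b, c) in the world coordinate system
    normal: np.ndarray     # (A, B, C), normal of the plane the point lies on

def visible_to_camera(pt, cam_pos, max_angle_deg=70.0):
    """Keep a map point only if the angle between its normal and the
    direction toward the camera is below an assumed threshold."""
    view = cam_pos - pt.position
    cosang = view @ pt.normal / (np.linalg.norm(view) * np.linalg.norm(pt.normal))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) <= max_angle_deg
```

Points whose surfaces face away from the camera (large angle) are unlikely to be observed in the road image and can be filtered out of the candidate set.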
  • S140 Extract the edge feature map of the road image according to the preset edge strength.
  • an edge in an image refers to the set of pixels at which the gray level exhibits a step change relative to the surrounding pixels.
  • the gray values of the pixels on both sides of the edge points are significantly different.
  • Edge strength can be understood as the magnitude of the edge point gradient.
  • the edge feature map can be understood as data containing edge features consistent with the size of the road image.
  • the edge feature map contains the position information of the edge lines in the road image.
  • the edge line is composed of edge points.
  • the extracted edge feature map is the position information expressed in the image coordinate system where the road image is located.
  • the edge points in the edge feature map are the positions in the image coordinate system.
  • the above extraction of the edge feature map of the road image can be understood as the feature extraction of the road image to obtain the edge feature map.
  • the preset edge strength can be used as a threshold, and edges whose strength exceeds the threshold can be extracted from the road image to obtain the edge feature map. It is also possible to extract edges that are locally strong within a limited region of the road image. For example, for a pillar with a ridge line, the gradient of the ridge may not be large relative to the entire road image, but within the region of the pillar the edge strength of the ridge line is relatively high, so it is also a target of edge extraction.
  • the above two schemes can also be combined.
  • the edge feature map extracted according to the preset edge strength can reflect the structural features in the road image. Specifically, the Canny, Sobel, or LoG (Laplacian of Gaussian) operator can be used to extract the edge feature map of the road image.
  • Gaussian blur can also be applied to the values in the edge feature map to make the processed edge feature map smoother.
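Thresholding edge strength (the gradient magnitude) can be sketched in a few lines; this is a simplified stand-in for the Canny/Sobel/LoG operators named above, not the actual extractor used by the method.

```python
import numpy as np

def edge_feature_map(img, strength_thresh):
    """Extract a binary edge feature map by thresholding the image
    gradient magnitude (edge strength)."""
    gy, gx = np.gradient(img.astype(float))   # per-axis intensity gradients
    strength = np.hypot(gx, gy)               # edge strength = gradient magnitude
    return (strength > strength_thresh).astype(np.uint8)
```

A real implementation would add non-maximum suppression and the local (per-region) thresholding described above, and could smooth the result with a Gaussian blur.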
  • this step may further include: extracting an edge feature map of the road image based on the edge feature extraction model.
  • the edge feature extraction model is obtained by training based on the sample road image and the edge feature map labeled according to the preset edge strength.
  • the edge feature extraction model associates the road image with the corresponding edge feature map.
  • extracting the edge feature map of the road image may include: inputting the road image into the edge feature extraction model, and obtaining the edge feature map of the road image output by the edge feature extraction model.
  • during training, the sample road image is input into the edge feature extraction model; the model extracts a feature vector of the sample road image according to its model parameters and regresses the feature vector to obtain a reference edge feature of the sample road image; the reference edge feature is compared with the labeled edge feature to obtain a difference amount; when the difference amount is greater than a preset difference threshold, the model parameters are adjusted and the process returns to the step of inputting the sample road image into the edge feature extraction model; when the difference amount is not greater than the preset difference threshold, it is determined that training of the edge feature extraction model is complete.
  • the marked edge feature reflects the preset edge strength.
  • the edge features in the sample road image are labeled according to the preset edge strength as the standard.
  • the edge feature extraction model trained by the above-mentioned machine learning method can extract more accurate edge features.
  • the edge feature extraction model can also be called an edge detector.
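The training loop described above (extract, compare with the labeled edge features, adjust parameters until the difference amount falls below the threshold) can be sketched with a toy one-parameter model standing in for the real edge feature extraction model. All names, the learning rate, and the linear "model" are illustrative assumptions.

```python
import numpy as np

def train_edge_extractor(samples, labels, lr=0.5, diff_thresh=1e-3, max_epochs=500):
    """Toy training loop: predict edge features, compare with the labeled
    features, and adjust the single model parameter until the mean
    difference amount is not greater than the threshold."""
    w = 0.0
    for _ in range(max_epochs):
        pred = w * samples                     # "extract" features with current params
        diff = np.abs(pred - labels).mean()    # difference amount vs. labels
        if diff <= diff_thresh:                # small enough: training complete
            break
        w += lr * ((labels - pred) * samples).mean()  # adjust model parameters
    return w, diff
```

A real edge detector would be a neural network trained with backpropagation, but the stop criterion (difference amount vs. threshold) follows the description.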
  • S150 Determine the mapping difference between the target map point and the points in the edge feature map according to the initial positioning pose, and determine the vehicle positioning pose according to the mapping difference.
  • the initial positioning pose can reflect the pose of the vehicle within a certain accuracy range.
  • Therefore, in this embodiment the positioning pose of the vehicle is determined according to the above-mentioned mapping difference.
  • the mapping difference between the target map point and the point in the edge feature map is less than the preset difference threshold.
  • this embodiment can determine the initial positioning pose corresponding to the road image based on the data collected by the motion detection device, and determine the target map point corresponding to the road image from the preset map according to the initial positioning pose.
  • the initial positioning pose and the mapping difference between the target map points and the points in the edge feature map of the road image are used to determine the vehicle positioning pose.
  • The map points in the preset map are obtained by performing three-dimensional reconstruction on, and selecting from, the points in the edge feature map of a sample road image in advance. Because the edge feature map of a road image captures the structured features in the image, these features are richer and more noise-resistant. Even if the markers in the scene are sparse or occluded, the vehicle positioning pose can still be determined through the match between the target map points and the points in the edge feature map. Therefore, this embodiment can improve the effectiveness of vision-based vehicle positioning.
  • the difference between this embodiment and the traditional vision-based vehicle positioning method also includes that the traditional positioning method detects semantic features corresponding to artificial objects such as lane lines, sidewalks, traffic signs, and street light poles in road images. If these semantic features are not present in the scene, vehicle positioning may not be possible.
  • the edge features extracted from the road image in this embodiment are more generalized, including the edge features of man-made objects and natural objects in the scene, and are more adaptable, which makes the vehicle positioning process more effective.
  • Since the present embodiment extracts structured features, it is more robust to changes in illumination caused by time of day, weather, and the like.
  • The traditional method of extracting feature points from the image is not noise-resistant under lighting changes; traditional learning-based methods do not generalize across scenes and require a large amount of training data to ensure the effectiveness of the scheme.
  • the solution of this embodiment can be used as a mapping and positioning solution across morning and evening and/or seasons.
  • In an implementation, step S150, in which the mapping difference between the target map point and the point in the edge feature map is determined according to the initial positioning pose and the vehicle positioning pose is determined according to the mapping difference, includes the following steps 1a to 3a.
  • Step 1a Use the initial positioning pose as the initial value of the estimated pose, and according to the estimated pose value, map the target map point and the point in the edge feature map to the same coordinate system, and determine the mapping to the same coordinate system The mapping difference between the target map point and the point in the edge feature map.
  • the target map point is located in the world coordinate system
  • the point in the edge feature map is located in the image coordinate system.
  • Specifically, the location information of the target map point can be mapped to the image coordinate system, or the point in the edge feature map can be mapped to the world coordinate system.
  • the same coordinate system mentioned above may be an image coordinate system, a world coordinate system, or other coordinate systems.
  • Determining the mapping difference between the target map point mapped to the same coordinate system and the point in the edge feature map can be understood as determining the difference between the location information of the target map point mapped to the same coordinate system and the location information of the point in the edge feature map; this difference is regarded as the mapping difference.
  • the above difference can also be called residual.
  • For example, the pixel coordinate obtained after mapping the target map point to the image coordinate system is u, and I(u) represents the response of the edge feature map at the pixel coordinate u.
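As a sketch of how the edge response I(u) can be read out at a projected, generally sub-pixel coordinate, the following uses bilinear interpolation; the residual form 1 − I(u) assumes edge responses normalized to [0, 1] and is an illustrative choice, not taken from the patent:

```python
import numpy as np

def edge_response(edge_map, u):
    """Bilinearly interpolate the edge map at sub-pixel coordinate u = (x, y)."""
    x, y = u
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    # Weighted sum of the four surrounding pixels.
    return ((1 - dx) * (1 - dy) * edge_map[y0, x0] + dx * (1 - dy) * edge_map[y0, x0 + 1] +
            (1 - dx) * dy * edge_map[y0 + 1, x0] + dx * dy * edge_map[y0 + 1, x0 + 1])

def residual(edge_map, u):
    """Residual is small when the projected map point lands on a strong edge.
    The 1 - I(u) form assumes responses in [0, 1] (an assumption)."""
    return 1.0 - edge_response(edge_map, u)
```

Summing such per-point residuals over all target map points yields the mapping difference that the iterative optimization below drives down.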
  • Step 2a When the mapping difference is greater than the preset difference threshold, modify the estimated pose value according to the mapping difference, and return to perform step 1a according to the estimated pose value to map the target map point and the point in the edge feature map to Steps in the same coordinate system.
  • When the mapping difference is greater than the preset difference threshold, it is considered that the estimated pose is still far from the real pose of the vehicle; the value of the estimated pose can be modified and the iterative process repeated.
  • When modifying the value of the estimated pose according to the mapping difference, the modification can be performed based on the Jacobian matrix, the Hessian matrix, the Gauss-Newton iteration method, or the Levenberg-Marquardt (LM) algorithm.
  • Step 3a When the mapping difference is less than the preset difference threshold, the vehicle positioning pose is determined according to the current value of the estimated pose.
  • When the mapping difference is less than the preset difference threshold, the vehicle positioning pose can be determined. Specifically, the current value of the estimated pose can be directly determined as the vehicle positioning pose, or the vehicle positioning pose can be obtained from the current value of the estimated pose after adjustment.
  • the value of the estimated pose may be modified according to the mapping difference, and step 1a may be returned to, or the vehicle positioning pose may be determined according to the current value of the estimated pose.
  • In summary, the value of the estimated pose is continuously adjusted based on the mapping difference between the target map point and the point in the edge feature map, so that the estimated pose gradually approaches the true value; the vehicle positioning pose is thus solved iteratively, making the solved vehicle positioning pose more accurate.
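Steps 1a to 3a can be sketched as a Gauss-Newton-style loop. The example below simplifies the pose to a pure 2-D translation so each Jacobian block is the identity; `project`, the threshold, and the iteration cap are all illustrative assumptions, not the patent's actual parameterization:

```python
import numpy as np

def optimize_pose(project, points, edge_points, pose0, diff_threshold=1e-6, max_iters=50):
    """Gauss-Newton iteration over a simplified 2-D translation-only pose.

    `project(pose, p)` maps a map point into the edge-feature-map frame; with a
    pure translation its Jacobian is the identity (a simplifying assumption).
    """
    pose = np.asarray(pose0, dtype=float)
    for _ in range(max_iters):
        # Step 1a: map target map points and compute the mapping difference.
        r = np.concatenate([project(pose, p) - q for p, q in zip(points, edge_points)])
        if np.mean(np.abs(r)) < diff_threshold:
            break                      # Step 3a: pose has converged
        # Step 2a: Gauss-Newton update with identity Jacobian blocks.
        J = np.tile(np.eye(2), (len(points), 1))
        delta = np.linalg.solve(J.T @ J, -J.T @ r)
        pose = pose + delta
    return pose
```

A full implementation would use a 6-DoF pose and the camera projection Jacobian, but the control flow of modify-and-retry is the same.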
  • In an implementation, step 1a, in which the target map point and the point in the edge feature map are mapped to the same coordinate system according to the value of the estimated pose and the mapping difference between the target map point and the point in the edge feature map in that coordinate system is determined, may include the following embodiments.
  • the conversion matrix between the world coordinate system and the camera coordinate system is determined according to the value of the estimated pose, and according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, the target map The point is mapped to the image coordinate system to obtain the first mapping position of the target map point, and the projection difference between the first mapping position and the position of the point in the edge feature map in the image coordinate system is calculated.
  • the camera coordinate system is the three-dimensional coordinate system where the camera device is located
  • the image coordinate system is the coordinate system where the road image is located.
  • the estimated pose is the pose of the vehicle in the world coordinate system. According to the value of the estimated pose, the conversion matrix between the world coordinate system and the camera coordinate system can be determined.
  • Specifically, when the target map point is mapped to the image coordinate system, the target map point can first be converted to the camera coordinate system according to the following formula: p_c = T · p_w, where p_c is the position of the target map point in the camera coordinate system, p_w is the position of the target map point in the world coordinate system, and T is the conversion matrix between the world coordinate system and the camera coordinate system. The target map point is then projected into the image coordinate system according to u = π(p_c), where π(·) represents the projection model of the camera device and u represents the pixel coordinate of the target map point in the image coordinate system.
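A minimal sketch of the two mappings above (p_c = T · p_w followed by u = π(p_c)), assuming a pinhole projection model with hypothetical intrinsics fx, fy, cx, cy (the patent does not fix a particular camera model):

```python
import numpy as np

def project_to_image(p_w, T, fx, fy, cx, cy):
    """Map a world-frame map point into pixel coordinates.

    T is the 4x4 world-to-camera conversion matrix; fx, fy, cx, cy are
    pinhole intrinsics (illustrative assumptions).
    """
    p_c = (T @ np.append(p_w, 1.0))[:3]       # p_c = T * p_w (homogeneous form)
    u = np.array([fx * p_c[0] / p_c[2] + cx,  # u = pi(p_c): pinhole projection
                  fy * p_c[1] / p_c[2] + cy])
    return u, p_c
```

The returned pixel coordinate u is the first mapping position whose difference from the edge-feature-map point is evaluated.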
  • The second embodiment is to determine the conversion matrix between the world coordinate system and the camera coordinate system according to the value of the estimated pose; according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, the points in the edge feature map are mapped to the world coordinate system to obtain the second mapping position of each point in the edge feature map, and the mapping difference between the second mapping position and the position of the target map point in the world coordinate system is calculated. That is, either the location information of the target map point can be mapped to the image coordinate system, or the points in the edge feature map can be mapped to the world coordinate system.
  • a specific implementation method is provided for determining the mapping difference.
  • In an implementation, step S130, in which the target map point corresponding to the road image is determined from the preset map according to the initial positioning pose, includes steps 1b and 2b.
  • Step 1b Take the position of the initial positioning pose in the preset map as the center and the preset distance as the radius, and determine the map points contained in the resulting sphere as the candidate map points.
  • The preset distance may be a distance value determined based on experience. There may be multiple candidate map points.
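Step 1b can be sketched as a simple radius query; as the text notes, the radius value is an empirical preset, so 5 m below is purely illustrative:

```python
import numpy as np

def candidate_map_points(map_points, center, radius):
    """Keep map points inside a sphere centered on the initial positioning position."""
    map_points = np.asarray(map_points, dtype=float)
    d = np.linalg.norm(map_points - np.asarray(center, dtype=float), axis=1)
    return map_points[d <= radius]
```

In practice the octree organization of the preset map (described later) would let this query avoid scanning every point, but the selection criterion is the same.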
  • Step 2b Filter the map points within the collection range of the camera device from each to-be-selected map point to obtain the target map point corresponding to the road image.
  • the target map points obtained through the above-mentioned screening process are map points that may be observed in the road image.
  • In this way, the map points within the collection range of the camera device are filtered as the target map points; selecting effective map points from the preset map improves the accuracy of the determined vehicle positioning pose.
  • In an implementation, step 2b, in which the map points within the collection range of the camera device are selected from the candidate map points to obtain the target map points corresponding to the road image, includes the following steps 2b-1 to 2b-3.
  • Step 2b-1 Determine the conversion matrix between the world coordinate system and the camera coordinate system according to the initial positioning pose.
  • the camera coordinate system is the three-dimensional coordinate system where the camera device is located.
  • Step 2b-2 According to the conversion matrix, map each to-be-selected map point to the camera coordinate system to obtain the third mapping position of each to-be-selected map point.
  • Specifically, the candidate map points can be mapped to the camera coordinate system according to the following formula: p_b = T · p_w, p ∈ A, where p_b is the third mapping position of the candidate map point in the camera coordinate system, p_w is the position information of the candidate map point in the world coordinate system, T is the above conversion matrix, p is any candidate map point, and A is the point set composed of the candidate map points.
  • the map points to be selected can be screened by replacing the camera coordinate system with the vehicle body coordinate system according to the known conversion relationship between the camera coordinate system and the vehicle body coordinate system.
  • Step 2b-3 According to the filtering condition that the third mapping position is within the collection range of the camera device in the vertical height direction, the target map points corresponding to the road image are filtered from each candidate map point.
  • the acquisition range of the camera device in the vertical height direction can be expressed as the range of the z axis [z1, z2].
  • [z1, z2] can be [-0.1m, 4m].
  • the map point to be selected whose value of the z-axis is within the collection range may be determined as the target map point.
  • the filtered target map points are still expressed in the coordinates of the world coordinate system.
  • This step may specifically include filtering the candidate map points whose third mapping position is within the collection range of the camera device in the vertical height direction as target map points corresponding to the road image.
  • In this way, this embodiment filters the candidate map points according to the collection range of the camera device in the height direction to obtain the target map points, which can filter out the map points outside the effective height range.
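Steps 2b-1 to 2b-3 can be sketched as follows, using the example collection range [-0.1 m, 4 m] from the text; T is the world-to-camera conversion matrix, and the kept points stay expressed in world coordinates, as described above:

```python
import numpy as np

def filter_by_height(candidates_w, T, z_range=(-0.1, 4.0)):
    """Keep candidate points whose mapped z lies within the collection range.

    z_range follows the [-0.1 m, 4 m] example in the text; which axis is
    "vertical height" depends on the frame convention (an assumption here).
    """
    kept = []
    for p_w in candidates_w:
        p_b = (T @ np.append(p_w, 1.0))[:3]   # third mapping position p_b = T * p_w
        if z_range[0] <= p_b[2] <= z_range[1]:
            kept.append(p_w)                  # target map points stay in world frame
    return kept
```

As the text notes, the same screening can equivalently be done in the vehicle body coordinate system given the known camera-to-body transform.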
  • the determined map point to be selected includes the coordinate position and normal vector of the map point to be selected.
  • the location information of each map point to be selected includes coordinate location and normal vector.
  • In another implementation, step 2b, in which the map points within the collection range of the camera device are screened from the candidate map points to obtain the target map points corresponding to the road image, includes the following steps 2b-4 to 2b-6.
  • Step 2b-4 Determine the connection line between the camera device and each candidate map point according to the coordinate position of each candidate map point.
  • the coordinate position of the map point to be selected is the position in the world coordinate system.
  • the position of the camera device in the world coordinate system can be determined by the initial positioning pose.
  • Step 2b-5 Calculate the angle between each line and the normal vector of the corresponding map point to be selected.
  • For example, the lines connecting the four candidate map points A, B, C, and D to the camera device O are AO, BO, CO, and DO, respectively. The angle between the line AO and the normal vector of candidate map point A is calculated, the angle between the line BO and the normal vector of candidate map point B is calculated, the angle between the line CO and the normal vector of candidate map point C is calculated, and the angle between the line DO and the normal vector of candidate map point D is calculated.
  • Step 2b-6 According to the above-mentioned screening condition that the included angle is within the preset included angle range, the target map points corresponding to the road image are filtered from each candidate map point.
  • This step may specifically include screening the candidate map points whose included angles are within a preset included angle range as target map points corresponding to the road image.
  • the preset included angle range can be determined in advance based on experience.
  • FIG. 2 is a schematic diagram of the angle between the camera device and the normal vector of the map point provided in this embodiment.
  • the incident light is emitted from the map point, projected to the optical center of the camera device, and imaged on the imaging plane to obtain a road image.
  • the line where the incident light is located can be used as the line where the connecting line between the camera device and the map point is located.
  • the normal vector of the map point is perpendicular to the plane.
  • The angle between the incident light and the normal vector differs from map point to map point. It can be seen from FIG. 2 that the map point corresponding to angle 1 can be collected by the camera device, while the map point corresponding to angle 2 cannot. Therefore, setting the preset angle range reasonably can filter out the map points that can be collected by the camera device from a large number of candidate map points, yielding the target map points.
  • the map points to be selected are filtered according to the normal vector of the map points to be selected, and the map points that cannot be observed by the camera are filtered out, thereby improving the accuracy of the target map points.
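The normal-vector screening of steps 2b-4 to 2b-6 can be sketched as below; the 80° threshold is purely illustrative, since the patent only says the preset angle range is determined from experience:

```python
import numpy as np

def filter_by_normal_angle(candidates, camera_pos, max_angle_deg=80.0):
    """Keep candidate points whose view line agrees with the surface normal.

    Each candidate is a (position, normal) pair; max_angle_deg is an
    illustrative preset angle range.
    """
    kept = []
    for p, n in candidates:
        view = np.asarray(camera_pos, float) - np.asarray(p, float)  # line p -> O
        cos_a = np.dot(view, n) / (np.linalg.norm(view) * np.linalg.norm(n))
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        if angle <= max_angle_deg:
            kept.append((p, n))
    return kept
```

Points whose normals face away from the camera (large angle) are exactly those that cannot be observed, matching the FIG. 2 discussion.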
  • each map point in the preset map is constructed using the following steps 1c to 4c.
  • Step 1c Obtain a sample road image, and extract a sample edge feature map of the sample road image according to the preset edge strength.
  • For extracting the sample edge feature map in this step, refer to step S140.
  • Step 2c Determine the sample positioning pose corresponding to the sample road image according to the data collected by the motion detection device.
  • the sample positioning pose is the pose in the world coordinate system.
  • the sample positioning pose can be considered as the positioning pose of the vehicle at the current moment, and the position of the map point in the preset map can be constructed based on the sample positioning pose.
  • Step 3c Determine the position information of each point in the world coordinate system in the sample edge feature map based on the three-dimensional reconstruction algorithm and the aforementioned sample positioning pose.
  • In this step, determining the position information of each point in the sample edge feature map in the world coordinate system may include: determining, based on the three-dimensional reconstruction algorithm, the three-dimensional coordinates of each point in the sample edge feature map in the camera coordinate system; determining the conversion matrix between the camera coordinate system and the world coordinate system according to the sample positioning pose; and converting the above three-dimensional coordinates according to the conversion matrix to obtain the position information of each point in the sample edge feature map in the world coordinate system.
  • Step 4c Select a map point from each point of the sample edge feature map according to the preset point density, and add the position information of each map point in the world coordinate system to the preset map.
  • This step may specifically include: constructing an octree cube grid in a preset map according to an octree algorithm with a preset voxel size; for each octree cube grid, starting from the octree cube grid Select a point from the points in the sample edge feature map as the map point corresponding to the octree cube grid.
  • the map points selected from each point of the sample edge feature map according to the preset point density form point cloud data in a three-dimensional world coordinate system.
  • In this way, a structured sample edge feature map is extracted from the sample road image, and point cloud data is extracted from the sample edge feature map for map construction; a denser map can thus be obtained, increasing the amount of effective information in the preset map.
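The density-based selection of step 4c (one map point per octree cube grid) can be sketched with a voxel-grid dictionary, which captures the same one-point-per-cell behavior without a full octree; the voxel size is an illustrative preset:

```python
import numpy as np

def voxel_downsample(points_w, voxel_size=0.2):
    """Keep one point per cubic grid cell, mimicking octree-based selection.

    voxel_size is an illustrative preset (the patent only says the voxel
    size is preset); the first point falling in each cell is kept.
    """
    chosen = {}
    for p in points_w:
        key = tuple((np.asarray(p) // voxel_size).astype(int))  # grid cell index
        if key not in chosen:
            chosen[key] = p    # one map point per grid cell
    return list(chosen.values())
```

The surviving points, expressed in world coordinates, are the map points added to the preset map.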
  • FIG. 3 is a schematic diagram of a framework of a vision-based vehicle positioning method provided by an embodiment of the present invention.
  • the framework includes a front end and a relocation end.
  • the front-end motion detection equipment may include sensors commonly used in smart vehicles such as IMU or wheel speedometer.
  • the function of the front end is to estimate the initial positioning pose of the vehicle at the current moment according to the previous positioning pose of the vehicle and various sensor data, and input the initial positioning pose into the map manager and the MVS positioning optimizer.
  • the initial positioning pose can be understood as the predicted global pose.
  • When the front end estimates the initial positioning pose of the vehicle, it can also make use of road images.
  • the relocation terminal includes map manager, edge detector and MVS location optimizer.
  • the map manager loads the preset map of the environment, and the map is managed by the octree.
  • the map manager can query the map points that may be observed by the camera from the preset map according to the initial positioning pose.
  • the map manager can also filter the queried map points according to the collection range of the camera device, and remove the map points beyond the range.
  • the map manager inputs the filtered target map points into the MVS positioning optimizer.
  • the edge detector takes the road image as input, outputs the edge response of the road image, that is, the edge feature map, and inputs the value of the edge feature map to the MVS positioning optimizer.
  • After receiving the input initial positioning pose, target map points, and edge feature map, the MVS positioning optimizer adopts an iterative optimization solution method to determine the global pose of the vehicle based on the mapping difference between the target map points and the points in the edge feature map. The global pose of the vehicle is the vehicle positioning pose mentioned above.
  • FIG. 4 is a schematic structural diagram of a vision-based vehicle positioning device provided by an embodiment of the present invention. This device embodiment corresponds to the embodiment shown in FIG. 1, and the device is applied to electronic equipment.
  • the device includes:
  • the road image acquisition module 410 is configured to acquire the road image collected by the camera device
  • the initial pose determination module 420 is configured to determine the initial positioning pose corresponding to the road image according to the data collected by the motion detection device; wherein the initial positioning pose is the pose in the world coordinate system where the preset map is located;
  • The map point determination module 430 is configured to determine the target map point corresponding to the road image from the preset map according to the initial positioning pose; wherein each map point in the preset map is obtained by performing three-dimensional reconstruction on, and selecting from, the points in the edge feature map of a sample road image in advance;
  • the edge feature extraction module 440 is configured to extract the edge feature map of the road image according to the preset edge strength
  • The vehicle pose determination module 450 is configured to determine the mapping difference between the target map point and the point in the edge feature map according to the initial positioning pose, and determine the vehicle positioning pose according to the mapping difference.
  • The vehicle pose determination module 450 is specifically configured as follows:
  • When the mapping difference is greater than the preset difference threshold, modify the value of the estimated pose according to the mapping difference, and return to perform the operation of mapping the target map point and the point in the edge feature map to the same coordinate system according to the value of the estimated pose;
  • the vehicle positioning pose is determined according to the current value of the estimated pose.
  • When the vehicle pose determination module 450 maps the target map point and the point in the edge feature map to the same coordinate system according to the value of the estimated pose, and determines the mapping difference between the target map point mapped to the same coordinate system and the point in the edge feature map, the operations include:
  • According to the value of the estimated pose, determine the conversion matrix between the world coordinate system and the camera coordinate system; map the target map point to the image coordinate system according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system to obtain the first mapping position of the target map point, and calculate the mapping difference between the first mapping position and the position of the point in the edge feature map in the image coordinate system;
  • the camera coordinate system is the three-dimensional coordinate system where the camera device is located,
  • the image coordinate system is the coordinate system where the road image is located;
  • Or, according to the value of the estimated pose, determine the conversion matrix between the world coordinate system and the camera coordinate system; according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, map the points in the edge feature map to the world coordinate system to obtain the second mapping position of each point in the edge feature map, and calculate the mapping difference between the second mapping position and the position of the target map point in the world coordinate system.
  • the map point determination module 430 is specifically configured as:
  • the map points within the collection range of the camera device are filtered from each candidate map point to obtain the target map point corresponding to the road image.
  • When the map point determination module 430 selects the map points within the collection range of the camera device from the candidate map points to obtain the target map points corresponding to the road image, the operations include:
  • the initial positioning pose determine the conversion matrix between the world coordinate system and the camera coordinate system; where the camera coordinate system is the three-dimensional coordinate system where the camera device is located;
  • mapping each candidate map point to the camera coordinate system to obtain the third mapping position of each candidate map point
  • a target map point corresponding to the road image is obtained by screening from each candidate map point.
  • the determined map point to be selected includes the coordinate position and normal vector of the map point to be selected; the map point determination module 430 filters each map point to be selected When the map points within the collection range of the camera device are obtained, the target map points corresponding to the road image include:
  • According to the screening condition that the included angle is within the preset included angle range, the target map points corresponding to the road image are filtered from the candidate map points.
  • the device further includes: a map point construction module (not shown in the figure), configured to use the following operations to construct each map point in the preset map :
  • the sample positioning pose corresponding to the sample road image; where the sample positioning pose is the pose in the world coordinate system;
  • the foregoing device embodiment corresponds to the method embodiment, and has the same technical effect as the method embodiment.
  • the device embodiment is obtained based on the method embodiment, and the specific description can be found in the method embodiment part, which is not repeated here.
  • Fig. 5 is a schematic structural diagram of a vehicle-mounted terminal provided by an embodiment of the present invention.
  • the vehicle-mounted terminal includes: a processor 510, a camera device 520, and a motion detection device 530; the processor 510 includes:
  • Road image acquisition module (not shown in the figure), used to acquire road images collected by the camera device 520;
  • the initial pose determination module (not shown in the figure) is used to determine the initial positioning pose corresponding to the road image according to the data collected by the motion detection device 530; where the initial positioning pose is the world coordinate system where the preset map is located Pose in
  • The map point determination module (not shown in the figure) is used to determine the target map point corresponding to the road image from the preset map according to the initial positioning pose; wherein each map point in the preset map is obtained by performing three-dimensional reconstruction on, and selecting from, the points in the edge feature map of a sample road image in advance;
  • the edge feature extraction module (not shown in the figure) is used to extract the edge feature map of the road image according to the preset edge strength
  • The vehicle pose determination module (not shown in the figure) is used to determine the mapping difference between the target map point and the point in the edge feature map according to the initial positioning pose, and determine the vehicle positioning pose according to the mapping difference.
  • The vehicle pose determination module is specifically used for:
  • When the mapping difference is greater than the preset difference threshold, modify the value of the estimated pose according to the mapping difference, and return to perform the operation of mapping the target map point and the point in the edge feature map to the same coordinate system according to the value of the estimated pose;
  • the vehicle positioning pose is determined according to the current value of the estimated pose.
  • When the vehicle pose determination module maps the target map point and the point in the edge feature map to the same coordinate system according to the value of the estimated pose, and determines the mapping difference between the target map point mapped to the same coordinate system and the point in the edge feature map, the operations include:
  • According to the value of the estimated pose, determine the conversion matrix between the world coordinate system and the camera coordinate system; map the target map point to the image coordinate system according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system to obtain the first mapping position of the target map point, and calculate the mapping difference between the first mapping position and the position of the point in the edge feature map in the image coordinate system;
  • the camera coordinate system is the three-dimensional coordinate system where the camera device is located,
  • the image coordinate system is the coordinate system where the road image is located;
  • Or, according to the value of the estimated pose, determine the conversion matrix between the world coordinate system and the camera coordinate system; according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, map the points in the edge feature map to the world coordinate system to obtain the second mapping position of each point in the edge feature map, and calculate the mapping difference between the second mapping position and the position of the target map point in the world coordinate system.
  • the map point determination module is specifically used for:
  • the map points within the collection range of the camera device 520 are filtered from each candidate map point to obtain the target map point corresponding to the road image.
  • When the map point determination module selects the map points within the collection range of the camera device 520 from the candidate map points to obtain the target map points corresponding to the road image, the operations include:
  • the initial positioning pose determine the conversion matrix between the world coordinate system and the camera coordinate system; where the camera coordinate system is the three-dimensional coordinate system where the camera device 520 is located;
  • mapping each candidate map point to the camera coordinate system to obtain the third mapping position of each candidate map point
  • the target map point corresponding to the road image is filtered from each candidate map point.
  • the determined map point to be selected includes the coordinate position and normal vector of the map point to be selected; the map point determination module filters the map points to be When the map points within the collection range of the camera device 520 obtain the target map point corresponding to the road image, it includes:
  • According to the screening condition that the included angle is within the preset included angle range, the target map points corresponding to the road image are filtered from the candidate map points.
  • the processor 510 further includes: a map point construction module, configured to construct each map point in the preset map by using the following operations:
  • the sample positioning pose corresponding to the sample road image; where the sample positioning pose is the pose in the world coordinate system;
  • This embodiment of the terminal and the embodiment of the method shown in FIG. 1 are embodiments based on the same inventive concept, and relevant points may be referred to each other.
  • the foregoing terminal embodiment corresponds to the method embodiment, and has the same technical effect as the method embodiment. For specific description, refer to the method embodiment.
  • modules in the device in the embodiment may be distributed in the device in the embodiment according to the description of the embodiment, or may be located in one or more devices different from this embodiment with corresponding changes.
  • the modules of the above-mentioned embodiments can be combined into one module or further divided into multiple sub-modules.

Abstract

Disclosed are a vision-based vehicle positioning method and device, and a vehicle-mounted terminal. The method comprises: acquiring a road image captured by a camera device; determining, according to data collected by a motion detection device, an initial positioning pose corresponding to the road image; determining, according to the initial positioning pose, a target map point corresponding to the road image from a preset map; extracting an edge feature map of the road image according to a preset edge strength; and determining, according to the initial positioning pose, a mapping difference between the target map point and points in the edge feature map, and determining a vehicle positioning pose according to the mapping difference. The initial positioning pose is a pose in the world coordinate system of the preset map, and each map point in the preset map is obtained in advance by three-dimensionally reconstructing and selecting points in the edge feature map of a sample road image. The solution provided by the embodiments of the invention improves the effectiveness of vision-based vehicle positioning.

Description

Vision-based vehicle positioning method and device, and vehicle-mounted terminal

Technical Field

The present invention relates to the technical field of intelligent driving, and in particular to a vision-based vehicle positioning method and device, and a vehicle-mounted terminal.

Background

In the field of intelligent driving, positioning the vehicle is an important task. Normally, while the vehicle is moving, its pose can be determined from a satellite positioning system. However, when the vehicle enters a scene where the satellite signal is weak or absent, visual positioning can be used to determine the vehicle's pose accurately.

Vision-based positioning usually works by matching semantic information extracted from road images captured by a camera device against semantic information stored in a high-precision map. The semantic information in the high-precision map is modeled from landmarks commonly found on the road, such as lane lines, ground marking lines, traffic signs, and street light poles.

When a scene contains enough valid landmarks, this vision-based approach can determine the vehicle's pose effectively. But when landmarks are scarce or absent, the high-precision map cannot provide enough information for visual positioning; likewise, when landmarks are occluded or degraded and can no longer be fully matched against the map, visual positioning may fail. All of this lowers the effectiveness of visual positioning.

Summary of the Invention

The present invention provides a vision-based vehicle positioning method and device, and a vehicle-mounted terminal, to improve the effectiveness of vision-based vehicle positioning. The specific technical solution is as follows.
In a first aspect, an embodiment of the present invention discloses a vision-based vehicle positioning method, including:

acquiring a road image captured by a camera device;

determining, according to data collected by a motion detection device, an initial positioning pose corresponding to the road image, where the initial positioning pose is a pose in the world coordinate system of a preset map;

determining, according to the initial positioning pose, a target map point corresponding to the road image from the preset map, where each map point in the preset map is obtained in advance by three-dimensionally reconstructing and selecting points in the edge feature map of a sample road image;

extracting an edge feature map of the road image according to a preset edge strength;
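The claims leave the edge operator open; they only require a configurable edge-strength threshold. As one hedged illustration (a gradient-magnitude threshold, not the patent's specified operator), the extraction step might look like:

```python
import numpy as np

def extract_edge_feature_map(image, edge_strength):
    """Return a binary edge feature map: pixels whose gradient
    magnitude exceeds the preset edge strength. The choice of a
    central-difference gradient is an illustrative assumption."""
    gy, gx = np.gradient(image.astype(float))  # per-axis gradients
    magnitude = np.hypot(gx, gy)               # edge strength per pixel
    return magnitude >= edge_strength

# a synthetic "road image" with one sharp vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edges = extract_edge_feature_map(img, edge_strength=50.0)
```

With a lower `edge_strength`, fainter structure (curbs, building outlines) survives into the edge feature map; the threshold trades richness against noise.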
determining, according to the initial positioning pose, a mapping difference between the target map point and points in the edge feature map, and determining a vehicle positioning pose according to the mapping difference.
Optionally, the step of determining the mapping difference between the target map point and the points in the edge feature map according to the initial positioning pose, and determining the vehicle positioning pose according to the mapping difference, includes:

taking the initial positioning pose as the initial value of an estimated pose; according to the current value of the estimated pose, mapping the target map point and the points in the edge feature map into the same coordinate system, and determining the mapping difference between the target map point and the points of the edge feature map in that coordinate system;

when the mapping difference is greater than a preset difference threshold, modifying the value of the estimated pose according to the mapping difference, and returning to the step of mapping the target map point and the points in the edge feature map into the same coordinate system according to the value of the estimated pose;

when the mapping difference is less than the preset difference threshold, determining the vehicle positioning pose according to the current value of the estimated pose.
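The iterate-until-converged loop above can be sketched minimally; this toy reduces the pose to a 2D translation and uses the mean residual as both the mapping difference and the update (the patent optimizes a full vehicle pose, so `refine_pose` and its update rule are illustrative assumptions):

```python
import numpy as np

def refine_pose(map_pts, edge_pts, init_pose, threshold=1e-6, max_iter=100):
    """Iteratively refine a translation-only 'pose' so that the map
    points, mapped by the pose, align with the edge-feature points."""
    pose = np.asarray(init_pose, dtype=float)
    for _ in range(max_iter):
        mapped = map_pts + pose            # map into the same frame
        residual = edge_pts - mapped       # per-point mapping difference
        difference = np.linalg.norm(residual.mean(axis=0))
        if difference < threshold:         # converged: accept current pose
            break
        pose += residual.mean(axis=0)      # adjust the estimated pose
    return pose

map_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
edge_pts = map_pts + np.array([2.0, -1.0])  # ground-truth offset
pose = refine_pose(map_pts, edge_pts, init_pose=[0.0, 0.0])
```

Starting from the initial positioning pose keeps the iteration in the basin of the true pose, which is why the motion-detection estimate matters even though it is only approximate.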
Optionally, the step of mapping the target map point and the points in the edge feature map into the same coordinate system according to the value of the estimated pose, and determining the mapping difference between them, includes:

determining, according to the value of the estimated pose, a conversion matrix between the world coordinate system and a camera coordinate system; mapping the target map point into an image coordinate system according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, to obtain a first mapping position of the target map point; and calculating a projection difference between the first mapping position and the positions of the points of the edge feature map in the image coordinate system, where the camera coordinate system is the three-dimensional coordinate system of the camera device and the image coordinate system is the coordinate system of the road image;

or,

determining, according to the value of the estimated pose, the conversion matrix between the world coordinate system and the camera coordinate system; mapping the points in the edge feature map into the world coordinate system according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, to obtain second mapping positions of the points in the edge feature map; and calculating a projection difference between the second mapping positions and the position of the target map point in the world coordinate system.
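The first branch (world → camera → image) can be sketched with a pinhole model; the intrinsic matrix `K`, the identity pose, and all numeric values below are made-up assumptions, not taken from the patent:

```python
import numpy as np

def project_to_image(world_pts, R, t, K):
    """Map 3D world points into the image coordinate system:
    world -> camera with the conversion matrix [R|t], then
    camera -> image pixels with the pinhole projection K (the
    'projection relationship' between camera and image frames)."""
    cam = world_pts @ R.T + t          # world -> camera coordinates
    uv = cam @ K.T                     # apply camera intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide -> pixels

K = np.array([[500.0,   0.0, 320.0],   # assumed intrinsics
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # identity pose for the sketch
t = np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0],       # straight ahead, 2 m away
                [1.0, 0.0, 2.0]])
first_mapping = project_to_image(pts, R, t, K)
```

The projection difference is then the pixel distance between each `first_mapping` row and its nearest edge-feature point; the second branch inverts the same transform to compare in world coordinates instead.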
Optionally, the step of determining the target map point corresponding to the road image from the preset map according to the initial positioning pose includes:

taking map points contained in a sphere centered at the position of the initial positioning pose in the preset map, with a preset distance as its radius, as candidate map points;

filtering, from the candidate map points, map points within the collection range of the camera device, to obtain the target map point corresponding to the road image.
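The sphere-based candidate selection can be sketched as follows; `candidate_map_points` and the radius value are illustrative names and numbers, not from the patent:

```python
import numpy as np

def candidate_map_points(map_pts, center, radius):
    """Select candidate map points: all map points inside the sphere
    centered at the initial positioning position with the preset
    distance as its radius."""
    dist = np.linalg.norm(map_pts - center, axis=1)
    return map_pts[dist <= radius]

map_pts = np.array([[0.0, 0.0, 0.0],
                    [3.0, 0.0, 0.0],
                    [10.0, 0.0, 0.0]])   # outside a 5 m radius
center = np.zeros(3)
candidates = candidate_map_points(map_pts, center, radius=5.0)
```

In a real map this query would run against a spatial index (e.g. a k-d tree) rather than a full linear scan.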
Optionally, the step of filtering map points within the collection range of the camera device from the candidate map points to obtain the target map point corresponding to the road image includes:

determining, according to the initial positioning pose, the conversion matrix between the world coordinate system and the camera coordinate system, where the camera coordinate system is the three-dimensional coordinate system of the camera device;

mapping each candidate map point into the camera coordinate system according to the conversion matrix, to obtain a third mapping position of each candidate map point;

filtering the candidate map points according to the condition that the third mapping position lies within the collection range of the camera device in the vertical height direction, to obtain the target map point corresponding to the road image.
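A hedged sketch of the vertical-range filter: candidate points are mapped into the camera frame with the conversion matrix, and those outside the camera's vertical collection range are dropped. Treating the camera-frame y axis as the "vertical height direction" (and the range bounds) are assumptions:

```python
import numpy as np

def filter_by_vertical_range(candidates, R, t, y_min, y_max):
    """Map candidate map points into the camera coordinate system
    (their third mapping positions) and keep those whose vertical
    coordinate lies inside the camera's vertical collection range."""
    cam = candidates @ R.T + t                    # world -> camera frame
    keep = (cam[:, 1] >= y_min) & (cam[:, 1] <= y_max)
    return candidates[keep]

pts = np.array([[0.0, 0.5, 5.0],    # within the vertical range
                [0.0, 9.0, 5.0]])   # far above the collection range
kept = filter_by_vertical_range(pts, np.eye(3), np.zeros(3),
                                y_min=-2.0, y_max=2.0)
```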
Optionally, each determined candidate map point includes the coordinate position and normal vector of the candidate map point; and the step of filtering map points within the collection range of the camera device from the candidate map points to obtain the target map point corresponding to the road image includes:

determining, according to the coordinate position of each candidate map point, a line connecting the camera device and that candidate map point;

calculating the included angle between each connecting line and the normal vector of the corresponding candidate map point;

filtering the candidate map points according to the condition that the included angle lies within a preset angle range, to obtain the target map point corresponding to the road image.
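The normal-vector screening can be sketched as below. The direction convention for the connecting line (point toward camera) and the use of a single upper bound for the "preset angle range" are simplifying assumptions:

```python
import numpy as np

def filter_by_normal_angle(points, normals, camera_pos, max_angle_deg):
    """Keep candidate map points whose surface normal roughly faces
    the camera: the angle between the point-to-camera line and the
    point's normal must lie within the preset angle range."""
    lines = camera_pos - points                       # point -> camera
    cos = np.sum(lines * normals, axis=1) / (
        np.linalg.norm(lines, axis=1) * np.linalg.norm(normals, axis=1))
    angles = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return points[angles <= max_angle_deg]

points = np.array([[0.0, 0.0, 5.0], [2.0, 0.0, 5.0]])
normals = np.array([[0.0, 0.0, -1.0],   # faces the camera at origin
                    [0.0, 0.0,  1.0]])  # faces away from the camera
targets = filter_by_normal_angle(points, normals, np.zeros(3),
                                 max_angle_deg=60.0)
```

Points whose normals face away from the camera belong to back-facing surfaces the camera cannot actually observe, which is why this test removes them from the candidates.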
Optionally, each map point in the preset map is constructed in the following way:

acquiring a sample road image, and extracting a sample edge feature map of the sample road image according to the preset edge strength;

determining, according to data collected by the motion detection device, a sample positioning pose corresponding to the sample road image, where the sample positioning pose is a pose in the world coordinate system;

determining, based on a three-dimensional reconstruction algorithm and the sample positioning pose, the position information of each point of the sample edge feature map in the world coordinate system;

selecting map points from the points of the sample edge feature map according to a preset point density, and adding the position information of each map point in the world coordinate system to the preset map.
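The patent does not fix how the "preset point density" selection is realized; one common scheme is voxel-grid downsampling of the reconstructed points, sketched here as an assumption:

```python
import numpy as np

def select_by_density(points, cell_size):
    """Thin reconstructed edge points to a preset point density by
    keeping one representative per cubic grid cell of side cell_size
    (voxel-grid downsampling; the selection scheme is illustrative)."""
    cells = np.floor(points / cell_size).astype(int)  # cell index per point
    _, first_idx = np.unique(cells, axis=0, return_index=True)
    return points[np.sort(first_idx)]                 # one point per cell

points = np.array([[0.1, 0.1, 0.1],
                   [0.2, 0.2, 0.2],   # same 1 m cell as the first point
                   [1.5, 0.0, 0.0]])  # a different cell
kept = select_by_density(points, cell_size=1.0)
```

A smaller `cell_size` keeps a denser map with more matching candidates at the cost of map size; the preset density balances the two.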
In a second aspect, an embodiment of the present invention discloses a vision-based vehicle positioning device, including:

a road image acquisition module, configured to acquire a road image captured by a camera device;

an initial pose determination module, configured to determine, according to data collected by a motion detection device, an initial positioning pose corresponding to the road image, where the initial positioning pose is a pose in the world coordinate system of a preset map;

a map point determination module, configured to determine, according to the initial positioning pose, a target map point corresponding to the road image from the preset map, where each map point in the preset map is obtained in advance by three-dimensionally reconstructing and selecting points in the edge feature map of a sample road image;

an edge feature extraction module, configured to extract an edge feature map of the road image according to a preset edge strength;

a vehicle pose determination module, configured to determine, according to the initial positioning pose, a mapping difference between the target map point and points in the edge feature map, and to determine a vehicle positioning pose according to the mapping difference.
Optionally, the vehicle pose determination module is specifically configured to:

take the initial positioning pose as the initial value of an estimated pose; according to the value of the estimated pose, map the target map point and the points in the edge feature map into the same coordinate system, and determine the mapping difference between the target map point and the points of the edge feature map in that coordinate system;

when the mapping difference is greater than a preset difference threshold, modify the value of the estimated pose according to the mapping difference, and return to the operation of mapping the target map point and the points in the edge feature map into the same coordinate system according to the value of the estimated pose;

when the mapping difference is less than the preset difference threshold, determine the vehicle positioning pose according to the current value of the estimated pose.
Optionally, when the vehicle pose determination module maps the target map point and the points in the edge feature map into the same coordinate system according to the value of the estimated pose and determines the mapping difference between them, the operations include:

determining, according to the value of the estimated pose, a conversion matrix between the world coordinate system and the camera coordinate system; mapping the target map point into the image coordinate system according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, to obtain a first mapping position of the target map point; and calculating a projection difference between the first mapping position and the positions of the points of the edge feature map in the image coordinate system, where the camera coordinate system is the three-dimensional coordinate system of the camera device and the image coordinate system is the coordinate system of the road image;

or,

determining, according to the value of the estimated pose, the conversion matrix between the world coordinate system and the camera coordinate system; mapping the points in the edge feature map into the world coordinate system according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, to obtain second mapping positions of the points in the edge feature map; and calculating a projection difference between the second mapping positions and the position of the target map point in the world coordinate system.
Optionally, the map point determination module is specifically configured to:

take map points contained in a sphere centered at the position of the initial positioning pose in the preset map, with a preset distance as its radius, as candidate map points;

filter, from the candidate map points, map points within the collection range of the camera device, to obtain the target map point corresponding to the road image.
Optionally, when the map point determination module filters map points within the collection range of the camera device from the candidate map points to obtain the target map point corresponding to the road image, the operations include:

determining, according to the initial positioning pose, the conversion matrix between the world coordinate system and the camera coordinate system, where the camera coordinate system is the three-dimensional coordinate system of the camera device;

mapping each candidate map point into the camera coordinate system according to the conversion matrix, to obtain a third mapping position of each candidate map point;

filtering the candidate map points according to the condition that the third mapping position lies within the collection range of the camera device in the vertical height direction, to obtain the target map point corresponding to the road image.
Optionally, each determined candidate map point includes the coordinate position and normal vector of the candidate map point; and when the map point determination module filters map points within the collection range of the camera device from the candidate map points to obtain the target map point corresponding to the road image, the operations include:

determining, according to the coordinate position of each candidate map point, a line connecting the camera device and that candidate map point;

calculating the included angle between each connecting line and the normal vector of the corresponding candidate map point;

filtering the candidate map points according to the condition that the included angle lies within a preset angle range, to obtain the target map point corresponding to the road image.
Optionally, the device further includes a map point construction module, configured to construct each map point in the preset map through the following operations:

acquiring a sample road image, and extracting a sample edge feature map of the sample road image according to the preset edge strength;

determining, according to data collected by the motion detection device, a sample positioning pose corresponding to the sample road image, where the sample positioning pose is a pose in the world coordinate system;

determining, based on a three-dimensional reconstruction algorithm and the sample positioning pose, the position information of each point of the sample edge feature map in the world coordinate system;

selecting map points from the points of the sample edge feature map according to a preset point density, and adding the position information of each map point in the world coordinate system to the preset map.
In a third aspect, an embodiment of the present invention discloses a vehicle-mounted terminal, including a processor, a camera device, and a motion detection device; the processor includes:

a road image acquisition module, configured to acquire a road image captured by the camera device;

an initial pose determination module, configured to determine, according to data collected by the motion detection device, an initial positioning pose corresponding to the road image, where the initial positioning pose is a pose in the world coordinate system of a preset map;

a map point determination module, configured to determine, according to the initial positioning pose, a target map point corresponding to the road image from the preset map, where each map point in the preset map is obtained in advance by three-dimensionally reconstructing and selecting points in the edge feature map of a sample road image;

an edge feature extraction module, configured to extract an edge feature map of the road image according to a preset edge strength;

a vehicle pose determination module, configured to determine, according to the initial positioning pose, a mapping difference between the target map point and points in the edge feature map, and to determine a vehicle positioning pose according to the mapping difference.
Optionally, the vehicle pose determination module is specifically configured to:

take the initial positioning pose as the initial value of an estimated pose; according to the value of the estimated pose, map the target map point and the points in the edge feature map into the same coordinate system, and determine the mapping difference between the target map point and the points of the edge feature map in that coordinate system;

when the mapping difference is greater than a preset difference threshold, modify the value of the estimated pose according to the mapping difference, and return to the operation of mapping the target map point and the points in the edge feature map into the same coordinate system according to the value of the estimated pose;

when the mapping difference is less than the preset difference threshold, determine the vehicle positioning pose according to the current value of the estimated pose.
Optionally, when the vehicle pose determination module maps the target map point and the points in the edge feature map into the same coordinate system according to the value of the estimated pose and determines the mapping difference between them, the operations include:

determining, according to the value of the estimated pose, a conversion matrix between the world coordinate system and the camera coordinate system; mapping the target map point into the image coordinate system according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, to obtain a first mapping position of the target map point; and calculating a projection difference between the first mapping position and the positions of the points of the edge feature map in the image coordinate system, where the camera coordinate system is the three-dimensional coordinate system of the camera device and the image coordinate system is the coordinate system of the road image;

or,

determining, according to the value of the estimated pose, the conversion matrix between the world coordinate system and the camera coordinate system; mapping the points in the edge feature map into the world coordinate system according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, to obtain second mapping positions of the points in the edge feature map; and calculating a projection difference between the second mapping positions and the position of the target map point in the world coordinate system.
Optionally, the map point determination module is specifically configured to:

take map points contained in a sphere centered at the position of the initial positioning pose in the preset map, with a preset distance as its radius, as candidate map points;

filter, from the candidate map points, map points within the collection range of the camera device, to obtain the target map point corresponding to the road image.
Optionally, when the map point determination module filters map points within the collection range of the camera device from the candidate map points to obtain the target map point corresponding to the road image, the operations include:

determining, according to the initial positioning pose, the conversion matrix between the world coordinate system and the camera coordinate system, where the camera coordinate system is the three-dimensional coordinate system of the camera device;

mapping each candidate map point into the camera coordinate system according to the conversion matrix, to obtain a third mapping position of each candidate map point;

filtering the candidate map points according to the condition that the third mapping position lies within the collection range of the camera device in the vertical height direction, to obtain the target map point corresponding to the road image.
Optionally, each determined candidate map point includes the coordinate position and normal vector of the candidate map point; and when the map point determination module filters map points within the collection range of the camera device from the candidate map points to obtain the target map point corresponding to the road image, the operations include:

determining, according to the coordinate position of each candidate map point, a line connecting the camera device and that candidate map point;

calculating the included angle between each connecting line and the normal vector of the corresponding candidate map point;

filtering the candidate map points according to the condition that the included angle lies within a preset angle range, to obtain the target map point corresponding to the road image.
Optionally, the processor further includes a map point construction module, configured to construct each map point in the preset map through the following operations:

acquiring a sample road image, and extracting a sample edge feature map of the sample road image according to the preset edge strength;

determining, according to data collected by the motion detection device, a sample positioning pose corresponding to the sample road image, where the sample positioning pose is a pose in the world coordinate system;

determining, based on a three-dimensional reconstruction algorithm and the sample positioning pose, the position information of each point of the sample edge feature map in the world coordinate system;

selecting map points from the points of the sample edge feature map according to a preset point density, and adding the position information of each map point in the world coordinate system to the preset map.
It can be seen from the above that the vision-based vehicle positioning method and device and the vehicle-mounted terminal provided by the embodiments of the present invention can determine an initial positioning pose corresponding to a road image according to data collected by a motion detection device, determine a target map point corresponding to the road image from a preset map according to the initial positioning pose, and determine a vehicle positioning pose according to the initial positioning pose and the mapping difference between the target map point and points in the edge feature map of the road image, where the map points in the preset map are obtained in advance by three-dimensionally reconstructing and selecting points in the edge feature maps of sample road images. Because the edge feature map captures structured features of the image, which are richer and more noise-resistant, the vehicle positioning pose can still be determined by matching the target map points against points of the edge feature map even when landmarks in the scene are sparse or occluded. The embodiments of the present invention can therefore improve the effectiveness of vision-based vehicle positioning. Of course, implementing any product or method of the present invention does not necessarily require achieving all of the above advantages at the same time.
The innovative points of the embodiments of the present invention include:
1. The edge feature map captures the structured features of the road image, which are rich and noise-resistant; it is not easily affected by the number of landmarks such as lane lines, traffic signs and street light poles in the scene, nor by the illumination changes across morning, evening and weather conditions, and is therefore more robust. Performing vehicle positioning by matching the points in the edge feature map against the map points in the preset map can thus improve the effectiveness of vehicle positioning.
2. When determining the vehicle positioning pose, the value of the estimated pose is adjusted continuously to obtain the mapping difference between the target map points and the points in the edge feature map, so that the estimated pose gradually approaches the true value; the vehicle positioning pose is thereby solved iteratively, making the solved pose more accurate.
3. Among the map points inside a sphere centered on the position of the initial positioning pose in the preset map, those within the collection range of the camera device are filtered out as the target map points, so that effective map points can be selected from the preset map and the accuracy of the determined vehicle positioning pose is improved.
4. When constructing the preset map, a structured sample edge feature map is extracted from each sample road image, and point cloud data is extracted from the sample edge feature map for map construction, which yields denser map information and increases the amount of effective information in the preset map.
Brief Description of the Drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
FIG. 1 is a schematic flowchart of a vision-based vehicle positioning method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of the angle between the camera device and the normal vector of a map point provided by an embodiment of the present invention;
FIG. 3 is a schematic architecture diagram of a vision-based vehicle positioning method provided by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a vision-based vehicle positioning device provided by an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a vehicle-mounted terminal provided by an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
It should be noted that the terms "including" and "having" in the embodiments of the present invention and the drawings, as well as any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device that comprises a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or optionally also includes other steps or units inherent to the process, method, product or device.
The embodiments of the present invention disclose a vision-based vehicle positioning method, a processor, and a vehicle-mounted terminal, which can improve the effectiveness of vision-based vehicle positioning. The embodiments of the present invention are described in detail below.
FIG. 1 is a schematic flowchart of a vision-based vehicle positioning method provided by an embodiment of the present invention. The method is applied to an electronic device. The electronic device may be an ordinary computer, a server, an intelligent terminal device, or the like, and may also be a vehicle-mounted terminal such as an in-vehicle computer or an in-vehicle industrial personal computer (IPC). In this embodiment, the vehicle-mounted terminal may be installed in a vehicle, where the vehicle refers to a smart vehicle. A variety of sensors are installed in the vehicle, including a camera device and a motion detection device. One or more camera devices may be provided in the vehicle. The motion detection device may include sensors such as an inertial measurement unit (IMU) and/or a wheel speedometer. The method specifically includes the following steps.
S110: Obtain a road image collected by the camera device.
The camera device may collect road images at a preset frequency. A road image may include image data of road markers or of any other objects within the image collection range of the camera device.
In this embodiment, when there are multiple camera devices, the above road image may be collected by one camera device installed at the front of the vehicle, or obtained by stitching images collected by multiple cameras installed at the front of the vehicle. The place where the vehicle is located may be outdoors or a parking lot.
The road image may be an image of the surroundings of the vehicle collected by the camera device while the vehicle is driving on various roads. A road may be any place a vehicle can travel, such as an urban road, a rural road, a mountain road, or a parking-lot road; images collected while driving into a parking space may also be included in the road images.
S120: Determine the initial positioning pose corresponding to the road image according to the data collected by the motion detection device.
The initial positioning pose is a pose in the world coordinate system in which the preset map is located. The preset map may be a high-precision map installed in the vehicle. The time at which the motion detection device collects its data and the time at which the camera device collects the road image may be correlated; for example, the two collection times may be the same moment, or moments separated by a very short interval.
Determining the initial positioning pose corresponding to the road image according to the data collected by the motion detection device may specifically include: obtaining the previous positioning pose, and determining the initial positioning pose corresponding to the road image according to the previous positioning pose and the data collected by the motion detection device. The previous positioning pose may be the positioning pose of the vehicle determined at the previous moment.
In another implementation, this step may further include: determining the initial positioning pose of the vehicle corresponding to the road image according to the data collected by the motion detection device and the result of matching the road features in the road image against the road features in the preset map. In this implementation, matching the road features in the road image against the road features in the preset map corresponds to another vision-based positioning method.
In another implementation, the motion detection device may also include a Global Positioning System (GPS). When the motion detection device includes GPS, the error accumulated during positioning based on the IMU and/or the wheel speedometer can be eliminated as much as possible, improving the accuracy of the positioning pose.
The initial positioning pose determined in this step is the initial positioning pose of the vehicle at the current moment when the above road image is collected, and it is used to determine a more accurate vehicle positioning pose at the current moment.
S130: Determine the target map points corresponding to the road image from the preset map according to the initial positioning pose.
Determining the target map points corresponding to the road image from the preset map can be understood as determining, from the preset map, the position information of the target map points corresponding to the road image. A target map point is a map point that may be observed in the road image. In this step, according to the position of the initial positioning pose in the world coordinate system, the map points within a preset range around that position in the preset map can be taken as the target map points.
Each map point in the preset map is obtained in advance by three-dimensional reconstruction and selection of the points in the edge feature maps of sample road images. The correspondence between each map point and its position information is stored in the preset map, where the position information is the position information of the map point in the world coordinate system.
The position information of each map point in the world coordinate system includes the coordinate position of the map point in the world coordinate system and the normal vector information of the map point. The normal vector information of a map point indicates the normal vector of the plane on which the map point lies. In the world coordinate system, the coordinate position of a map point can be represented by three coordinates (a, b, c), and the normal vector information of the point can be represented by three parameters (A, B, C). The position information of each map point thus contains six dimensions of information; this representation is equivalent to representing each map point by a plane A(x-a)+B(y-b)+C(z-c)=0.
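The six-dimensional point-plus-normal representation above can be sketched as a small data structure; the class name and the signed-distance helper are illustrative, not prescribed by the patent.

```python
import numpy as np

class MapPoint:
    """A map point as described above: a 3-D position (a, b, c) in the
    world coordinate system plus the normal vector (A, B, C) of the
    plane the point lies on -- six values in total."""
    def __init__(self, position, normal):
        self.position = np.asarray(position, dtype=float)
        n = np.asarray(normal, dtype=float)
        self.normal = n / np.linalg.norm(n)   # store a unit normal

    def plane_residual(self, x):
        """Signed distance of a point x to the plane
        A(x-a) + B(y-b) + C(z-c) = 0 represented by this map point."""
        return float(self.normal @ (np.asarray(x, dtype=float) - self.position))

mp = MapPoint(position=(1.0, 2.0, 3.0), normal=(0.0, 0.0, 1.0))
print(mp.plane_residual((1.0, 2.0, 3.0)))  # 0.0 -- on the plane
print(mp.plane_residual((5.0, 5.0, 4.0)))  # 1.0 -- one metre above it
```

Storing the normal alongside the position lets later visibility checks (such as the camera-to-normal angle of FIG. 2) be evaluated per point.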
S140: Extract the edge feature map of the road image according to a preset edge strength.
An edge in an image refers to a set of pixels at which the gray level of the surrounding pixels changes in a step; the gray values of the pixels on the two sides of an edge point differ significantly. The edge strength can be understood as the magnitude of the gradient at an edge point. The edge feature map can be understood as data containing edge features with the same size as the road image; it contains the position information of the edge lines in the road image, and an edge line is composed of edge points.
The extracted edge feature map expresses position information in the image coordinate system in which the road image is located; the edge points of the edge feature map are positions in the image coordinate system.
Extracting the edge feature map of the road image can be understood as performing feature extraction on the road image to obtain the edge feature map. Specifically, the preset edge strength can be used as a threshold, and edges whose edge strength is greater than the threshold are extracted from the road image to obtain the edge feature map. Alternatively, edges with relatively high edge strength can be extracted within a local region of the road image. For example, for a pillar with ridge lines, the gradient of the ridge lines inside the pillar may not be large over the whole road image, but within the region of the pillar the edge strength of the ridge lines is relatively high, so they are also targets of edge extraction. The above two schemes can also be combined.
The edge feature map extracted according to the preset edge strength can reflect the structured features of the road image. Specifically, the Canny operator, the Sobel operator, or the LoG operator can be used to extract the edge feature map of the road image.
After the edge feature map is extracted, the values in the edge feature map can also be processed with a Gaussian blur algorithm, which makes the processed edge feature map smoother.
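The global-threshold variant described above can be sketched with a plain gradient magnitude standing in for a Canny/Sobel/LoG detector; the `np.gradient`-based strength measure and the threshold value are illustrative assumptions, not the patent's detector.

```python
import numpy as np

def edge_feature_map(img, strength_thresh=0.4):
    """Compute a per-pixel 'edge strength' as the gradient magnitude and
    keep only pixels whose strength exceeds the preset threshold, as in
    the global-threshold scheme above.  A production system would use a
    Canny/Sobel/LoG detector instead of np.gradient."""
    gy, gx = np.gradient(img.astype(float))   # row (y) and column (x) gradients
    strength = np.hypot(gx, gy)
    return np.where(strength > strength_thresh, strength, 0.0)

# A vertical step edge: only the columns around the step respond.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
fmap = edge_feature_map(img)
print(np.count_nonzero(fmap[:, :2]))  # 0 -- flat region, no response
```

Applying a Gaussian blur to `fmap` afterwards, as the text suggests, spreads each edge response over neighbouring pixels, which helps the later residual minimization converge.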
In another implementation, this step may further include: extracting the edge feature map of the road image based on an edge feature extraction model.
The edge feature extraction model is trained from sample road images and edge feature maps labeled according to the preset edge strength; the model associates a road image with its corresponding edge feature map.
Extracting the edge feature map of the road image based on the edge feature extraction model may include: inputting the road image into the edge feature extraction model, and obtaining the edge feature map of the road image output by the model.
In the training phase, multiple sample road images and labeled edge features can be obtained, and the sample road images are input into the edge feature extraction model; the model extracts the feature vector of a sample road image according to its model parameters and regresses the feature vector to obtain the reference edge features of the sample road image; the reference edge features are compared with the labeled edge features to obtain a difference amount; when the difference amount is greater than a preset difference threshold, the step of inputting the sample road images into the edge feature extraction model is executed again; when the difference amount is not greater than the preset difference threshold, training of the edge feature extraction model is determined to be complete.
The labeled edge features embody the preset edge strength; that is, the edge features in the sample road images are labeled using the preset edge strength as the standard. An edge feature extraction model trained with the above machine-learning method can extract more accurate edge features. The edge feature extraction model may also be called an edge detector.
S150: Determine the mapping difference between the target map points and the points in the edge feature map according to the initial positioning pose, and determine the vehicle positioning pose according to the mapping difference.
The initial positioning pose can reflect the pose of the vehicle within a certain accuracy range. In order to determine the pose of the vehicle more accurately, this embodiment determines the vehicle positioning pose according to the above mapping difference. When the determined vehicle positioning pose is close to the true value, the mapping difference between the target map points and the points in the edge feature map is less than a preset difference threshold.
It can be seen from the above that this embodiment can determine the initial positioning pose corresponding to the road image according to the data collected by the motion detection device, determine the target map points corresponding to the road image from the preset map according to the initial positioning pose, and determine the vehicle positioning pose according to the initial positioning pose and the mapping difference between the target map points and the points in the edge feature map of the road image, where the map points in the preset map are obtained in advance by three-dimensional reconstruction and selection of the points in the edge feature maps of sample road images. Because the edge feature map of a road image captures the structured features in the image, which are richer and more noise-resistant, the vehicle positioning pose can be determined by matching the target map points against the points in the edge feature map even when the landmarks in the scene are sparse or occluded. Therefore, this embodiment can improve the effectiveness of vision-based vehicle positioning.
In summary, this embodiment further differs from traditional vision-based vehicle positioning methods in that the traditional methods detect the semantic features corresponding to man-made objects such as lane lines, sidewalks, traffic signs and street light poles in road images; if these semantic features are absent from the scene, vehicle positioning may be impossible. The edge features extracted from the road image in this embodiment are more generalized, covering the edge features of both man-made and natural objects in the scene, and are more adaptable, which also makes the vehicle positioning process more effective.
On the other hand, because this embodiment extracts structured features, it is more robust to the illumination changes across morning, evening, weather and so on. The traditional method of extracting feature points from an image is not noise-resistant under illumination changes, while traditional trained methods do not generalize across scenes and require a large amount of training data to guarantee effectiveness. The scheme of this embodiment can therefore be used as a mapping and positioning scheme across times of day and/or seasons.
In another embodiment of the present invention, based on the embodiment shown in FIG. 1, step S150 — determining the mapping difference between the target map points and the points in the edge feature map according to the initial positioning pose, and determining the vehicle positioning pose according to the mapping difference — includes the following steps 1a to 3a.
Step 1a: Take the initial positioning pose as the initial value of the estimated pose; according to the value of the estimated pose, map the target map points and the points in the edge feature map into the same coordinate system, and determine the mapping difference between the target map points and the points in the edge feature map mapped into the same coordinate system.
In this embodiment, the target map points are located in the world coordinate system and the points in the edge feature map are located in the image coordinate system; the position information of the target map points can be mapped into the image coordinate system, or the points in the edge feature map can be mapped into the world coordinate system. The above "same coordinate system" may be the image coordinate system, the world coordinate system, or another coordinate system.
Determining the mapping difference between the target map points and the points in the edge feature map mapped into the same coordinate system can be understood as determining the difference between the position information of the target map points and the position information of the points in the edge feature map after mapping into the same coordinate system, and taking this difference as the mapping difference. This difference may also be called a residual.
For example, the above mapping difference can be calculated according to the formula e = I(u) - α, where u is the pixel coordinate obtained after mapping a target map point into the image coordinate system, I(u) denotes the response of the edge feature map at the pixel coordinate u, and α can be set according to the characteristics of the edge detector, for example to 255. If the edge detector produces statistically regular observations for the positions of the map points and the viewing angle of the camera device, the fixed value of α can be replaced by an observation-generating function α = f(p_c), where p_c is the position of the map point in the camera coordinate system.
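The residual e = I(u) - α above can be sketched as follows; the nearest-neighbour pixel lookup and the out-of-bounds handling are illustrative assumptions (a real system would typically use sub-pixel interpolation).

```python
import numpy as np

def photometric_residuals(edge_map, pixel_coords, alpha=255.0):
    """Residual e = I(u) - alpha for each projected map point: I(u) is
    the edge-feature-map response at the pixel u obtained by projecting
    a target map point, and alpha is a constant chosen from the edge
    detector's characteristics (e.g. 255).  Points projecting outside
    the image are skipped."""
    h, w = edge_map.shape
    res = []
    for u, v in np.round(np.asarray(pixel_coords)).astype(int):
        if 0 <= v < h and 0 <= u < w:
            res.append(edge_map[v, u] - alpha)
    return np.array(res)

edge_map = np.zeros((4, 4))
edge_map[1, 2] = 255.0            # a strong edge response at (u=2, v=1)
e = photometric_residuals(edge_map, [(2.0, 1.0), (0.0, 0.0)])
print(e)  # zero residual on the edge, -255 off it
```

A pose that places projected map points on strong edge responses drives these residuals toward zero, which is exactly the objective minimized in steps 2a and 3a.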
Step 2a: When the mapping difference is greater than a preset difference threshold, modify the value of the estimated pose according to the mapping difference, and return to the step in step 1a of mapping the target map points and the points in the edge feature map into the same coordinate system according to the value of the estimated pose.
When the mapping difference is greater than the preset difference threshold, the estimated pose is considered to still be far from the true pose of the vehicle; the estimated pose can be modified further and the iterative process repeated.
When modifying the value of the estimated pose according to the mapping difference, the correction can specifically be based on the Jacobian matrix, the Hessian matrix, the Gauss-Newton iteration method, or the Levenberg-Marquardt (LM) algorithm.
Step 3a: When the mapping difference is less than the preset difference threshold, determine the vehicle positioning pose according to the current value of the estimated pose.
When the mapping difference is less than the preset difference threshold, the estimated pose is considered to be very close to the true pose of the vehicle. Determining the vehicle positioning pose according to the current value of the estimated pose may specifically mean directly taking the current value of the estimated pose as the vehicle positioning pose, or applying a preset transformation to the current value of the estimated pose and taking the result as the vehicle positioning pose.
When the mapping difference is equal to the preset difference threshold, the value of the estimated pose may be modified according to the mapping difference and step 1a executed again, or the vehicle positioning pose may be determined according to the current value of the estimated pose.
In summary, in this embodiment, when determining the vehicle positioning pose, the value of the estimated pose is adjusted continuously to obtain the mapping difference between the target map points and the points in the edge feature map, so that the estimated pose gradually approaches the true value; the vehicle positioning pose is thereby solved iteratively, making the solved pose more accurate.
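The iterate-until-small-residual loop of steps 1a to 3a can be sketched with a deliberately simplified "pose": a 2-D translation, for which the correction step is closed form. This is an assumption for illustration only; the patent's pose is a full vehicle pose updated with Gauss-Newton or Levenberg-Marquardt.

```python
import numpy as np

def refine_pose(map_pts, obs_pts, pose0, tol=1e-6, max_iters=50):
    """Mirror of steps 1a-3a: map the target points with the current
    pose estimate, compare with the observed points, and while the
    mapping difference exceeds the threshold, correct the estimate."""
    pose = np.asarray(pose0, dtype=float)
    for _ in range(max_iters):
        residuals = obs_pts - (map_pts + pose)   # mapping difference (step 1a)
        if np.linalg.norm(residuals) < tol:      # close to the true pose (step 3a)
            break
        pose = pose + residuals.mean(axis=0)     # correct the estimate (step 2a)
    return pose

map_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
true_t = np.array([0.5, -0.25])
obs_pts = map_pts + true_t
est = refine_pose(map_pts, obs_pts, pose0=[0.0, 0.0])
print(est)  # recovers the true translation [0.5, -0.25]
```

With a full 6-DoF pose the per-iteration correction would come from the Jacobian of the projection, but the loop structure is the same.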
In another embodiment of the present invention, based on the embodiment shown in FIG. 1, the step in step 1a of mapping the target map points and the points in the edge feature map into the same coordinate system according to the value of the estimated pose and determining the mapping difference between them may include the following implementations.
Implementation 1: Determine the transformation matrix between the world coordinate system and the camera coordinate system according to the value of the estimated pose; according to the transformation matrix and the projection relationship between the camera coordinate system and the image coordinate system, map the target map points into the image coordinate system to obtain the first mapped positions of the target map points, and calculate the mapping difference between the first mapped positions and the positions of the points of the edge feature map in the image coordinate system.
The camera coordinate system is the three-dimensional coordinate system in which the camera device is located, and the image coordinate system is the coordinate system in which the road image is located. The estimated pose is the pose of the vehicle in the world coordinate system; according to its value, the transformation matrix between the world coordinate system and the camera coordinate system can be determined.
In this implementation, when mapping a target map point into the image coordinate system, the target map point can first be transformed into the camera coordinate system according to the following formula:
p_c = T_cb · T_bw · p_w
where p_c is the position of the target map point in the camera coordinate system, p_w is the position of the target map point in the world coordinate system, T_bw is the transformation between the world coordinate system and the vehicle body coordinate system, T_cb is the transformation matrix between the vehicle body coordinate system and the camera coordinate system, and T_cw = T_cb · T_bw denotes the transformation matrix between the world coordinate system and the camera coordinate system. The coordinates in the camera coordinate system are then converted into the image coordinate system according to the projection model of the camera device, yielding the pixel coordinate u = π(p_c) of the target map point, where π(·) denotes the projection model of the camera device and u denotes the target map point in the image coordinate system.
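The world-to-camera-to-pixel chain above can be sketched as follows, assuming 4x4 homogeneous transforms for T_cb and T_bw and a 3x3 pinhole intrinsic matrix K standing in for the unspecified projection model π(·).

```python
import numpy as np

def project_to_image(p_w, T_cam_body, T_body_world, K):
    """Map a world-frame map point to pixel coordinates as in the
    formulas above: p_c = T_cb @ T_bw @ p_w, followed by the pinhole
    projection u = pi(p_c)."""
    p_w_h = np.append(np.asarray(p_w, dtype=float), 1.0)   # homogeneous point
    p_c = (T_cam_body @ T_body_world @ p_w_h)[:3]          # camera frame
    uvw = K @ p_c
    return uvw[:2] / uvw[2]                                # pixel coordinate u

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
I4 = np.eye(4)   # identity body/camera poses for the example
u = project_to_image([0.0, 0.0, 2.0], I4, I4, K)
print(u)  # a point on the optical axis projects to the principal point (320, 240)
```

The residual e = I(u) - α of step 1a is evaluated at exactly this pixel coordinate u.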
Implementation 2: Determine the transformation matrix between the world coordinate system and the camera coordinate system according to the value of the estimated pose; according to the transformation matrix and the projection relationship between the camera coordinate system and the image coordinate system, map the points of the edge feature map into the world coordinate system to obtain the second mapped positions of the points of the edge feature map, and calculate the mapping difference between the second mapped positions and the positions of the target map points in the world coordinate system.
In summary, in this embodiment, according to the mutual transformation relationships among the world coordinate system, the camera coordinate system, and the image coordinate system, the position information of the target map points can be mapped into the image coordinate system, or the points of the edge feature map can be mapped into the world coordinate system, which provides concrete implementations for determining the mapping difference.
In another embodiment of the present invention, based on the embodiment shown in FIG. 1, step S130, i.e. the step of determining the target map points corresponding to the road image from the preset map according to the initial positioning pose, includes steps 1b and 2b.

Step 1b: take the map points contained in a sphere whose center is the position of the initial positioning pose in the preset map and whose radius is a preset distance as candidate map points. The preset distance may be a distance value determined based on experience, and multiple candidate map points may be determined.

Step 2b: filter, from the candidate map points, the map points within the collection range of the camera device, to obtain the target map points corresponding to the road image.

The target map points obtained through the above screening process are the map points that may be observable in the road image.

In summary, in this embodiment, among the map points inside the sphere centered on the position of the initial positioning pose in the preset map, the map points within the collection range of the camera device are screened as target map points. In this way, effective map points can be selected from the preset map, improving the accuracy of the determined vehicle positioning pose.
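Step 1b above amounts to a radius query around the initial position. A minimal sketch, assuming hypothetical map points and a hypothetical preset distance of 10 m:

```python
# Hypothetical sketch of step 1b: candidate map points are those inside a
# sphere centered at the initial positioning position with a preset radius.
import math

def candidate_points(map_points, center, radius):
    """Return map points whose Euclidean distance to center is <= radius."""
    return [p for p in map_points if math.dist(p, center) <= radius]

points = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (30.0, 0.0, 0.0)]
selected = candidate_points(points, center=(0.0, 0.0, 0.0), radius=10.0)
# selected contains the first two points; the third lies outside the sphere.
```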
In another embodiment of the present invention, based on the embodiment shown in FIG. 1, step 2b, i.e. the step of filtering, from the candidate map points, the map points within the collection range of the camera device to obtain the target map points corresponding to the road image, includes the following steps 2b-1 to 2b-3.

Step 2b-1: determine the conversion matrix between the world coordinate system and the camera coordinate system according to the initial positioning pose. Here, the camera coordinate system is the three-dimensional coordinate system in which the camera device is located.

Step 2b-2: map each candidate map point into the camera coordinate system according to the conversion matrix, to obtain the third mapping position of each candidate map point.

Specifically, in this step each candidate map point may be mapped into the camera coordinate system according to the following formula:

p_b = T p_w, p ∈ A

where p_b is the third mapping position of a candidate map point in the camera coordinate system, p_w is the position information of that candidate map point in the world coordinate system, T is the above conversion matrix, p is any candidate map point, and A is the set of candidate map points.

Since the camera device is fixed in the vehicle, the camera coordinate system may be replaced by the vehicle body coordinate system when screening the candidate map points, according to the known conversion relationship between the camera coordinate system and the vehicle body coordinate system.

Step 2b-3: filter the target map points corresponding to the road image from the candidate map points according to the screening condition that the third mapping position is within the collection range of the camera device in the vertical height direction.

Here, the collection range of the camera device in the vertical height direction can be expressed as a z-axis range [z1, z2]; for example, [z1, z2] may be [-0.1 m, 4 m]. Specifically, in this step the candidate map points whose z-axis values lie within this collection range may be determined as the target map points. The target map points obtained by the screening are still expressed in the coordinates of the world coordinate system.

More specifically, this step may filter the candidate map points whose third mapping positions are within the collection range of the camera device in the vertical height direction as the target map points corresponding to the road image.

In summary, this embodiment screens the candidate map points according to the range of the camera device in the height direction to obtain the target map points, which filters out the map points outside this height range from the candidate map points.
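Steps 2b-2 and 2b-3 can be sketched as follows. This is an illustration only: the 4×4 transform `T` and the candidate points are hypothetical, and the z-range uses the example values [-0.1 m, 4 m] mentioned above.

```python
# Hypothetical sketch of steps 2b-2 and 2b-3: map candidates into the camera
# frame with a 4x4 transform, keep only points whose height lies in the
# example z-range, and report the kept points in world coordinates.

def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    x, y, z = p
    return tuple(T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3]
                 for i in range(3))

def filter_by_height(T, candidates, z_min=-0.1, z_max=4.0):
    """Keep candidates whose third mapping position has z in [z_min, z_max]."""
    kept = []
    for p_w in candidates:
        p_b = transform_point(T, p_w)   # third mapping position p_b = T * p_w
        if z_min <= p_b[2] <= z_max:
            kept.append(p_w)            # result stays in world coordinates
    return kept

T = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, -1.0],   # hypothetical: camera 1 m above the world origin
     [0.0, 0.0, 0.0, 1.0]]
candidates = [(2.0, 0.0, 1.5), (2.0, 0.0, 8.0)]
targets = filter_by_height(T, candidates)  # the 8 m point falls outside the range
```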
In another embodiment of the present invention, based on the embodiment shown in FIG. 1, each determined candidate map point includes a coordinate position and a normal vector; that is, the position information of each candidate map point includes a coordinate position and a normal vector. Step 2b, i.e. the step of filtering, from the candidate map points, the map points within the collection range of the camera device to obtain the target map points corresponding to the road image, includes the following steps 2b-4 to 2b-6.

Step 2b-4: determine the line connecting the camera device and each candidate map point according to the coordinate position of each candidate map point.

The coordinate position of a candidate map point is a position in the world coordinate system. The position of the camera device in the world coordinate system can be determined from the initial positioning pose.

Step 2b-5: calculate the angle between each connecting line and the normal vector of the corresponding candidate map point.

For example, for four candidate map points A, B, C and D, the lines connecting them to the camera device O are AO, BO, CO and DO, respectively. The angle between line AO and the normal vector of candidate map point A is calculated, as are the angles between line BO and the normal vector of point B, between line CO and the normal vector of point C, and between line DO and the normal vector of point D, yielding four angles for the four candidate map points.

Step 2b-6: filter the target map points corresponding to the road image from the candidate map points according to the screening condition that the above angle is within a preset angle range.

Specifically, this step may filter the candidate map points whose angles are within the preset angle range as the target map points corresponding to the road image. The preset angle range may be determined in advance based on experience.

Refer to FIG. 2, a schematic diagram of the angle between the camera device and the normal vector of a map point provided in this embodiment. Incident light is emitted from a map point, projected to the optical center of the camera device, and imaged on the imaging plane to obtain the road image. The line along which the incident light travels can be taken as the line connecting the camera device and the map point. The normal vector of a map point is perpendicular to the plane on which the point lies. When the relative positions of the camera device and the map point differ, the angle between the incident light and the normal vector differs. As can be seen from FIG. 2, the map point corresponding to angle 1 can be collected by the camera device, while the map point corresponding to angle 2 cannot. Therefore, setting the preset angle range reasonably makes it possible to filter out, from a large number of candidate map points, the map points that can be collected by the camera device, obtaining the target map points.

In summary, in this embodiment the candidate map points are screened according to their normal vectors, filtering out the map points that cannot be observed by the camera and improving the accuracy of the target map points.
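The angle screening of steps 2b-4 to 2b-6 can be sketched as follows, assuming a hypothetical maximum angle of 80° and hypothetical points and normals:

```python
# Hypothetical sketch of steps 2b-4 to 2b-6: screen candidate map points by
# the angle between the point-to-camera line and the point's normal vector.
import math

def angle_deg(v1, v2):
    """Angle in degrees between two 3D vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

def filter_by_normal(camera_pos, candidates, max_angle=80.0):
    """candidates: list of (position, normal) pairs. Keep points whose
    connecting line makes an angle within the preset range with the normal."""
    kept = []
    for pos, normal in candidates:
        line = tuple(c - p for c, p in zip(camera_pos, pos))  # point -> camera
        if angle_deg(line, normal) <= max_angle:
            kept.append(pos)
    return kept

camera = (0.0, 0.0, 0.0)
candidates = [((0.0, 0.0, 5.0), (0.0, 0.0, -1.0)),  # faces the camera: kept
              ((0.0, 0.0, 5.0), (0.0, 0.0, 1.0))]   # faces away: filtered out
visible = filter_by_normal(camera, candidates)
```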
In another embodiment of the present invention, based on the embodiment shown in FIG. 1, each map point in the preset map is constructed using the following steps 1c to 4c.

Step 1c: obtain a sample road image, and extract a sample edge feature map of the sample road image according to a preset edge strength.

For the extraction of the sample edge feature map in this step, refer to step S140.

Step 2c: determine the sample positioning pose corresponding to the sample road image according to the data collected by the motion detection device.

Here, the sample positioning pose is a pose in the world coordinate system. The sample positioning pose can be regarded as the positioning pose of the vehicle at the current moment, and the positions of the map points in the preset map can be constructed based on it.

Step 3c: determine the position information of each point of the sample edge feature map in the world coordinate system based on a three-dimensional reconstruction algorithm and the above sample positioning pose.

In this step, determining the position information of each point of the sample edge feature map in the world coordinate system may include: determining, based on the three-dimensional reconstruction algorithm, the three-dimensional coordinates of each point of the sample edge feature map in the camera coordinate system; determining the conversion matrix between the camera coordinate system and the world coordinate system according to the sample positioning pose; and converting the above three-dimensional coordinates according to the conversion matrix to obtain the position information of each point of the sample edge feature map in the world coordinate system.

Step 4c: select map points from the points of the sample edge feature map according to a preset point density, and add the position information of each map point in the world coordinate system to the preset map.

Specifically, this step may include: constructing octree cube grids in the preset map according to an octree algorithm with a preset voxel size; and, for each octree cube grid, selecting one point from the points of the sample edge feature map that fall within that grid as the map point corresponding to the grid.

The map points selected from the points of the sample edge feature map according to the preset point density form point cloud data in the three-dimensional world coordinate system.

In summary, in this embodiment, when constructing the preset map, a structured sample edge feature map is extracted from the sample road image, and point cloud data is extracted from the sample edge feature map for map construction. This yields denser map information and increases the amount of effective information in the preset map.
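The density-controlled selection of step 4c can be illustrated with a uniform voxel grid, a simplified stand-in for the octree cube grids described above; the voxel size of 0.5 m is a hypothetical value.

```python
# Hypothetical sketch of step 4c: downsample edge feature points at a preset
# density by keeping one point per voxel cell (a flat-grid simplification of
# the octree cube grids described in the text).

def voxel_downsample(points, voxel_size=0.5):
    """Keep the first point that falls into each voxel cell."""
    cells = {}
    for p in points:
        key = tuple(int(c // voxel_size) for c in p)  # voxel index of p
        cells.setdefault(key, p)                      # one point per cell
    return list(cells.values())

points = [(0.10, 0.10, 0.10),   # same cell as the next point
          (0.20, 0.20, 0.20),
          (1.00, 1.00, 1.00)]   # a different cell
map_points = voxel_downsample(points)  # two points survive
```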
FIG. 3 is a schematic diagram of a framework of the vision-based vehicle positioning method provided by an embodiment of the present invention. The framework includes a front end and a relocation end. The motion detection devices of the front end may include sensors commonly used in intelligent vehicles, such as an IMU or a wheel speedometer. The role of the front end is to estimate, from the previous positioning pose of the vehicle and the various sensor data, the initial positioning pose of the vehicle at the current moment, and to input the initial positioning pose into the map manager and the MVS positioning optimizer. The initial positioning pose can be understood as a predicted global pose. When estimating the initial positioning pose of the vehicle, the front end may also rely on the road images.

The relocation end includes a map manager, an edge detector and an MVS positioning optimizer. The map manager loads the preset map of the environment, and the map is managed by an octree. The map manager can query, from the preset map according to the initial positioning pose, the map points that may be observed by the camera. The map manager can also screen the queried map points according to the collection range of the camera device, removing map points beyond that range, and inputs the resulting target map points into the MVS positioning optimizer.

The edge detector takes the road image as input, outputs the edge response of the road image, i.e. the edge feature map, and inputs the edge feature map into the MVS positioning optimizer.

After receiving the input initial positioning pose, target map points and edge feature map, the MVS positioning optimizer determines the global pose of the vehicle by iterative optimization, based on the mapping differences between the target map points and the points in the edge feature map. The global pose of the vehicle is the vehicle positioning pose mentioned above.
FIG. 4 is a schematic structural diagram of a vision-based vehicle positioning device provided by an embodiment of the present invention. This device embodiment corresponds to the embodiment shown in FIG. 1, and the device is applied to an electronic device. The device includes:

a road image acquisition module 410, configured to acquire a road image collected by the camera device;

an initial pose determination module 420, configured to determine an initial positioning pose corresponding to the road image according to the data collected by the motion detection device, where the initial positioning pose is a pose in the world coordinate system in which the preset map is located;

a map point determination module 430, configured to determine target map points corresponding to the road image from the preset map according to the initial positioning pose, where each map point in the preset map is obtained in advance by three-dimensionally reconstructing and selecting points in the edge feature map of a sample road image;

an edge feature extraction module 440, configured to extract an edge feature map of the road image according to a preset edge strength;

a vehicle pose determination module 450, configured to determine the mapping difference between the target map points and the points in the edge feature map according to the initial positioning pose, and to determine the vehicle positioning pose according to the mapping difference.
In another embodiment of the present invention, based on the embodiment shown in FIG. 4, the initial pose determination module 420 is specifically configured to:

take the initial positioning pose as the initial value of an estimated pose, and, according to the value of the estimated pose, map the target map points and the points in the edge feature map into the same coordinate system and determine the mapping difference between the target map points and the points in the edge feature map mapped into that coordinate system;

when the mapping difference is greater than a preset difference threshold, modify the value of the estimated pose according to the mapping difference and return to the operation of mapping the target map points and the points in the edge feature map into the same coordinate system according to the value of the estimated pose;

when the mapping difference is less than the preset difference threshold, determine the vehicle positioning pose according to the current value of the estimated pose.
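The iterative loop above can be sketched in simplified form. This is not the patent's optimizer: the pose is reduced to a single scalar offset, and the update step, threshold and data are hypothetical; it only illustrates the map, measure, adjust cycle.

```python
# Hypothetical sketch of the iterative optimization: map points with the
# estimated pose, measure the difference against edge points, and adjust the
# estimate until the difference falls below a preset threshold.

def refine_pose(initial_pose, map_points, edge_points, threshold=1e-3,
                step=0.5, max_iters=100):
    pose = initial_pose
    for _ in range(max_iters):
        # Map the target map points using the current estimated pose.
        mapped = [p + pose for p in map_points]
        # Mean signed difference against the corresponding edge points.
        diff = sum(e - m for m, e in zip(mapped, edge_points)) / len(mapped)
        if abs(diff) < threshold:
            break                # difference below threshold: pose accepted
        pose += step * diff      # modify the estimated pose by the difference
    return pose

# Edge points are the map points shifted by a true offset of 2.0.
map_pts = [0.0, 1.0, 2.0]
edge_pts = [2.0, 3.0, 4.0]
pose = refine_pose(initial_pose=0.0, map_points=map_pts, edge_points=edge_pts)
```

In the patent's setting the pose is a full 6-DoF transform and the update comes from an optimization solver rather than this fixed-step rule, but the stop-or-update structure is the same.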
In another embodiment of the present invention, based on the embodiment shown in FIG. 4, when the initial pose determination module 420 maps the target map points and the points in the edge feature map into the same coordinate system according to the value of the estimated pose and determines the mapping difference between them, the module is configured to:

determine the conversion matrix between the world coordinate system and the camera coordinate system according to the value of the estimated pose; map the target map points into the image coordinate system according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, obtaining first mapping positions of the target map points; and calculate the projection difference between the first mapping positions and the positions of the points in the edge feature map in the image coordinate system, where the camera coordinate system is the three-dimensional coordinate system in which the camera device is located and the image coordinate system is the coordinate system in which the road image is located;

or,

determine the conversion matrix between the world coordinate system and the camera coordinate system according to the value of the estimated pose; map the points in the edge feature map into the world coordinate system according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, obtaining second mapping positions of the points in the edge feature map; and calculate the projection difference between the second mapping positions and the positions of the target map points in the world coordinate system.
In another embodiment of the present invention, based on the embodiment shown in FIG. 4, the map point determination module 430 is specifically configured to:

take the map points contained in a sphere whose center is the position of the initial positioning pose in the preset map and whose radius is a preset distance as candidate map points;

filter, from the candidate map points, the map points within the collection range of the camera device, to obtain the target map points corresponding to the road image.
In another embodiment of the present invention, based on the embodiment shown in FIG. 4, when filtering, from the candidate map points, the map points within the collection range of the camera device to obtain the target map points corresponding to the road image, the map point determination module 430 is configured to:

determine the conversion matrix between the world coordinate system and the camera coordinate system according to the initial positioning pose, where the camera coordinate system is the three-dimensional coordinate system in which the camera device is located;

map each candidate map point into the camera coordinate system according to the conversion matrix, to obtain the third mapping position of each candidate map point;

filter the target map points corresponding to the road image from the candidate map points according to the screening condition that the third mapping position is within the collection range of the camera device in the vertical height direction.
In another embodiment of the present invention, based on the embodiment shown in FIG. 4, each determined candidate map point includes a coordinate position and a normal vector. When filtering, from the candidate map points, the map points within the collection range of the camera device to obtain the target map points corresponding to the road image, the map point determination module 430 is configured to:

determine the line connecting the camera device and each candidate map point according to the coordinate position of each candidate map point;

calculate the angle between each connecting line and the normal vector of the corresponding candidate map point;

filter the target map points corresponding to the road image from the candidate map points according to the screening condition that the angle is within a preset angle range.
In another embodiment of the present invention, based on the embodiment shown in FIG. 4, the device further includes a map point construction module (not shown in the figure), configured to construct each map point in the preset map by the following operations:

obtaining a sample road image, and extracting a sample edge feature map of the sample road image according to a preset edge strength;

determining a sample positioning pose corresponding to the sample road image according to the data collected by the motion detection device, where the sample positioning pose is a pose in the world coordinate system;

determining the position information of each point of the sample edge feature map in the world coordinate system based on a three-dimensional reconstruction algorithm and the sample positioning pose;

selecting map points from the points of the sample edge feature map according to a preset point density, and adding the position information of each map point in the world coordinate system to the preset map.

The foregoing device embodiment corresponds to the method embodiment and has the same technical effects as the method embodiment; for specific descriptions, refer to the method embodiment section, which are not repeated here.
FIG. 5 is a schematic structural diagram of a vehicle-mounted terminal provided by an embodiment of the present invention. The vehicle-mounted terminal includes a processor 510, a camera device 520 and a motion detection device 530. The processor 510 includes:

a road image acquisition module (not shown in the figure), configured to acquire a road image collected by the camera device 520;

an initial pose determination module (not shown in the figure), configured to determine an initial positioning pose corresponding to the road image according to the data collected by the motion detection device 530, where the initial positioning pose is a pose in the world coordinate system in which the preset map is located;

a map point determination module (not shown in the figure), configured to determine target map points corresponding to the road image from the preset map according to the initial positioning pose, where each map point in the preset map is obtained in advance by three-dimensionally reconstructing and selecting points in the edge feature map of a sample road image;

an edge feature extraction module (not shown in the figure), configured to extract an edge feature map of the road image according to a preset edge strength;

a vehicle pose determination module (not shown in the figure), configured to determine the mapping difference between the target map points and the points in the edge feature map according to the initial positioning pose, and to determine the vehicle positioning pose according to the mapping difference.
In another embodiment of the present invention, based on the embodiment shown in FIG. 5, the initial pose determination module is specifically configured to:

take the initial positioning pose as the initial value of an estimated pose, and, according to the value of the estimated pose, map the target map points and the points in the edge feature map into the same coordinate system and determine the mapping difference between the target map points and the points in the edge feature map mapped into that coordinate system;

when the mapping difference is greater than a preset difference threshold, modify the value of the estimated pose according to the mapping difference and return to the operation of mapping the target map points and the points in the edge feature map into the same coordinate system according to the value of the estimated pose;

when the mapping difference is less than the preset difference threshold, determine the vehicle positioning pose according to the current value of the estimated pose.

In another embodiment of the present invention, based on the embodiment shown in FIG. 5, when the initial pose determination module maps the target map points and the points in the edge feature map into the same coordinate system according to the value of the estimated pose and determines the mapping difference between them, the module is configured to:

determine the conversion matrix between the world coordinate system and the camera coordinate system according to the value of the estimated pose; map the target map points into the image coordinate system according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, obtaining first mapping positions of the target map points; and calculate the projection difference between the first mapping positions and the positions of the points in the edge feature map in the image coordinate system, where the camera coordinate system is the three-dimensional coordinate system in which the camera device is located and the image coordinate system is the coordinate system in which the road image is located;

or,

determine the conversion matrix between the world coordinate system and the camera coordinate system according to the value of the estimated pose; map the points in the edge feature map into the world coordinate system according to the conversion matrix and the projection relationship between the camera coordinate system and the image coordinate system, obtaining second mapping positions of the points in the edge feature map; and calculate the projection difference between the second mapping positions and the positions of the target map points in the world coordinate system.
在本发明的另一实施例中,基于图5所示实施例,地图点确定模块,具体用于:In another embodiment of the present invention, based on the embodiment shown in FIG. 5, the map point determination module is specifically used for:
以初始定位位姿在预设地图中的位置作为中心,以预设距离作为半径确定的球所包含的地图点作为待选地图点;Take the position of the initial positioning pose on the preset map as the center, and use the preset distance as the radius to determine the map point contained in the sphere as the candidate map point;
从各个待选地图点中筛选处于相机设备520的采集范围内的地图点,得到与道路图像对应的目标地图点。The map points within the collection range of the camera device 520 are filtered from each candidate map point to obtain the target map point corresponding to the road image.
In another embodiment of the present invention, based on the embodiment shown in FIG. 5, when filtering, from the candidate map points, the map points within the acquisition range of the camera device 520 to obtain the target map points corresponding to the road image, the map point determination module is configured to:
determine, according to the initial positioning pose, the transformation matrix between the world coordinate system and the camera coordinate system, wherein the camera coordinate system is the three-dimensional coordinate system in which the camera device 520 is located;
map each candidate map point into the camera coordinate system according to the transformation matrix, to obtain a third mapped position of each candidate map point; and
filter the candidate map points according to the condition that the third mapped position is within the acquisition range of the camera device 520 in the vertical height direction, to obtain the target map points corresponding to the road image.
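A minimal sketch of this world-to-camera mapping plus vertical-range filter, under assumed conventions (the patent fixes neither the parameterization of the transformation matrix nor which camera axis is "vertical"; here the camera pose is given as rotation `R_wc` and translation `t_wc` in the world frame, and the camera y-axis is treated as height):

```python
import numpy as np

def filter_by_vertical_range(points_world, R_wc, t_wc, y_min, y_max):
    """Keep candidate map points whose third mapped position (camera-frame
    coordinates) lies within the camera's vertical acquisition range."""
    # World -> camera: p_c = R_wc^T (p_w - t_wc), applied row-wise.
    points_cam = (points_world - t_wc) @ R_wc
    mask = (points_cam[:, 1] >= y_min) & (points_cam[:, 1] <= y_max)
    return points_world[mask]

# With an identity camera pose, camera-frame height equals world-frame y.
pts = np.array([[0.0, 0.5, 3.0], [0.0, 5.0, 3.0]])
kept = filter_by_vertical_range(pts, np.eye(3), np.zeros(3), -1.0, 2.0)
```

The point at height 5.0 falls outside the assumed [-1, 2] acquisition range and is discarded.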
In another embodiment of the present invention, based on the embodiment shown in FIG. 5, each determined candidate map point includes the coordinate position and the normal vector of the candidate map point; when filtering, from the candidate map points, the map points within the acquisition range of the camera device 520 to obtain the target map points corresponding to the road image, the map point determination module is configured to:
determine a line between the camera device 520 and each candidate map point according to the coordinate position of the candidate map point;
calculate the angle between each line and the normal vector of the corresponding candidate map point; and
filter the candidate map points according to the condition that the angle is within a preset angle range, to obtain the target map points corresponding to the road image.
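The normal-vector filter above keeps points whose surface roughly faces the camera. A hypothetical sketch, assuming the preset angle range is [0, max_angle_deg] and that normals point away from the surface:

```python
import numpy as np

def filter_by_normal_angle(points, normals, camera_pos, max_angle_deg):
    """Keep map points whose normal vector is within max_angle_deg of the
    line from the point to the camera device."""
    view = camera_pos - points                      # point-to-camera lines
    view = view / np.linalg.norm(view, axis=1, keepdims=True)
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cos_ang = np.clip(np.sum(view * n, axis=1), -1.0, 1.0)
    angles = np.degrees(np.arccos(cos_ang))
    return points[angles <= max_angle_deg]

cam = np.array([0.0, 0.0, 5.0])
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
nrm = np.array([[0.0, 0.0, 1.0],   # faces the camera (angle 0)
                [1.0, 0.0, 0.0]])  # faces sideways (angle > 90 deg)
visible = filter_by_normal_angle(pts, nrm, cam, 45.0)
```

Only the camera-facing point survives the 45-degree threshold; the sideways-facing point is rejected as unlikely to be observed in the road image.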
In another embodiment of the present invention, based on the embodiment shown in FIG. 5, the processor 510 further includes a map point construction module, configured to construct each map point in the preset map through the following operations:
acquiring a sample road image, and extracting a sample edge feature map of the sample road image according to a preset edge strength;
determining, according to data collected by the motion detection device, a sample positioning pose corresponding to the sample road image, wherein the sample positioning pose is a pose in the world coordinate system;
determining, based on a three-dimensional reconstruction algorithm and the sample positioning pose, the position information of each point of the sample edge feature map in the world coordinate system; and
selecting map points from the points of the sample edge feature map according to a preset point density, and adding the position information of each map point in the world coordinate system to the preset map.
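The final selection step thins the reconstructed edge points to a preset point density. One plausible realization (not specified by the patent) is a voxel-grid subsample that keeps one point per grid cell, with the cell size controlling the density:

```python
import numpy as np

def subsample_by_density(points_world, cell_size):
    """Enforce a preset point density by keeping at most one reconstructed
    edge point per cell of a 3-D grid; cell_size sets the density."""
    cells = np.floor(points_world / cell_size).astype(np.int64)
    _, first_idx = np.unique(cells, axis=0, return_index=True)
    return points_world[np.sort(first_idx)]

# Three points crowded into one 0.5 m cell plus one isolated point
# collapse to two map points.
pts = np.array([[0.01, 0.0, 0.0], [0.02, 0.0, 0.0], [0.03, 0.0, 0.0],
                [5.0, 5.0, 5.0]])
map_pts = subsample_by_density(pts, 0.5)
```

The surviving points (with their world-frame coordinates) are what would be added to the preset map.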
This terminal embodiment and the method embodiment shown in FIG. 1 are based on the same inventive concept, and related parts may be referred to each other. The terminal embodiment corresponds to the method embodiment and has the same technical effects; for details, refer to the method embodiment.
Those of ordinary skill in the art will understand that the drawings are only schematic diagrams of one embodiment, and that the modules or processes in the drawings are not necessarily required for implementing the present invention.
Those of ordinary skill in the art will understand that the modules of the apparatus in an embodiment may be distributed in the apparatus as described in the embodiment, or may, with corresponding changes, be located in one or more apparatuses different from this embodiment. The modules of the above embodiments may be combined into one module, or further divided into multiple sub-modules.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A vision-based vehicle positioning method, comprising:
    acquiring a road image collected by a camera device;
    determining, according to data collected by a motion detection device, an initial positioning pose corresponding to the road image, wherein the initial positioning pose is a pose in the world coordinate system in which a preset map is located;
    determining, according to the initial positioning pose, target map points corresponding to the road image from the preset map, wherein each map point in the preset map is obtained in advance by three-dimensionally reconstructing, and then selecting from, the points in an edge feature map of a sample road image;
    extracting an edge feature map of the road image according to a preset edge strength; and
    determining, according to the initial positioning pose, a mapping difference between the target map points and the points in the edge feature map, and determining a vehicle positioning pose according to the mapping difference.
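The edge extraction step of claim 1 depends only on a preset edge strength; the operator itself is unspecified. A hypothetical sketch using a gradient-magnitude threshold (central differences stand in for whatever edge operator an implementation would actually use):

```python
import numpy as np

def extract_edge_feature_map(image, edge_strength):
    """Binary edge feature map: pixels whose gradient magnitude meets the
    preset edge strength."""
    gy, gx = np.gradient(image.astype(float))  # row- and column-direction gradients
    magnitude = np.hypot(gx, gy)
    return magnitude >= edge_strength

# A vertical intensity step produces edge responses around the step.
img = np.zeros((5, 6))
img[:, 3:] = 10.0
edges = extract_edge_feature_map(img, 4.0)
```

Raising the preset edge strength keeps only the strongest contours (lane markings, curbs), which is what the later map-matching steps rely on.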
  2. The method according to claim 1, wherein the step of determining, according to the initial positioning pose, the mapping difference between the target map points and the points in the edge feature map, and determining the vehicle positioning pose according to the mapping difference comprises:
    taking the initial positioning pose as the initial value of an estimated pose, mapping the target map points and the points in the edge feature map into the same coordinate system according to the value of the estimated pose, and determining the mapping difference between the target map points and the points in the edge feature map mapped into the same coordinate system;
    when the mapping difference is greater than a preset difference threshold, modifying the value of the estimated pose according to the mapping difference, and returning to the step of mapping the target map points and the points in the edge feature map into the same coordinate system according to the value of the estimated pose; and
    when the mapping difference is less than the preset difference threshold, determining the vehicle positioning pose according to the current value of the estimated pose.
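The iterate-until-threshold loop of claim 2 can be sketched as follows. This is a deliberately simplified illustration: the update rule shown is only valid for a toy 2-D translation "pose", whereas a real system would take a Gauss-Newton or Levenberg-Marquardt step over a full 6-DoF pose; the function names and matching of points are assumptions.

```python
import numpy as np

def refine_pose(project_fn, edge_points, initial_pose, diff_threshold,
                max_iters=50):
    """Project map points with the current estimated pose, measure the
    mapping difference, and update the estimate until the difference
    falls below the preset threshold."""
    pose = np.array(initial_pose, dtype=float)
    for _ in range(max_iters):
        projected = project_fn(pose)            # map points -> same frame
        residuals = edge_points - projected     # per-point mapping difference
        diff = np.mean(np.linalg.norm(residuals, axis=1))
        if diff < diff_threshold:
            break
        pose += residuals.mean(axis=0)          # toy translation-only update
    return pose

# Edge points offset from the projected map points by a true (2, -1) shift.
base = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
edge = base + np.array([2.0, -1.0])
pose = refine_pose(lambda p: base + p, edge, [0.0, 0.0], 1e-6)
```

Starting from the initial positioning pose (0, 0), the loop converges to the true offset, which then serves as the vehicle positioning pose.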
  3. The method according to claim 2, wherein the step of mapping the target map points and the points in the edge feature map into the same coordinate system according to the value of the estimated pose, and determining the mapping difference between the target map points and the points in the edge feature map mapped into the same coordinate system comprises:
    determining, according to the value of the estimated pose, a transformation matrix between the world coordinate system and a camera coordinate system; mapping the target map points into an image coordinate system according to the transformation matrix and the projection relationship between the camera coordinate system and the image coordinate system, to obtain first mapped positions of the target map points; and calculating the projection difference between the first mapped positions and the positions of the points in the edge feature map in the image coordinate system, wherein the camera coordinate system is the three-dimensional coordinate system in which the camera device is located, and the image coordinate system is the coordinate system in which the road image is located;
    or,
    determining, according to the value of the estimated pose, the transformation matrix between the world coordinate system and the camera coordinate system; mapping the points in the edge feature map into the world coordinate system according to the transformation matrix and the projection relationship between the camera coordinate system and the image coordinate system, to obtain second mapped positions of the points in the edge feature map; and calculating the projection difference between the second mapped positions and the positions of the target map points in the world coordinate system.
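The first branch of claim 3 (world frame, then camera frame, then image coordinates) can be sketched with a pinhole model; the patent does not fix the camera model, so the intrinsic matrix K and the (R_wc, t_wc) pose parameterization are assumptions of this example:

```python
import numpy as np

def project_to_image(points_world, R_wc, t_wc, K):
    """Map target map points into image coordinates via the world-to-camera
    transformation and a pinhole intrinsic matrix K, yielding the first
    mapped positions."""
    p_cam = (points_world - t_wc) @ R_wc   # world -> camera frame
    uvw = p_cam @ K.T                      # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]        # perspective division

K = np.array([[100.0,   0.0, 50.0],
              [  0.0, 100.0, 50.0],
              [  0.0,   0.0,  1.0]])
uv = project_to_image(np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0]]),
                      np.eye(3), np.zeros(3), K)
```

The projection difference of claim 3 would then be the pixel-space distance between these first mapped positions and the matched edge feature points.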
  4. The method according to claim 1, wherein the step of determining, according to the initial positioning pose, the target map points corresponding to the road image from the preset map comprises:
    taking, as candidate map points, the map points contained in a sphere centered on the position of the initial positioning pose in the preset map and having a preset distance as its radius; and
    filtering, from the candidate map points, the map points within the acquisition range of the camera device, to obtain the target map points corresponding to the road image.
  5. The method according to claim 4, wherein the step of filtering, from the candidate map points, the map points within the acquisition range of the camera device, to obtain the target map points corresponding to the road image comprises:
    determining, according to the initial positioning pose, the transformation matrix between the world coordinate system and the camera coordinate system, wherein the camera coordinate system is the three-dimensional coordinate system in which the camera device is located;
    mapping each candidate map point into the camera coordinate system according to the transformation matrix, to obtain a third mapped position of each candidate map point; and
    filtering the candidate map points according to the condition that the third mapped position is within the acquisition range of the camera device in the vertical height direction, to obtain the target map points corresponding to the road image.
  6. The method according to claim 4, wherein each determined candidate map point includes the coordinate position and the normal vector of the candidate map point, and the step of filtering, from the candidate map points, the map points within the acquisition range of the camera device, to obtain the target map points corresponding to the road image comprises:
    determining a line between the camera device and each candidate map point according to the coordinate position of the candidate map point;
    calculating the angle between each line and the normal vector of the corresponding candidate map point; and
    filtering the candidate map points according to the condition that the angle is within a preset angle range, to obtain the target map points corresponding to the road image.
  7. The method according to claim 1, wherein each map point in the preset map is constructed by:
    acquiring a sample road image, and extracting a sample edge feature map of the sample road image according to a preset edge strength;
    determining, according to data collected by the motion detection device, a sample positioning pose corresponding to the sample road image, wherein the sample positioning pose is a pose in the world coordinate system;
    determining, based on a three-dimensional reconstruction algorithm and the sample positioning pose, the position information of each point of the sample edge feature map in the world coordinate system; and
    selecting map points from the points of the sample edge feature map according to a preset point density, and adding the position information of each map point in the world coordinate system to the preset map.
  8. A vision-based vehicle positioning apparatus, comprising:
    a road image acquisition module, configured to acquire a road image collected by a camera device;
    an initial pose determination module, configured to determine, according to data collected by a motion detection device, an initial positioning pose corresponding to the road image, wherein the initial positioning pose is a pose in the world coordinate system in which a preset map is located;
    a map point determination module, configured to determine, according to the initial positioning pose, target map points corresponding to the road image from the preset map, wherein each map point in the preset map is obtained in advance by three-dimensionally reconstructing, and then selecting from, the points in an edge feature map of a sample road image;
    an edge feature extraction module, configured to extract an edge feature map of the road image according to a preset edge strength; and
    a vehicle pose determination module, configured to determine, according to the initial positioning pose, a mapping difference between the target map points and the points in the edge feature map, and determine a vehicle positioning pose according to the mapping difference.
  9. The apparatus according to claim 8, wherein the initial pose determination module is specifically configured to:
    take the initial positioning pose as the initial value of an estimated pose, map the target map points and the points in the edge feature map into the same coordinate system according to the value of the estimated pose, and determine the mapping difference between the target map points and the points in the edge feature map mapped into the same coordinate system;
    when the mapping difference is greater than a preset difference threshold, modify the value of the estimated pose according to the mapping difference, and return to the operation of mapping the target map points and the points in the edge feature map into the same coordinate system according to the value of the estimated pose; and
    when the mapping difference is less than the preset difference threshold, determine the vehicle positioning pose according to the current value of the estimated pose.
  10. A vehicle-mounted terminal, comprising a processor, a camera device, and a motion detection device, wherein the processor includes:
    a road image acquisition module, configured to acquire a road image collected by the camera device;
    an initial pose determination module, configured to determine, according to data collected by the motion detection device, an initial positioning pose corresponding to the road image, wherein the initial positioning pose is a pose in the world coordinate system in which a preset map is located;
    a map point determination module, configured to determine, according to the initial positioning pose, target map points corresponding to the road image from a preset map, wherein each map point in the preset map is obtained in advance by three-dimensionally reconstructing, and then selecting from, the points in an edge feature map of a sample road image;
    an edge feature extraction module, configured to extract an edge feature map of the road image according to a preset edge strength; and
    a vehicle pose determination module, configured to determine, according to the initial positioning pose, a mapping difference between the target map points and the points in the edge feature map, and determine a vehicle positioning pose according to the mapping difference.
PCT/CN2019/113488 2019-07-29 2019-10-26 Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal WO2021017211A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910687280.0 2019-07-29
CN201910687280.0A CN112308913B (en) 2019-07-29 2019-07-29 Vehicle positioning method and device based on vision and vehicle-mounted terminal

Publications (1)

Publication Number Publication Date
WO2021017211A1 true WO2021017211A1 (en) 2021-02-04

Family

ID=74229631

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/113488 WO2021017211A1 (en) 2019-07-29 2019-10-26 Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal

Country Status (2)

Country Link
CN (1) CN112308913B (en)
WO (1) WO2021017211A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113091889A (en) * 2021-02-20 2021-07-09 周春伟 Method and device for measuring road brightness
CN113298910A (en) * 2021-05-14 2021-08-24 阿波罗智能技术(北京)有限公司 Method, apparatus and storage medium for generating traffic sign line map
WO2022242395A1 (en) * 2021-05-20 2022-11-24 北京城市网邻信息技术有限公司 Image processing method and apparatus, electronic device and computer-readable storage medium
CN115952248A (en) * 2022-12-20 2023-04-11 阿波罗智联(北京)科技有限公司 Pose processing method, device, equipment, medium and product of terminal equipment

Citations (5)

Publication number Priority date Publication date Assignee Title
CN104833370A (en) * 2014-02-08 2015-08-12 本田技研工业株式会社 System and method for mapping, localization and pose correction
CN107703528A (en) * 2017-09-25 2018-02-16 武汉光庭科技有限公司 Low precision GPS vision positioning method and system is combined in automatic Pilot
CN107704821A (en) * 2017-09-29 2018-02-16 河北工业大学 A kind of vehicle pose computational methods of bend
CN109631855A (en) * 2019-01-25 2019-04-16 西安电子科技大学 High-precision vehicle positioning method based on ORB-SLAM
CN109887033A (en) * 2019-03-01 2019-06-14 北京智行者科技有限公司 Localization method and device

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US10935978B2 (en) * 2017-10-30 2021-03-02 Nio Usa, Inc. Vehicle self-localization using particle filters and visual odometry
CN108802785B (en) * 2018-08-24 2021-02-02 清华大学 Vehicle self-positioning method based on high-precision vector map and monocular vision sensor
CN109733383B (en) * 2018-12-13 2021-07-20 初速度(苏州)科技有限公司 Self-adaptive automatic parking method and system

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN104833370A (en) * 2014-02-08 2015-08-12 本田技研工业株式会社 System and method for mapping, localization and pose correction
CN107703528A (en) * 2017-09-25 2018-02-16 武汉光庭科技有限公司 Low precision GPS vision positioning method and system is combined in automatic Pilot
CN107704821A (en) * 2017-09-29 2018-02-16 河北工业大学 A kind of vehicle pose computational methods of bend
CN109631855A (en) * 2019-01-25 2019-04-16 西安电子科技大学 High-precision vehicle positioning method based on ORB-SLAM
CN109887033A (en) * 2019-03-01 2019-06-14 北京智行者科技有限公司 Localization method and device

Also Published As

Publication number Publication date
CN112308913B (en) 2024-03-29
CN112308913A (en) 2021-02-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19940099

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19940099

Country of ref document: EP

Kind code of ref document: A1