WO2021017213A1 - A visual positioning effect self-check method and vehicle-mounted terminal (Google Patents)

Publication number
WO2021017213A1
Authority
WO
WIPO (PCT)
Prior art keywords
positioning
road
error
mapping
road feature
Prior art date
Application number
PCT/CN2019/113491
Other languages
English (en)
French (fr)
Inventor
Jiang Xiubao (姜秀宝)
Original Assignee
Momenta (Suzhou) Technology Co., Ltd. (魔门塔(苏州)科技有限公司)
Beijing Chusudu Technology Co., Ltd. (北京初速度科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Momenta (Suzhou) Technology Co., Ltd. and Beijing Chusudu Technology Co., Ltd.
Priority to DE112019007454.7T priority Critical patent/DE112019007454T5/de
Publication of WO2021017213A1 publication Critical patent/WO2021017213A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30Map- or contour-matching
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Definitions

  • the invention relates to the technical field of intelligent driving, in particular to a method for self-checking visual positioning effects and a vehicle-mounted terminal.
  • GNSS: Global Navigation Satellite System
  • IMU: Inertial Measurement Unit
  • the correspondence between the high-precision map and the road features in the parking lot can usually be established in advance.
  • the camera module collects the road image
  • the road feature in the road image is compared with the road feature in the high-precision map.
  • the road features are matched, and the vehicle's positioning pose in terms of visual positioning is determined according to the matching result.
  • By combining the visual positioning with the trajectory estimated by the IMU, a more precise positioning pose of the vehicle can be obtained.
  • However, reasons such as occlusion of road features in the road image or equipment failure may make the positioning result of the visual positioning very inaccurate. Therefore, a method for self-checking the visual positioning effect is urgently needed.
  • the invention provides a self-checking method for visual positioning effect and a vehicle-mounted terminal to realize the evaluation of the visual positioning effect.
  • the specific technical solution is as follows.
  • embodiments of the present invention provide a self-checking method for visual positioning effect, including:
  • the preset map and the first positioning pose are obtained.
  • the first positioning error corresponding to the first mapping error is determined as the positioning accuracy of the first positioning pose.
  • the step of determining the first positioning error corresponding to the first mapping error according to the pre-established correspondence between the mapping error and the positioning error in the target map area includes:
  • g0(Δx, Δy) = a0Δx² + b0ΔxΔy + c0Δy² + d0Δx + e0Δy + f0
  • where a0, b0, c0, d0, e0, and f0 are predetermined function coefficients
  • The correspondence between the mapping error and the positioning error in the target map area is established as follows: based on the preset mapping error function related to the positioning error in the target map area, solve for the mapping error function at which the residual between the mapping error function and the perturbation mapping errors corresponding to the multiple perturbed positioning poses takes a minimum value, so as to obtain the functional relationship between the mapping error and the positioning error in the target map area.
  • the step of solving the mapping error function when the residual error between the mapping error function and the disturbance mapping errors corresponding to the plurality of disturbance positioning poses takes a minimum value includes:
  • the method further includes:
  • the method further includes:
  • When the positioning quality corresponding to the preset number of road image frames is less than the preset positioning quality, and the positioning accuracy corresponding to the preset number of road image frames is less than the preset positioning accuracy, it is determined that the visual positioning based on the road image is invalid.
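As an illustrative sketch only (not the patent's implementation), the consecutive-frame failure check described above might look like the following; the threshold values and the frame count n are assumed placeholders:

```python
def visual_positioning_failed(qualities, accuracies,
                              min_quality=0.5, min_accuracy=0.5, n=5):
    """Declare visual positioning invalid when, over the last n
    consecutive frames, BOTH the positioning quality and the
    positioning accuracy fall below their preset thresholds.

    qualities, accuracies: per-frame scores, most recent last.
    min_quality, min_accuracy, n: illustrative preset values.
    """
    if len(qualities) < n or len(accuracies) < n:
        return False  # not enough consecutive frames observed yet
    recent_q = qualities[-n:]
    recent_a = accuracies[-n:]
    # Both conditions must hold for every one of the n frames.
    return all(q < min_quality for q in recent_q) and \
           all(a < min_accuracy for a in recent_a)
```

A single frame with acceptable quality or accuracy within the window resets the verdict, which matches the requirement that the condition hold for a continuous preset number of frames.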
  • the step of performing vehicle positioning according to the matching result between the first road feature in the road image and the road feature established in advance in the preset map to obtain the first positioning pose of the vehicle includes:
  • the estimated pose of the vehicle is adjusted, and the step of determining the reference mapping error between the first road feature and the second road feature based on the adjusted estimated pose of the vehicle is executed again;
  • the first positioning pose of the vehicle is determined according to the current estimated pose of the vehicle.
  • the step of determining the first mapping error between the first road feature and the second road feature includes:
  • according to the first positioning pose, the first road feature is mapped to a first mapping position in the preset map, and the error between the first mapping position and the position of the second road feature in the preset map is calculated to obtain the first mapping error;
  • an embodiment of the present invention provides a vehicle-mounted terminal, including: a processor and an image acquisition device; the processor includes: a feature acquisition module, a mapping determination module, an area determination module, and an accuracy determination module;
  • the image acquisition device is used to acquire road images
  • the feature acquisition module is used to perform vehicle positioning according to the matching result between the first road feature in the road image and the road feature pre-established in the preset map, and, when the first positioning pose of the vehicle is obtained, to acquire the second road feature in the preset map that is successfully matched with the first road feature;
  • the mapping determination module is configured to determine a first mapping error between the first road feature and the second road feature
  • the area determining module is configured to determine the target map area where the first positioning pose is located from among multiple different map areas included in the preset map;
  • the accuracy determination module is configured to determine the first positioning error corresponding to the first mapping error according to the corresponding relationship between the mapping error and the positioning error in the target map area established in advance, as the first positioning pose positioning accuracy.
  • the accuracy determination module is specifically used for:
  • g0(Δx, Δy) = a0Δx² + b0ΔxΔy + c0Δy² + d0Δx + e0Δy + f0
  • where a0, b0, c0, d0, e0, and f0 are predetermined function coefficients
  • the processor further includes: a relationship establishment module; the relationship establishment module is configured to adopt the following operations to establish the correspondence between the mapping error and the positioning error in the target map area:
  • based on the preset mapping error function related to the positioning error in the target map area, solving for the mapping error function at which the residual between the mapping error function and the perturbation mapping errors corresponding to the multiple perturbed positioning poses takes a minimum value, to obtain the functional relationship between the mapping error and the positioning error in the target map area.
  • the method includes:
  • the processor further includes:
  • the average quantity determination module is configured to determine the average quantity of target road features corresponding to the target map area according to the predetermined average quantity of road features corresponding to each map area after determining the target map area where the first positioning pose is located ;
  • a recognition amount determination module configured to determine the recognition road feature amount corresponding to the road image according to the proportion of the first road feature in the road image
  • the quality determination module is configured to determine the positioning quality for the first positioning pose according to the size relationship between the identified road feature quantity and the target road feature average quantity.
  • the processor further includes:
  • the failure determination module is used to obtain the positioning quality and positioning accuracy corresponding to a continuous preset number of road image frames; when the positioning quality corresponding to the preset number of road image frames is less than the preset positioning quality, and the positioning accuracy corresponding to the preset number of road image frames is less than the preset positioning accuracy, it is determined that the visual positioning based on the road image is invalid.
  • the processor further includes: a visual positioning module, configured to perform vehicle positioning according to the matching result between the first road feature in the road image and the road feature pre-established in the preset map, to obtain the first positioning pose of the vehicle;
  • the visual positioning module is specifically used for:
  • the estimated pose of the vehicle is adjusted, and the step of determining the reference mapping error between the first road feature and the second road feature based on the adjusted estimated pose of the vehicle is executed again;
  • the first positioning pose of the vehicle is determined according to the current estimated pose of the vehicle.
  • the mapping determination module is specifically used for:
  • according to the first positioning pose, the first road feature is mapped to a first mapping position in the preset map, and the error between the first mapping position and the position of the second road feature in the preset map is calculated to obtain the first mapping error.
  • The visual positioning effect self-check method and vehicle-mounted terminal provided by the embodiments of the present invention can, when the first positioning pose of the vehicle is obtained based on visual positioning, determine the first mapping error between the road features in the road image and the road features in the preset map, determine the target map area where the first positioning pose is located, and determine the positioning error corresponding to the first mapping error according to the pre-established correspondence between the mapping error and the positioning error in the target map area.
  • the embodiment of the present invention can determine the positioning error, that is, the positioning accuracy according to the mapping error in the visual positioning, and can realize the self-check of the visual positioning effect.
  • The correspondence between the mapping error and the positioning error of road features in different map areas is established in advance. In this way, the positioning error can be determined from the mapping error and the corresponding relationship, providing an implementable way.
  • When establishing the correspondence between the mapping error and the positioning error, first obtain the sample road feature corresponding to an image frame, the road feature successfully matched in the preset map, and the standard positioning pose corresponding to the image frame. Multiple disturbances are then added on the basis of the standard positioning pose, and the corresponding relationship in the map area is solved based on the established residual function. In this way, the correspondences for different map areas can be established more quickly, which also provides an implementable way of determining the positioning error of the vehicle.
  • The quality of the road features in the road image can be evaluated, for example whether there are abnormal conditions such as occlusion in the road image or equipment failure, and the positioning quality can then be evaluated.
  • FIG. 1 is a schematic flowchart of a method for self-checking visual positioning effect according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a process for establishing the correspondence between mapping errors and positioning errors according to an embodiment of the present invention
  • Fig. 3 is a schematic structural diagram of a vehicle-mounted terminal provided by an embodiment of the present invention.
  • Visual positioning can be used alone, or visual positioning can be combined with positioning based on other sensor data.
  • the application scenario of visual positioning may be in a parking lot or in other places, which is not limited in the present invention.
  • the parking lot can be an indoor parking lot or an underground parking lot.
  • the embodiment of the present invention takes the application of visual positioning in the parking lot as an example for description.
  • the embodiment of the invention discloses a visual positioning effect self-checking method and a vehicle-mounted terminal, which can realize the evaluation of the visual positioning effect.
  • the embodiments of the present invention will be described in detail below.
  • FIG. 1 is a schematic flowchart of a method for self-checking visual positioning effect provided by an embodiment of the present invention. This method is applied to electronic equipment.
  • the electronic device may be an ordinary computer, a server, or an intelligent terminal device, etc., and may also be an in-vehicle terminal such as an in-vehicle computer or an in-vehicle industrial control computer (IPC).
  • Step S110: Perform vehicle positioning according to the matching result between the first road feature in the road image and the road feature pre-established in the preset map, and when the first positioning pose of the vehicle is obtained, obtain the second road feature in the preset map that is successfully matched with the first road feature.
  • the road image may be an image collected by a camera module installed in the vehicle.
  • the road image contains the surrounding road features and background parts when the vehicle is driving.
  • Road features include but are not limited to lane lines, street light poles, traffic signs, edge lines, stop lines, traffic lights, and other signs on the road.
  • Edge lines include, but are not limited to, lane edge lines and parking space edge lines.
  • the preset map may be a high-precision map established in advance.
  • the preset map may include the road features of each location point.
  • the location points in the preset map can be represented by two-dimensional coordinate points or three-dimensional coordinate points.
  • An application scenario of this embodiment is: when the vehicle is driving, after the camera module acquires a road image, the first road feature is detected from the road image and matched against the road features in the preset map, and the successfully matched road feature in the preset map is taken as the second road feature. According to the first road feature and the second road feature, the current positioning pose of the vehicle can be determined as the first positioning pose.
  • the aforementioned road image may be one of multiple road image frames collected by the camera module.
  • the positioning pose includes information such as the position point coordinates and the vehicle heading angle in the preset map.
  • As for the execution timing of this embodiment, the self-check method provided in this embodiment may be executed every time the first positioning pose is updated during the visual positioning process, or at other times, for example after a long period of time has elapsed.
  • Step S120 Determine the first mapping error between the first road feature and the second road feature.
  • The first road feature is the road feature in the road image and is represented by its position in the road image.
  • the second road feature is the road feature in the preset map, which is represented by the coordinates in the coordinate system where the preset map is located.
  • the first road feature and the second road feature may be mapped to the same coordinate system to determine the mapping error.
  • This step may specifically include the following implementation manners:
  • In the first implementation, according to the first positioning pose and the position of the first road feature in the road image, the first mapping position of the first road feature mapped into the preset map is calculated; the error between the first mapping position and the position of the second road feature in the preset map is then calculated as the first mapping error.
  • the positions of the first road feature and the second road feature are compared to obtain the first mapping error.
  • When calculating the mapping of the first road feature to the first mapping position in the preset map from the first positioning pose and the position of the first road feature in the road image, the position of the first road feature in the road image can specifically be converted to the world coordinate system based on the conversion relationship between the image coordinate system and the world coordinate system together with the first positioning pose, obtaining the first mapping position.
  • the image coordinate system is the coordinate system where the road image is located
  • the world coordinate system is the coordinate system where the preset map is located.
  • the conversion relationship between the image coordinate system and the world coordinate system can be obtained through the internal parameter matrix between the image coordinate system and the camera coordinate system, and the rotation matrix and the translation matrix between the camera coordinate system and the world coordinate system.
  • In the second implementation, the second mapping position of the second road feature mapped into the coordinate system of the road image is calculated; the error between the position of the first road feature in the road image and the second mapping position is then calculated as the first mapping error.
  • the positions of the first road feature and the second road feature are compared to obtain the first mapping error.
  • When calculating the mapping of the second road feature to the second mapping position in the coordinate system of the road image from the first positioning pose and the position of the second road feature in the preset map, the position of the second road feature in the preset map can be converted to the image coordinate system based on the conversion relationship between the image coordinate system and the world coordinate system together with the first positioning pose, obtaining the second mapping position.
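Both mapping directions described above reduce to projecting a point between the world (map) coordinate system and the image coordinate system using the camera parameters and the positioning pose. The following is a minimal sketch, assuming a pinhole camera with known intrinsic matrix K and a world-to-camera rotation R and translation t derived from the pose; all function names and variables are illustrative, not from the patent:

```python
import numpy as np

def project_to_image(p_world, K, R, t):
    """Project a 3-D world point into the image plane.

    R, t: world-to-camera rotation and translation derived from the
          vehicle's positioning pose (assumed known here).
    K:    3x3 camera intrinsic matrix.
    """
    p_cam = R @ p_world + t   # world -> camera coordinates
    uvw = K @ p_cam           # camera -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]   # perspective division -> (u, v)

def mapping_error(feature_px, map_point_world, K, R, t):
    """Pixel-space distance between a detected road feature and the
    projection of its matched map feature (the 'second mapping
    position' of the second implementation above)."""
    projected = project_to_image(map_point_world, K, R, t)
    return np.linalg.norm(np.asarray(feature_px) - projected)
```

The first implementation is the inverse direction (image to world); in practice that requires back-projecting along the camera ray onto the road plane, which adds a ground-plane assumption not shown here.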
  • Step S130 Determine the target map area where the first positioning pose is located from among multiple different map areas included in the preset map.
  • the preset map may be divided into a plurality of different map areas according to the road features contained in the preset map in advance, and the road features in each map area have relevance or location similarity.
  • the map area can be a circular area, a rectangular area, or other area shapes.
  • the map area where the position coordinates in the first positioning pose are located may be specifically determined as the target map area.
  • Step S140 Determine the first positioning error corresponding to the first mapping error as the positioning accuracy of the first positioning pose according to the pre-established correspondence between the mapping error and the positioning error in the target map area.
  • The correspondence between the mapping error and the positioning error in each different map area can be established in advance, and the correspondence between the mapping error and the positioning error in the target map area can be determined from among these correspondences.
  • the corresponding relationship between the mapping error and the positioning error can be represented by a mapping error function with the positioning error as a variable.
  • the first mapping error may be substituted into the mapping error function to obtain the first positioning error corresponding to the first mapping error.
  • the positioning error can be understood as the difference between the current positioning pose and the real positioning pose, and it can also indicate the accuracy of the positioning pose.
  • the positioning error can be 5cm, 10cm, etc. The greater the positioning error, the smaller the positioning accuracy, and the smaller the positioning error, the greater the positioning accuracy.
  • the mapping method used when determining the first mapping error in step S120 should be the same mapping method used when establishing the correspondence between the mapping error and the positioning error.
  • this embodiment can determine the first mapping error between the road feature in the road image and the road feature in the preset map when the first positioning pose of the vehicle is obtained based on the visual positioning, and the first The target map area where the positioning pose is located is determined, and the positioning error corresponding to the first mapping error is determined according to the pre-established correspondence between the mapping error and the positioning error in the target map area.
  • This embodiment can determine the positioning error, that is, the positioning accuracy according to the mapping error in the visual positioning, and can realize the self-check on the visual positioning effect.
  • In step S140, the step of determining the first positioning error corresponding to the first mapping error according to the pre-established correspondence between the mapping error and the positioning error in the target map area may include:
  • g0(Δx, Δy) = a0Δx² + b0ΔxΔy + c0Δy² + d0Δx + e0Δy + f0
  • where a0, b0, c0, d0, e0, and f0 are predetermined function coefficients
  • The mapping error functions corresponding to different map areas are different; specifically, the function coefficients may differ.
  • the first mapping error cost can be understood as a plane
  • the first mapping error cost is substituted into the mapping error function g 0 , which is to find the intersection point of the paraboloid and the plane.
  • the intersection point is an ellipse
  • the points on the ellipse are the positioning errors ( ⁇ x, ⁇ y) obtained by the solution.
  • the maximum values among the multiple positioning errors obtained by the solution give the extents of the ellipse along its major and minor axes (x_err and y_err).
  • this embodiment provides a specific implementation manner for determining the first positioning error corresponding to the first mapping error according to the mapping error function, and the method is easier to implement in practical applications.
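The ellipse-intersection step can be sketched numerically: substituting the observed mapping error cost into g0 yields a conic in (Δx, Δy), and the extreme |Δx| and |Δy| on that conic follow from setting the discriminant of the quadratic in the other variable to zero. The sketch below is one possible realization, assuming the intersection is a real ellipse (i.e. cost lies above the paraboloid's minimum):

```python
import numpy as np

def positioning_error_bounds(cost, a, b, c, d, e, f):
    """Solve g0(dx, dy) = cost for the extreme |dx| and |dy| on the
    intersection ellipse of the paraboloid with the plane z = cost.

    The conic is: a*dx^2 + b*dx*dy + c*dy^2 + d*dx + e*dy + (f-cost) = 0.
    Extreme dx occurs where the quadratic in dy has a double root,
    i.e. its discriminant (b*dx+e)^2 - 4c(a*dx^2+d*dx+f-cost) = 0.
    """
    f1 = f - cost
    # Quadratic in dx from the vanishing discriminant in dy:
    roots_x = np.roots([b * b - 4 * a * c, 2 * b * e - 4 * c * d, e * e - 4 * c * f1])
    # By symmetry, quadratic in dy from the vanishing discriminant in dx:
    roots_y = np.roots([b * b - 4 * a * c, 2 * b * d - 4 * a * e, d * d - 4 * a * f1])
    x_err = np.max(np.abs(roots_x.real))
    y_err = np.max(np.abs(roots_y.real))
    return x_err, y_err
```

For example, with g0 = Δx² + Δy² (a circle-symmetric paraboloid) and cost = 4, the intersection is a circle of radius 2, so both bounds are 2.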
  • the following steps S210 to S240 may be used to establish the correspondence between the mapping error and the positioning error in the target map area, as shown in FIG. 2.
  • Step S210 Obtain sample road images and corresponding sample road features collected in the target map area, as well as the standard positioning poses of the vehicles corresponding to the sample road images, and obtain the third road feature in the preset map that successfully matches the sample road features .
  • the above-mentioned standard positioning pose is the positioning pose of the vehicle determined when the camera module collects sample road images, and the standard positioning pose can be understood as a positioning pose without positioning errors.
  • Step S220 Add multiple different perturbations to the standard positioning pose to obtain multiple perturbed positioning poses.
  • the perturbation positioning pose can be understood as the virtual positioning pose of the vehicle based on the standard positioning pose.
  • Step S230 Determine the disturbance mapping errors corresponding to the multiple disturbance positioning poses according to the sample road feature and the third road feature.
  • the disturbance mapping error may be determined after the sample road feature and the third road feature are mapped to the same coordinate system according to the mapping method mentioned in step S120.
  • This step can include the following embodiments:
  • In one implementation, for each perturbed positioning pose, the third mapping position of the sample road feature mapped into the preset map is calculated, and the error between the third mapping position and the position of the third road feature in the preset map is calculated to obtain the disturbance mapping error; or, in another implementation, the fourth mapping position of the third road feature mapped into the coordinate system of the sample road image is calculated, and the error between the fourth mapping position and the position of the sample road feature in the sample road image is calculated to obtain the disturbance mapping error.
  • When the road features in the road image, the successfully matched road features in the preset map, and the corresponding positioning pose are known, the mapping error match_err can be expressed by the following function:
  • match_err = MapMatching(p_pose, I_seg, I_map)
  • where p_pose is the positioning pose, I_seg is the road feature in the road image, and I_map is the road feature successfully matched in the preset map.
  • Step S240: Based on the preset mapping error function related to the positioning error in the target map area, solve for the mapping error function at which the residual between the mapping error function and the disturbance mapping errors corresponding to the multiple disturbance positioning poses takes a minimum value, to obtain the functional relationship between the mapping error and the positioning error in the target map area.
  • the preset mapping error function related to the positioning error in the target map area can be understood as a preset mapping error function containing an unknown quantity.
  • For example, the mapping error function can be set to the following quadric form: g(Δx, Δy) = aΔx² + bΔxΔy + cΔy² + dΔx + eΔy + f, where a, b, c, d, e, and f are the unknown coefficients to be solved.
  • the perturbation mapping error corresponding to multiple perturbation positioning poses can be expressed by the following function:
  • match_err = MapMatching(p_gt + Δp, I_seg, I_map)
  • This step can include:
  • The solved g should be a paraboloid.
  • MapMatching(p_gt + Δp, I_seg, I_map) is the perturbation mapping error corresponding to the perturbed positioning pose p_gt + Δp.
  • g(Δx, Δy) − MapMatching(p_gt + Δp, I_seg, I_map) represents the residual between the mapping error function and the disturbance mapping errors corresponding to the multiple disturbance positioning poses.
  • ‖·‖ is the norm symbol.
  • For each map area in the preset map, the corresponding mapping error function g can be obtained by the above-mentioned method.
  • When establishing the correspondence between the mapping error and the positioning error, first obtain the sample road feature corresponding to an image frame, the road feature successfully matched in the preset map, and the standard positioning pose corresponding to the image frame; add multiple disturbances on the basis of the standard positioning pose, and solve the corresponding relationship in the map area based on the established residual function. In this way, the correspondences for different map areas can be established more quickly, which also provides an implementable way of determining the positioning error of the vehicle.
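Since the quadric g(Δx, Δy) is linear in its six coefficients, minimizing the residual over the perturbation samples is an ordinary linear least-squares problem. The following is an illustrative sketch of one way to realize step S240 (not necessarily the patent's solver):

```python
import numpy as np

def fit_mapping_error_surface(deltas, errors):
    """Fit g(dx, dy) = a*dx^2 + b*dx*dy + c*dy^2 + d*dx + e*dy + f to
    the perturbation mapping errors by linear least squares.

    deltas: (N, 2) array of pose perturbations (dx, dy), N >= 6
    errors: (N,) array of MapMatching errors at each perturbed pose
    Returns the coefficient vector (a, b, c, d, e, f).
    """
    dx, dy = deltas[:, 0], deltas[:, 1]
    # Design matrix: one column per monomial of the quadric.
    A = np.column_stack([dx * dx, dx * dy, dy * dy, dx, dy, np.ones_like(dx)])
    coeffs, *_ = np.linalg.lstsq(A, errors, rcond=None)
    return coeffs
```

At least six well-spread perturbations are needed for the six coefficients to be determined; in practice many more samples would be used so the fit averages out noise in the MapMatching errors.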
  • the method may further include the following Steps 1a to 3a.
  • Step 1a Determine the average value of target road features corresponding to the target map area according to the predetermined average value of road features corresponding to each map area.
  • the average amount of road features can be understood as the average amount of the proportion of road features in the normal road image.
  • Multiple normal road images in the map area can be collected in advance through the camera module in the vehicle, the normal ratio occupied by road features can be determined from each normal road image, and the average road feature amount corresponding to the map area can be obtained by averaging these normal ratios.
  • Determining the normal ratio occupied by the road features from a normal road image may include: determining the ratio of the pixels occupied by the road features to the total pixels of the normal road image as the normal ratio; or determining the ratio of the area corresponding to the road features to the total area of the normal road image as the normal ratio.
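The pixel-ratio variant above can be sketched as follows. The boolean feature mask and the helper names are hypothetical; in the described system the mask would come from road-feature detection on each normal road image:

```python
import numpy as np

def feature_pixel_ratio(feature_mask):
    """Ratio of pixels occupied by road features to the total pixels of the image.
    `feature_mask` is a hypothetical boolean array marking road-feature pixels."""
    return feature_mask.sum() / feature_mask.size

def road_feature_average(masks):
    """Average of the normal ratios over several normal road images of one map area."""
    return float(np.mean([feature_pixel_ratio(m) for m in masks]))

mask = np.zeros((4, 5), dtype=bool)
mask[0, :2] = True  # 2 feature pixels out of 20
assert feature_pixel_ratio(mask) == 0.1
```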
  • A normal road image can be understood as a road image collected when the image collection area of the camera module is not obstructed by other objects and the camera module is not faulty. The road features in a normal road image can be understood as road features determined under ideal conditions.
  • Step 2a: Determine the identified road feature amount corresponding to the road image according to the proportion of the first road feature in the road image.
  • In this step, the proportion of the first road feature in the road image can be determined directly as the identified road feature amount corresponding to the road image; alternatively, the value obtained after applying preset processing to that proportion can be determined as the identified road feature amount.
  • The proportion of the first road feature in the road image can specifically be determined as the ratio of the pixels occupied by the first road feature to the total pixels of the road image, or as the ratio of the area corresponding to the first road feature to the total area of the road image.
  • Step 3a: Determine the positioning quality for the first positioning pose according to the size relationship between the identified road feature amount and the target road feature average.
  • This step may specifically include: judging whether the difference between the target road feature average and the identified road feature amount is less than a preset feature-amount difference; if so, the positioning quality for the first positioning pose is determined to be good; if not, it is determined to be poor.
  • Alternatively, different intervals may be set in advance according to the target road feature average, with different intervals corresponding to different positioning quality values.
  • The target positioning quality value corresponding to the identified road feature amount can then be determined from the positioning quality values of these intervals. This approach quantifies the positioning quality in a more fine-grained way.
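Both decision rules above can be sketched in a few lines. The thresholds, interval bounds, and quality values below are hypothetical placeholders for the preset values the patent leaves unspecified:

```python
def positioning_quality(identified_amount, target_average, max_diff=0.05):
    """Coarse good/poor decision: compare the deviation from the target average
    with a preset feature-amount difference (here a hypothetical 0.05)."""
    return "good" if abs(target_average - identified_amount) < max_diff else "poor"

def quality_value(identified_amount, target_average,
                  bins=(0.02, 0.05, 0.10), values=(1.0, 0.7, 0.4, 0.0)):
    """Finer, interval-based quality value: smaller deviation -> higher value."""
    diff = abs(target_average - identified_amount)
    for bound, value in zip(bins, values):
        if diff < bound:
            return value
    return values[-1]

assert positioning_quality(0.18, 0.20) == "good"
assert quality_value(0.18, 0.20) == 0.7  # deviation 0.02 falls in the second interval
```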
  • When the positioning quality for the first positioning pose is good, it is considered that there is no occlusion in the image acquisition area of the current camera, there is more effective information in the image, and the visual positioning is more effective.
  • When the positioning quality for the first positioning pose is poor, it is considered that there may be obstructions in the image acquisition area of the current camera or that the device may be malfunctioning; there is less effective information in the image, and the visual positioning is less effective.
  • In summary, according to the size relationship between the road features in the road image and the road feature average corresponding to the target map area, the quality of the road features in the road image can be evaluated, for example whether there is occlusion in the road image or whether the equipment has an abnormal condition such as a failure. The positioning quality can then be evaluated on this basis, which provides a richer evaluation index for assessing visual positioning.
  • In another embodiment of the present invention, the method may further include:
  • acquiring the positioning quality and positioning accuracy corresponding to a preset number of consecutive road image frames; when the positioning quality corresponding to the preset number of road image frames is not less than the preset positioning quality, and the positioning accuracy corresponding to the preset number of road image frames is not less than the preset positioning accuracy, it can be determined that the visual positioning effect based on the road images is good.
  • In summary, based on the positioning quality and the positioning accuracy, the effect of visual positioning can be judged comprehensively and its failure can be determined more accurately, so that the device can take effective countermeasures in time when visual positioning fails, improving the stability of vehicle positioning.
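The sliding check over consecutive frames can be sketched as below. The window size and thresholds are hypothetical; the patent only requires that both quality and accuracy stay below their presets for a preset number of consecutive frames before declaring failure:

```python
from collections import deque

class VisualPositioningMonitor:
    """Tracks positioning quality/accuracy over the last N frames (illustrative)."""

    def __init__(self, n_frames=10, min_quality=0.5, min_accuracy=0.5):
        self.records = deque(maxlen=n_frames)
        self.n_frames = n_frames
        self.min_quality = min_quality
        self.min_accuracy = min_accuracy

    def update(self, quality, accuracy):
        self.records.append((quality, accuracy))

    def positioning_failed(self):
        """Failure: every one of the last N frames is below both presets."""
        return (len(self.records) == self.n_frames and
                all(q < self.min_quality and a < self.min_accuracy
                    for q, a in self.records))

m = VisualPositioningMonitor(n_frames=3)
for _ in range(3):
    m.update(0.2, 0.1)
assert m.positioning_failed()
m.update(0.9, 0.9)  # one good frame clears the failure condition
assert not m.positioning_failed()
```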
  • In step S110, the step of performing vehicle positioning according to the matching result between the first road feature in the road image and the road features established in advance in the preset map to obtain the first positioning pose of the vehicle may include the following steps 1b to 4b.
  • Step 1b: Determine the estimated pose of the vehicle.
  • The estimated pose may be determined according to the last positioning pose of the vehicle. For example, the last positioning pose may be used directly as the estimated pose, or the pose obtained after applying a preset transformation to the last positioning pose may be used as the estimated pose.
  • In this embodiment, the step of performing vehicle positioning according to the matching result between the first road feature in the road image and the road features established in advance in the preset map to obtain the first positioning pose of the vehicle can be carried out at a preset frequency.
  • Step 2b: Determine the reference mapping error between the first road feature and the second road feature based on the estimated pose of the vehicle.
  • When determining the reference mapping error in this step, one of the two mapping methods provided in step S120 may be used: the first road feature and the second road feature are mapped into the same coordinate system, and the reference mapping error between the two is determined.
  • Step 3b: When the reference mapping error is greater than the preset error threshold, adjust the estimated pose of the vehicle and return to step 2b, i.e. determine the reference mapping error between the first road feature and the second road feature based on the adjusted estimated pose. In this case, the estimated pose is considered to still differ considerably from the vehicle's true positioning pose, and the iteration continues.
  • Step 4b: When the reference mapping error is not greater than the preset error threshold, determine the first positioning pose of the vehicle according to the current estimated pose of the vehicle.
  • When the reference mapping error is not greater than the preset error threshold, the estimated pose is considered to be very close to the actual positioning pose of the vehicle, and the positioning accuracy meets the requirement.
  • In summary, this embodiment determines the vehicle's positioning pose iteratively based on the matching result between the road features of the road image and the road features in the preset map, which allows the positioning pose of the vehicle to be determined more accurately.
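Steps 1b to 4b form a simple refinement loop, sketched below. `mapping_error` and `adjust_pose` are stand-ins for the map-matching and pose-adjustment operations, which the patent does not specify in detail; the 1-D toy example only demonstrates the loop structure:

```python
def locate_vehicle(initial_pose, mapping_error, adjust_pose,
                   error_threshold=0.5, max_iters=100):
    """Iterative refinement of steps 1b-4b: adjust the estimated pose until the
    reference mapping error drops to the preset threshold."""
    pose = initial_pose
    for _ in range(max_iters):
        err = mapping_error(pose)       # step 2b: reference mapping error
        if err <= error_threshold:      # step 4b: accept the current estimate
            return pose
        pose = adjust_pose(pose, err)   # step 3b: adjust and iterate again
    return pose

# Toy 1-D example: the true pose is 3.0; each adjustment halves the remaining gap.
err_fn = lambda p: abs(p - 3.0)
step_fn = lambda p, e: p + 0.5 * (3.0 - p)
assert abs(locate_vehicle(0.0, err_fn, step_fn, error_threshold=0.01) - 3.0) < 0.01
```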
  • When determining the first road feature in the road image, the road image can be converted to the top-view coordinate system to obtain a ground image; the ground image is binarized to obtain a processed image; and the road features of the road image are determined according to the information in the processed image.
  • The ground image can be a grayscale image. When binarizing it, the Otsu method can be used to determine the pixel threshold that separates the foreground and the background of the ground image, and the ground image is binarized according to the determined threshold to obtain a processed image containing the foreground.
  • When determining the road features of the road image from the processed image, the processed image can be used directly as the road feature, or the relative position information between the individual landmarks in the processed image can be used as the road feature.
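The Otsu step can be sketched as follows. This is a generic textbook implementation of Otsu's method, not code from the patent; in practice a library routine (e.g. an image-processing toolkit's Otsu threshold) would typically be used instead:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold maximising the between-class variance
    of foreground vs. background pixels in an 8-bit grayscale ground image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Two well-separated intensity clusters: the threshold lands between them.
img = np.array([[10, 12, 11], [200, 210, 205]], dtype=np.uint8)
t = otsu_threshold(img)
assert 12 < t <= 200
binary = img >= t  # processed image containing the foreground
assert binary.sum() == 3
```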
  • Fig. 3 is a schematic structural diagram of a vehicle-mounted terminal provided by an embodiment of the present invention.
  • The vehicle-mounted terminal includes a processor 310 and an image acquisition device 320. The processor 310 includes a feature acquisition module, a mapping determination module, an area determination module, and an accuracy determination module (not shown in the figure).
  • The image acquisition device 320 is used to acquire road images;
  • the feature acquisition module is used to acquire, when vehicle positioning is performed according to the matching result between the first road feature in the road image and the road features established in advance in the preset map and the first positioning pose of the vehicle is obtained, the second road feature in the preset map that successfully matches the first road feature;
  • the mapping determination module is configured to determine the first mapping error between the first road feature and the second road feature;
  • the area determination module is used to determine, from among the multiple different map areas included in the preset map, the target map area where the first positioning pose is located;
  • the accuracy determination module is used to determine, according to the pre-established correspondence between the mapping error and the positioning error in the target map area, the first positioning error corresponding to the first mapping error as the positioning accuracy of the first positioning pose.
  • In another embodiment, the accuracy determination module is specifically used to substitute the first mapping error cost into the following pre-established mapping error function g_0 of the target map area and solve for multiple positioning errors (Δx, Δy):
  • g_0(Δx, Δy) = a_0Δx² + b_0ΔxΔy + c_0Δy² + d_0Δx + e_0Δy + f_0
  • where a_0, b_0, c_0, d_0, e_0 and f_0 are predetermined function coefficients; the maximum of the solved positioning errors is determined as the first positioning error r corresponding to the first mapping error.
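The level set g_0(Δx,Δy) = cost is the intersection of the paraboloid with a plane, an ellipse whose major and minor semi-axes (x_err and y_err) bound the positioning error. As an alternative to the closed-form expressions given in the patent's equation images (which this sketch does not reproduce), the semi-axes can be computed numerically from the coefficients; this is an illustrative assumption, not the patented formula:

```python
import numpy as np

def positioning_error_from_cost(cost, a0, b0, c0, d0, e0, f0):
    """Semi-axes of the ellipse g0(dx, dy) = cost; the larger one is taken as the
    first positioning error r, as described above."""
    Q = np.array([[a0, b0 / 2.0], [b0 / 2.0, c0]])     # quadratic part of g0
    lin = np.array([d0, e0])
    center = -0.5 * np.linalg.solve(Q, lin)             # vertex of the ellipse
    k = cost - f0 + center @ Q @ center                 # (p-center)^T Q (p-center) = k
    if k <= 0:
        return 0.0                                      # cost below the paraboloid minimum
    eigvals = np.linalg.eigvalsh(Q)
    x_err, y_err = np.sqrt(k / eigvals)                 # semi-axes of the ellipse
    return float(max(x_err, y_err))

# Circular paraboloid g0 = dx^2 + dy^2: the level set at cost=4 is a circle of radius 2.
assert abs(positioning_error_from_cost(4.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0) - 2.0) < 1e-9
```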
  • In another embodiment, the processor 310 further includes a relationship establishment module (not shown in the figure), which establishes the correspondence between the mapping error and the positioning error in the target map area through the following operations:
  • solving for the mapping error function at which the residual between the mapping error function and the perturbation mapping errors corresponding to the multiple perturbed positioning poses is minimized, to obtain the functional relationship between the mapping error and the positioning error in the target map area.
  • In this solution, MapMatching(p_gt+Δp, I_seg, I_map) is the perturbation mapping error corresponding to the multiple perturbed positioning poses p_gt+Δp.
  • In another embodiment, the processor 310 further includes:
  • an average determination module (not shown in the figure), used to determine, after the target map area where the first positioning pose is located is determined, the target road feature average corresponding to the target map area according to the predetermined road feature averages corresponding to the individual map areas;
  • a recognition amount determination module (not shown in the figure), used to determine the identified road feature amount corresponding to the road image according to the proportion of the first road feature in the road image;
  • a quality determination module (not shown in the figure), used to determine the positioning quality for the first positioning pose according to the size relationship between the identified road feature amount and the target road feature average.
  • In another embodiment, the processor 310 further includes:
  • a failure determination module (not shown in the figure), used to acquire the positioning quality and positioning accuracy corresponding to a preset number of consecutive road image frames, and to determine that visual positioning based on the road images has failed when the positioning quality corresponding to the preset number of road image frames is less than the preset positioning quality and the positioning accuracy corresponding to those frames is less than the preset positioning accuracy.
  • In another embodiment, the processor 310 further includes a visual positioning module (not shown in the figure), which is configured to perform vehicle positioning according to the matching result between the first road feature in the road image and the road features established in advance in the preset map, and to obtain the first positioning pose of the vehicle.
  • The visual positioning module is specifically used for: determining the estimated pose of the vehicle; determining the reference mapping error between the first road feature and the second road feature based on the estimated pose; when the reference mapping error is greater than the preset error threshold, adjusting the estimated pose and repeating the determination of the reference mapping error; and, when the reference mapping error is not greater than the preset error threshold, determining the first positioning pose of the vehicle according to the current estimated pose of the vehicle.
  • The mapping determination module is specifically configured to: calculate, according to the first positioning pose and the position of the first road feature in the road image, the first mapping position at which the first road feature maps into the preset map, and calculate the error between the first mapping position and the position of the second road feature in the preset map to obtain the first mapping error; or, calculate, according to the first positioning pose and the position of the second road feature in the preset map, the second mapping position at which the second road feature maps into the coordinate system of the road image, and calculate the error between the position of the first road feature in the road image and the second mapping position to obtain the first mapping error.
  • This terminal embodiment and the method embodiment shown in Fig. 1 are based on the same inventive concept, and related points may be referred to each other. The terminal embodiment corresponds to the method embodiment and has the same technical effects; for a detailed description, refer to the method embodiment.
  • The modules of the device in an embodiment may be distributed in the device as described in the embodiment, or may be located, with corresponding changes, in one or more devices different from that embodiment. The modules of the above embodiments may be combined into one module or further divided into multiple sub-modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Manufacturing & Machinery (AREA)
  • Navigation (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present invention disclose a visual positioning effect self-check method and a vehicle-mounted terminal. The method includes: when vehicle positioning is performed according to the matching result between a first road feature in a road image and road features established in advance in a preset map and a first positioning pose of the vehicle is obtained, acquiring a second road feature in the preset map that successfully matches the first road feature; determining a first mapping error between the first road feature and the second road feature; determining, from among multiple different map areas included in the preset map, a target map area where the first positioning pose is located; and determining, according to a pre-established correspondence between the mapping error and the positioning error in the target map area, a first positioning error corresponding to the first mapping error as the positioning accuracy of the first positioning pose. Applying the solution provided by the embodiments of the present invention enables evaluation of the visual positioning effect.

Description

一种视觉定位效果自检方法及车载终端 技术领域
本发明涉及智能驾驶技术领域,具体而言,涉及一种视觉定位效果自检方法及车载终端。
背景技术
在智能驾驶技术领域中,对车辆进行定位是智能驾驶中的重要环节。通常,当车辆在户外行驶时,可以根据全球导航卫星系统(Global Navigation Satellite System,GNSS)和惯性测量单元(Inertial Measurement Unit,IMU)采集的数据,经过综合定位后确定车辆精确的定位位姿。当车辆行驶至卫星定位信号较弱或无信号的停车场中时,为了精确地确定车辆的定位位姿,可以采用视觉定位与IMU结合的方式。
其中,在采用视觉定位时,通常可以预先建立高精度地图与停车场中的道路特征之间的对应关系,当相机模块采集到道路图像时,将道路图像中的道路特征与高精度地图中的道路特征进行匹配,根据匹配结果确定车辆在视觉定位方面的定位位姿。通过将视觉定位与IMU推测的轨迹进行结合,能够得到车辆更精确的定位位姿。但是,在实际应用中,道路图像中的道路特征被遮挡或者设备出现故障等原因,均可能会导致视觉定位的定位结果非常不准确。因此,亟待一种对视觉定位效果自检的方法。
发明内容
本发明提供了一种视觉定位效果自检方法及车载终端,以实现对视觉定位效果的评估。具体的技术方案如下。
第一方面,本发明实施例提供了一种视觉定位效果自检方法,包括:
在根据道路图像中的第一道路特征与预设地图中预先建立的道路特征之间的匹配结果进行车辆定位,得到车辆的第一定位位姿时,获取所述预设地图中与所述第一道路特征匹配成功的第二道路特征;
确定所述第一道路特征与所述第二道路特征之间的第一映射误差;
从所述预设地图包含的多个不同地图区域中,确定所述第一定位位姿所在的目标地图区域;
根据预先建立的目标地图区域中映射误差与定位误差之间的对应关系,确定所述第一映射误差对应的第一定位误差,作为所述第一定位位姿的定位精度。
可选的,所述根据预先建立的目标地图区域中映射误差与定位误差之间的对应关系,确定所述第一映射误差对应的第一定位误差的步骤,包括:
将所述第一映射误差cost代入以下预先建立的目标地图区域中的映射误差函数g₀，求解得到多个定位误差(Δx,Δy)：
g₀(Δx,Δy)=a₀Δx²+b₀ΔxΔy+c₀Δy²+d₀Δx+e₀Δy+f₀
其中，所述a₀、b₀、c₀、d₀、e₀、f₀为预先确定的函数系数；
将求解得到的多个定位误差中的最大值确定为与所述第一映射误差对应的第一定位误差r:
Figure PCTCN2019113491-appb-000001
其中,
Figure PCTCN2019113491-appb-000002
Figure PCTCN2019113491-appb-000003
Figure PCTCN2019113491-appb-000004
C=2(a₀e₀²+c₀d₀²+(f₀−cost)b₀²−2b₀d₀e₀−a₀c₀(f₀−cost))。
可选的,采用以下方式建立目标地图区域中映射误差与定位误差之间的对应关系:
获取在所述目标地图区域中采集的样本道路图像和对应的样本道路特征,以及所述样本道路图像 对应的所述车辆的标准定位位姿,获取所述预设地图中与所述样本道路特征匹配成功的第三道路特征;
对所述标准定位位姿增加多个不同的扰动量,得到多个扰动定位位姿;
根据所述样本道路特征和第三道路特征,确定多个扰动定位位姿对应的扰动映射误差;
基于预先设定的所述目标地图区域中的与定位误差相关的映射误差函数,求解所述映射误差函数与所述多个扰动定位位姿对应的扰动映射误差之间的残差取最小值时的映射误差函数,得到所述目标地图区域中映射误差与定位误差之间的函数关系。
可选的,所述求解所述映射误差函数与所述多个扰动定位位姿对应的扰动映射误差之间的残差取最小值时的映射误差函数的步骤,包括:
求解以下最小值函数
(a₀,b₀,c₀,d₀,e₀,f₀)=argmin_{a,b,c,d,e,f} Σ_{Δx,Δy∈Ω}‖g(Δx,Δy)−MapMatching(p_gt+Δp,I_seg,I_map)‖²
得到a₀、b₀、c₀、d₀、e₀和f₀，将求解得到的所述a₀、b₀、c₀、d₀、e₀和f₀代入g后的函数作为映射误差函数；
其中，所述映射误差函数为g(Δx,Δy)，g(Δx,Δy)=aΔx²+bΔxΔy+cΔy²+dΔx+eΔy+f；所述p_gt为所述标准定位位姿，所述扰动量为Δp={Δx,Δy,0}，Δx,Δy∈Ω，所述Ω为所述目标地图区域，所述I_seg为所述样本道路特征，所述I_map为所述第三道路特征；所述MapMatching(p_gt+Δp,I_seg,I_map)为多个扰动定位位姿p_gt+Δp对应的扰动映射误差。
可选的,在确定所述第一定位位姿所在的目标地图区域之后,所述方法还包括:
根据预先确定的各个地图区域对应的道路特征平均量,确定所述目标地图区域对应的目标道路特征平均量;
根据所述第一道路特征在所述道路图像中占据的比例,确定所述道路图像对应的识别道路特征量;
根据所述识别道路特征量与所述目标道路特征平均量之间的大小关系,确定针对所述第一定位位姿的定位质量。
可选的,所述方法还包括:
获取连续的预设数量个道路图像帧对应的定位质量和定位精度;
当所述预设数量个道路图像帧对应的定位质量小于预设定位质量,并且所述预设数量个道路图像帧对应的定位精度小于预设定位精度时,确定基于道路图像的视觉定位失效。
可选的,所述根据道路图像中的第一道路特征与预设地图中预先建立的道路特征之间的匹配结果进行车辆定位,得到车辆的第一定位位姿的步骤,包括:
确定所述车辆的估计位姿;
基于所述车辆的估计位姿,确定所述第一道路特征与所述第二道路特征之间的参考映射误差;
当所述参考映射误差大于预设误差阈值时,调整所述车辆的估计位姿,并执行所述基于所述车辆的估计位姿,确定所述第一道路特征与所述第二道路特征之间的参考映射误差的步骤;
当所述参考映射误差不大于所述预设误差阈值时,根据所述车辆的当前估计位姿确定所述车辆的第一定位位姿。
可选的,所述确定所述第一道路特征与所述第二道路特征之间的第一映射误差的步骤,包括:
根据所述第一定位位姿,以及所述第一道路特征在所述道路图像中的位置,计算所述第一道路特征映射至所述预设地图中的第一映射位置;计算所述第一映射位置与所述第二道路特征在所述预设地图中的位置之间的误差,得到第一映射误差;或者,
根据所述第一定位位姿,以及所述第二道路特征在所述预设地图中的位置,计算所述第二道路特征映射至所述道路图像所在坐标系中的第二映射位置;计算所述第一道路特征在所述道路图像中的位置与所述第二映射位置之间的误差,得到第一映射误差。
第二方面,本发明实施例提供了一种车载终端,包括:处理器和图像采集设备;所述处理器包括: 特征获取模块、映射确定模块、区域确定模块和精度确定模块;
所述图像采集设备,用于采集道路图像;
所述特征获取模块,用于在根据道路图像中的第一道路特征与预设地图中预先建立的道路特征之间的匹配结果进行车辆定位,得到车辆的第一定位位姿时,获取所述预设地图中与所述第一道路特征匹配成功的第二道路特征;
所述映射确定模块,用于确定所述第一道路特征与所述第二道路特征之间的第一映射误差;
所述区域确定模块,用于从所述预设地图包含的多个不同地图区域中,确定所述第一定位位姿所在的目标地图区域;
所述精度确定模块,用于根据预先建立的目标地图区域中映射误差与定位误差之间的对应关系,确定所述第一映射误差对应的第一定位误差,作为所述第一定位位姿的定位精度。
可选的,所述精度确定模块,具体用于:
将所述第一映射误差cost代入以下预先建立的目标地图区域中的映射误差函数g 0,求解得到多个定位误差(Δx,Δy):
g₀(Δx,Δy)=a₀Δx²+b₀ΔxΔy+c₀Δy²+d₀Δx+e₀Δy+f₀
其中，所述a₀、b₀、c₀、d₀、e₀、f₀为预先确定的函数系数；
将求解得到的多个定位误差中的最大值确定为与所述第一映射误差对应的第一定位误差r:
Figure PCTCN2019113491-appb-000006
其中,
Figure PCTCN2019113491-appb-000007
Figure PCTCN2019113491-appb-000008
Figure PCTCN2019113491-appb-000009
C=2(a₀e₀²+c₀d₀²+(f₀−cost)b₀²−2b₀d₀e₀−a₀c₀(f₀−cost))。
可选的,所述处理器还包括:关系建立模块;所述关系建立模块,用于采用以下操作建立目标地图区域中映射误差与定位误差之间的对应关系:
获取在所述目标地图区域中采集的样本道路图像和对应的样本道路特征,以及所述样本道路图像对应的所述车辆的标准定位位姿,获取所述预设地图中与所述样本道路特征匹配成功的第三道路特征;
对所述标准定位位姿增加多个不同的扰动量,得到多个扰动定位位姿;
根据所述样本道路特征和第三道路特征,确定多个扰动定位位姿对应的扰动映射误差;
基于预先设定的所述目标地图区域中的与定位误差相关的映射误差函数,求解所述映射误差函数与所述多个扰动定位位姿对应的扰动映射误差之间的残差取最小值时的映射误差函数,得到所述目标地图区域中映射误差与定位误差之间的函数关系。
可选的,所述关系建立模块,求解所述映射误差函数与所述多个扰动定位位姿对应的扰动映射误差之间的残差取最小值时的映射误差函数时,包括:
求解以下最小值函数
(a₀,b₀,c₀,d₀,e₀,f₀)=argmin_{a,b,c,d,e,f} Σ_{Δx,Δy∈Ω}‖g(Δx,Δy)−MapMatching(p_gt+Δp,I_seg,I_map)‖²
得到a₀、b₀、c₀、d₀、e₀和f₀，将求解得到的所述a₀、b₀、c₀、d₀、e₀和f₀代入g后的函数作为映射误差函数；
其中，所述映射误差函数为g(Δx,Δy)，g(Δx,Δy)=aΔx²+bΔxΔy+cΔy²+dΔx+eΔy+f；所述p_gt为所述标准定位位姿，所述扰动量为Δp={Δx,Δy,0}，Δx,Δy∈Ω，所述Ω为所述目标地图区域，所述I_seg为所述样本道路特征，所述I_map为所述第三道路特征；所述MapMatching(p_gt+Δp,I_seg,I_map)为多个扰动定位位姿p_gt+Δp对应的扰动映射误差。
可选的,所述处理器还包括:
平均量确定模块,用于在确定所述第一定位位姿所在的目标地图区域之后,根据预先确定的各个地图区域对应的道路特征平均量,确定所述目标地图区域对应的目标道路特征平均量;
识别量确定模块,用于根据所述第一道路特征在所述道路图像中占据的比例,确定所述道路图像对应的识别道路特征量;
质量确定模块,用于根据所述识别道路特征量与所述目标道路特征平均量之间的大小关系,确定针对所述第一定位位姿的定位质量。
可选的,所述处理器还包括:
失效确定模块,用于获取连续的预设数量个道路图像帧对应的定位质量和定位精度;当所述预设数量个道路图像帧对应的定位质量小于预设定位质量,并且所述预设数量个道路图像帧对应的定位精度小于预设定位精度时,确定基于道路图像的视觉定位失效。
可选的,所述处理器还包括:视觉定位模块,用于根据道路图像中的第一道路特征与预设地图中预先建立的道路特征之间的匹配结果进行车辆定位,得到车辆的第一定位位姿;
所述视觉定位模块,具体用于:
确定所述车辆的估计位姿;
基于所述车辆的估计位姿,确定所述第一道路特征与所述第二道路特征之间的参考映射误差;
当所述参考映射误差大于预设误差阈值时,调整所述车辆的估计位姿,并执行所述基于所述车辆的估计位姿,确定所述第一道路特征与所述第二道路特征之间的参考映射误差的步骤;
当所述参考映射误差不大于所述预设误差阈值时,根据所述车辆的当前估计位姿确定所述车辆的第一定位位姿。
可选的,所述映射确定模块,具体用于:
根据所述第一定位位姿,以及所述第一道路特征在所述道路图像中的位置,计算所述第一道路特征映射至所述预设地图中的第一映射位置;计算所述第一映射位置与所述第二道路特征在所述预设地图中的位置之间的误差,得到第一映射误差;或者,
根据所述第一定位位姿,以及所述第二道路特征在所述预设地图中的位置,计算所述第二道路特征映射至所述道路图像所在坐标系中的第二映射位置;计算所述第一道路特征在所述道路图像中的位置与所述第二映射位置之间的误差,得到第一映射误差。
由上述内容可知,本发明实施例提供的视觉定位效果自检方法及车载终端,可以在基于视觉定位得到车辆的第一定位位姿时,确定道路图像中的道路特征与预设地图中的道路特征之间的第一映射误差,并确定第一定位位姿所在的目标地图区域,根据预先建立的目标地图区域中映射误差与定位误差之间的对应关系,确定第一映射误差对应的定位误差。本发明实施例能够根据视觉定位中的映射误差,确定定位误差即定位精度,能够实现对视觉定位效果的自检。
本发明实施例的创新点包括:
1、预先建立不同地图区域中道路特征的映射误差与定位误差之间的对应关系,当车辆在基于视觉进行定位时,可以根据映射误差和该对应关系确定定位误差,提供了一种可实施方式。
2、在建立映射误差与定位误差之间的对应关系时,首先得到一个图像帧对应的样本道路特征和预设地图中匹配成功的道路特征,以及该图像帧对应的标准定位位姿,在该标准定位位姿的基础上增加多个扰动量,基于建立的残差函数,求解得到该地图区域中的对应关系。这样能够更快速地建立不同地图区域中的对应关系,也为确定车辆的定位误差提供了可实施的方式。
3、根据道路图像中的道路特征与该目标地图区域对应的道路特征平均量之间的大小关系,能够评估道路图像中道路特征的质量,例如评估道路图像中是否有遮挡或者设备是否存在故障等异常情况,进而能够评估定位质量。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所 需要使用的附图作简单介绍。显而易见地,下面描述中的附图仅仅是本发明的一些实施例。对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本发明实施例提供的视觉定位效果自检方法的一种流程示意图;
图2为本发明实施例提供的建立映射误差与定位误差之间的对应关系的一种流程示意图;
图3为本发明实施例提供的车载终端的一种结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整的描述。显然,所描述的实施例仅仅是本发明的一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有付出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
需要说明的是,本发明实施例及附图中的术语“包括”和“具有”以及它们的任何变形,意图在于覆盖不排他的包含。例如包含的一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其他步骤或单元。
当车辆行驶至卫星定位信号较弱或无信号的停车场或其他场所中时,为了精确地确定车辆的定位位姿,可以采用视觉定位的方式,或者视觉定位与其他传感器数据定位相结合的方式。视觉定位的应用场景可以是在停车场中,也可以是在其他场所中,本发明对此不作限定。其中,停车场可以为室内停车场或地下车库,本发明实施例以视觉定位应用在停车场内为例进行说明。
本发明实施例公开了一种视觉定位效果自检方法及车载终端,能够实现对视觉定位效果的评估。下面对本发明实施例进行详细说明。
图1为本发明实施例提供的视觉定位效果自检方法的一种流程示意图。该方法应用于电子设备。该电子设备可以为普通计算机、服务器或者智能终端设备等,也可以为车载电脑或车载工业控制计算机(Industrial personal Computer,IPC)等车载终端。该方法具体包括以下步骤。
步骤S110:在根据道路图像中的第一道路特征与预设地图中预先建立的道路特征之间的匹配结果进行车辆定位,得到车辆的第一定位位姿时,获取预设地图中与第一道路特征匹配成功的第二道路特征。
其中,道路图像可以为设置在车辆中的相机模块采集的图像。道路图像包含车辆行驶时周围的道路特征和背景部分。道路特征包括但不限于道路上的车道线、路灯杆、交通牌、边缘线、停止线、红绿灯和地面的其他标识。边缘线包括但不限于车道边缘线和泊车位边缘线。
预设地图可以为预先建立的高精度地图。该预设地图中可以包括各个位置点的道路特征。预设地图中的位置点可以用二维坐标点或三维坐标点表示。
本实施例的一种应用场景为，当车辆在行驶过程中，在获取到相机模块采集的道路图像后，从道路图像中检测得到第一道路特征，并将所述第一道路特征与预设地图中的道路特征进行匹配，将预设地图中匹配成功的道路特征作为第二道路特征，根据第一道路特征和第二道路特征可以确定当前时刻车辆的定位位姿，作为第一定位位姿。
上述道路图像可以为相机模块采集的多个道路图像帧中的一个。定位位姿包括预设地图中的位置点坐标和车辆朝向角等信息。
本实施例的执行时机,可以是在视觉定位过程中,每次更新第一定位位姿时都执行本实施例提供的自检方法,也可以是在其他时候,比如在一段较长时间后再执行本实施例的自检方法。
步骤S120:确定第一道路特征与第二道路特征之间的第一映射误差。
其中,第一道路特征为道路图像中的道路特征,采用的是道路图像中的位置表示。第二道路特征为预设地图中的道路特征,采用的是预设地图所在坐标系中的坐标来表示。
在确定第一映射误差时,可以将第一道路特征和第二道路特征映射到同一坐标系中后确定映射误差。本步骤具体可以包括以下实施方式:
实施方式一,根据第一定位位姿,以及第一道路特征在道路图像中的位置,计算第一道路特征映射至预设地图中的第一映射位置;计算第一映射位置与第二道路特征在预设地图中的位置之间的误差,得到第一映射误差。
在本实施方式中,通过将第一道路特征映射到预设地图所在的坐标系中,对第一道路特征与第二道路特征的位置进行对比,得到第一映射误差。
根据第一定位位姿,以及第一道路特征在道路图像中的位置,计算第一道路特征映射至预设地图中的第一映射位置时,具体可以根据图像坐标系与世界坐标系之间的转换关系,以及第一定位位姿,将第一道路特征在道路图像中的位置转换至世界坐标系中,得到第一映射位置。其中,图像坐标系为道路图像所在的坐标系,世界坐标系为预设地图所在的坐标系。图像坐标系与世界坐标系之间的转换关系,可以通过图像坐标系与相机坐标系之间的内参矩阵,以及相机坐标系与世界坐标系之间的旋转矩阵和平移矩阵得到。
实施方式二,根据第一定位位姿,以及第二道路特征在预设地图中的位置,计算第二道路特征映射至道路图像所在坐标系中的第二映射位置;计算第一道路特征在道路图像中的位置与第二映射位置之间的误差,得到第一映射误差。
在本实施方式中,通过将第二道路特征映射到道路图像所在的坐标系中,对第一道路特征与第二道路特征的位置进行对比,得到第一映射误差。
根据第一定位位姿,以及第二道路特征在预设地图中的位置,计算第二道路特征映射至道路图像所在坐标系中的第二映射位置时,可以根据图像坐标系与世界坐标系之间的转换关系,以及第一定位位姿,将第二道路特征在预设地图中的位置转换至图像坐标系中,得到第二映射位置。
上述两种实施方式对应两种不同的映射方式,在实际应用中可以择一使用。
步骤S130:从预设地图包含的多个不同地图区域中,确定第一定位位姿所在的目标地图区域。
本实施例中,可以预先根据预设地图包含的道路特征,将预设地图划分成多个不同地图区域,每个地图区域中的道路特征之间具有关联性或者位置相近性。地图区域可以为圆形区域、矩形区域或其他区域形状。
在确定目标地图区域时,具体可以将第一定位位姿中的位置坐标所在的地图区域确定为目标地图区域。
步骤S140:根据预先建立的目标地图区域中映射误差与定位误差之间的对应关系,确定第一映射误差对应的第一定位误差,作为第一定位位姿的定位精度。
本实施例中,可以预先建立各个不同地图区域中映射误差与定位误差之间的对应关系,从各个不同地图区域中映射误差与定位误差之间的对应关系中,确定目标地图区域中映射误差与定位误差之间的对应关系。
其中,映射误差与定位误差之间的对应关系可以采用以定位误差为变量的映射误差函数表示。在确定第一映射误差对应的第一定位误差时,可以将第一映射误差代入映射误差函数,得到第一映射误差对应的第一定位误差。
定位误差可以理解为当前的定位位姿与真实的定位位姿之间的差值,也可以表示定位位姿的精度。例如定位误差可以为5cm、10cm等。定位误差越大,定位精度越小,定位误差越小,定位精度越大。
步骤S120中确定第一映射误差时采用的映射方式,应与在建立映射误差与定位误差之间的对应关系时采用相同的映射方式。
由上述内容可知,本实施例可以在基于视觉定位得到车辆的第一定位位姿时,确定道路图像中的道路特征与预设地图中的道路特征之间的第一映射误差,并确定第一定位位姿所在的目标地图区域,根据预先建立的目标地图区域中映射误差与定位误差之间的对应关系,确定第一映射误差对应的定位误差。本实施例能够根据视觉定位中的映射误差,确定定位误差即定位精度,能够实现对视觉定位效果的自检。
在本发明的另一实施例中,基于图1所示实施例,步骤S140,根据预先建立的目标地图区域中映射误差与定位误差之间的对应关系,确定第一映射误差对应的第一定位误差的步骤,可以包括:
将第一映射误差cost代入以下预先建立的目标地图区域中的映射误差函数g 0,求解得到多个定位误差(Δx,Δy):
g₀(Δx,Δy)=a₀Δx²+b₀ΔxΔy+c₀Δy²+d₀Δx+e₀Δy+f₀
其中，a₀、b₀、c₀、d₀、e₀、f₀为预先确定的函数系数；
将求解得到的多个定位误差中的最大值确定为与第一映射误差对应的第一定位误差r:
Figure PCTCN2019113491-appb-000011
其中,
Figure PCTCN2019113491-appb-000012
Figure PCTCN2019113491-appb-000013
Figure PCTCN2019113491-appb-000014
C=2(a₀e₀²+c₀d₀²+(f₀−cost)b₀²−2b₀d₀e₀−a₀c₀(f₀−cost))。
本实施例中，不同地图区域对应的映射误差函数的表达形式不同，具体可以是函数系数不同。上述映射误差函数g₀(Δx,Δy)=a₀Δx²+b₀ΔxΔy+c₀Δy²+d₀Δx+e₀Δy+f₀为抛物面，第一映射误差cost可以理解为平面，将第一映射误差cost代入映射误差函数g₀，即是求抛物面与平面的交线。根据数学知识可知，该交线为椭圆，椭圆上的点都是求解得到的定位误差(Δx,Δy)。而求解得到的多个定位误差中的最大值即为椭圆的长轴和短轴（x_err和y_err）。
综上,本实施例提供了根据映射误差函数确定第一映射误差对应的第一定位误差的具体实施方式,本方法在实际应用中更易于实施。
在本发明的另一实施例中,基于图1所示实施例中,可以采用以下步骤S210~S240建立目标地图区域中映射误差与定位误差之间的对应关系,参见图2所示。
步骤S210:获取在目标地图区域中采集的样本道路图像和对应的样本道路特征,以及样本道路图像对应的车辆的标准定位位姿,获取预设地图中与样本道路特征匹配成功的第三道路特征。
其中,上述标准定位位姿为相机模块采集样本道路图像时确定的车辆的定位位姿,标准定位位姿可以理解为不存在定位误差的定位位姿。
步骤S220:对标准定位位姿增加多个不同的扰动量,得到多个扰动定位位姿。
扰动定位位姿可以理解为以标准定位位姿为基准得到的车辆的虚拟定位位姿。
步骤S230:根据样本道路特征和第三道路特征,确定多个扰动定位位姿对应的扰动映射误差。
针对不同的扰动定位位姿,可以根据步骤S120中提到的映射方式,将样本道路特征和第三道路特征映射到同一坐标系中后确定扰动映射误差。本步骤可以包括以下实施方式;
针对每个扰动定位位姿,根据该扰动定位位姿,以及样本道路特征在样本道路图像中的位置,计算样本道路特征映射至预设地图中的第三映射位置,计算第三映射位置与第三道路特征在预设地图中的位置之间的误差,得到扰动映射误差;或者,
针对每个扰动定位位姿,根据该扰动定位位姿,以及第三道路特征在预设地图中的位置,计算第三道路特征映射至样本道路图像所在坐标系中的第四映射位置,计算第四映射位置与样本道路特征在样本道路图像中的位置之间的误差,得到扰动映射误差。
当已知道路图像中的道路特征和预设地图中匹配成功的道路特征,以及对应的定位位姿时,映射误差match_err可以采用以下函数表示:
match_err=MapMatching(p_pose,I_seg,I_map)
其中，p_pose为定位位姿，I_seg为道路图像中的道路特征，I_map为预设地图中匹配成功的道路特征。
步骤S240:基于预先设定的目标地图区域中的与定位误差相关的映射误差函数,求解映射误差函数与多个扰动定位位姿对应的扰动映射误差之间的残差取最小值时的映射误差函数,得到目标地图区域中映射误差与定位误差之间的函数关系。
其中,预先设定的目标地图区域中的与定位误差相关的映射误差函数,可以理解为预设的包含未知量的映射误差函数。例如,可以将映射误差函数设置为以下二次曲面形式:
g(Δx,Δy)=aΔx²+bΔxΔy+cΔy²+dΔx+eΔy+f
多个扰动定位位姿对应的扰动映射误差可以采用以下函数表示:
match_err=MapMatching(p_gt+Δp,I_seg,I_map)
本步骤在具体实施时可以包括:
求解以下最小值函数
(a₀,b₀,c₀,d₀,e₀,f₀)=argmin_{a,b,c,d,e,f} Σ_{Δx,Δy∈Ω}‖g(Δx,Δy)−MapMatching(p_gt+Δp,I_seg,I_map)‖²
得到a₀、b₀、c₀、d₀、e₀和f₀，将求解得到的a₀、b₀、c₀、d₀、e₀和f₀代入g后的函数g₀作为映射误差函数。在标准定位位姿足够精确的情况下，求解得到的g₀应为抛物面。
其中，映射误差函数为g(Δx,Δy)，g(Δx,Δy)=aΔx²+bΔxΔy+cΔy²+dΔx+eΔy+f；p_gt为标准定位位姿，扰动量为Δp={Δx,Δy,0}，Δx,Δy∈Ω，Ω为目标地图区域，I_seg为样本道路特征，I_map为第三道路特征；MapMatching(p_gt+Δp,I_seg,I_map)为多个扰动定位位姿p_gt+Δp对应的扰动映射误差。g(Δx,Δy)−MapMatching(p_gt+Δp,I_seg,I_map)表示映射误差函数与多个扰动定位位姿对应的扰动映射误差之间的残差。
上式中的argmin代表以a,b,c,d,e,f为待求解量的最小值函数。‖·‖为范数符号。
针对预设地图中的每个地图区域,均可以采用上述方式求解得到对应的映射误差函数g。
综上,本实施例中,在建立映射误差与定位误差之间的对应关系时,首先得到一个图像帧对应的样本道路特征和预设地图中匹配成功的道路特征,以及该图像帧对应的标准定位位姿,在该标准定位位姿的基础上增加多个扰动量,基于建立的残差函数,求解得到该地图区域中的对应关系。这样能够更快速地建立不同地图区域中的对应关系,也为确定车辆的定位误差提供了可实施的方式。
在本发明的另一实施例中,为了能够更准确地评估视觉定位的有效性,基于图1所示实施例,在确定第一定位位姿所在的目标地图区域之后,该方法还可以包括以下步骤1a~3a。
步骤1a:根据预先确定的各个地图区域对应的道路特征平均量,确定目标地图区域对应的目标道路特征平均量。
其中,道路特征平均量可以理解为道路特征在正常道路图像中占据的比例的平均量。
本实施例中,可以预先通过车辆中的相机模块采集地图区域中的多个正常道路图像,并从每个正常道路图像中确定道路特征所占据的正常比例,根据各个正常比例得到该地图区域对应的道路特征平均量。
从正常道路图像中确定道路特征所占据的正常比例,可以包括:将道路特征占有的像素与正常道路图像的总像素的比例,确定为道路特征所占据的正常比例;或者,将道路特征对应的面积与正常道路图像的总面积的比例,确定为道路特征所占据的正常比例。
正常道路图像可以理解为在相机模块的图像采集区域内无其他物体遮挡,且相机模块也不存在故障时采集得到的道路图像。正常道路图像中的道路特征可以理解为理想状态下确定的道路特征。
步骤2a:根据第一道路特征在道路图像中占据的比例,确定道路图像对应的识别道路特征量。
本步骤,可以直接将第一道路特征在道路图像中占据的比例,确定为道路图像对应的识别道路特征量;也可以对第一道路特征在道路图像中占据的比例执行预设处理后的值,确定为道路图像对应的识别道路特征量。
本步骤可以确定第一道路特征在道路图像中占据的比例,具体可以包括:将第一道路特征占有的像素与道路图像的总像素的比例,或者,将第一道路特征对应的面积与道路图像的总面积的比例,确定为第一道路特征在道路图像中占据的比例。
步骤3a:根据识别道路特征量与目标道路特征平均量之间的大小关系,确定针对第一定位位姿的定位质量。
本步骤具体可以包括:判断目标道路特征平均量与识别道路特征量之间的差值是否小于预设特征量差值,如果是,则确定针对第一定位位姿的定位质量为好;如果否,则确定针对第一定位位姿的定位质量为差。
本步骤也可以预先根据目标道路特征平均量设置不同的区间,并设置不同区间对应不同的定位质量数值,可以根据不同区间对应不同的定位质量数值,确定识别道路特征量对应的目标定位质量数值。这种方式能够更精细、更量化定位质量。
当针对第一定位位姿的定位质量较好时,认为当前相机的图像采集区域内无遮挡,图像中的有效信息更多,视觉定位的有效性更好。当针对第一定位位姿的定位质量较差时,认为当前相机的图像采集区域内可能存在遮挡物,或者设备可能存在故障,图像中的有效信息较少,视觉定位的有效性较差。
综上,本实施例中,根据道路图像中的道路特征与该目标地图区域对应的道路特征平均量之间的大小关系,能够评估道路图像中道路特征的质量,例如评估道路图像中是否有遮挡或者设备是否存在故障等异常情况,进而能够评估定位质量,为对视觉定位进行评估提供了更丰富的评价指标。
在本发明的另一实施例中,基于上述实施例,该方法还可以包括:
获取连续的预设数量个道路图像帧对应的定位质量和定位精度;当预设数量个道路图像帧对应的定位质量小于预设定位质量,并且预设数量个道路图像帧对应的定位精度小于预设定位精度时,确定基于道路图像的视觉定位失效。
当预设数量个道路图像帧对应的定位质量不小于预设定位质量,并且预设数量个道路图像帧对应的定位精度不小于预设定位精度时,可以确定基于道路图像的视觉定位效果较好。
综上,本实施例中根据定位质量和定位精度,能够综合判断视觉定位的效果,并且对视觉定位失效进行更准确判断,以便使得设备在视觉定位失效时及时采取有效的应对措施,提高车辆定位的稳定性。
在本发明的另一实施例中,基于图1所示实施例,步骤S110中根据道路图像中的第一道路特征与预设地图中预先建立的道路特征之间的匹配结果进行车辆定位,得到车辆的第一定位位姿的步骤,可以包括以下步骤1b~4b。
步骤1b:确定车辆的估计位姿。
确定车辆的估计位姿时,可以根据车辆的上一定位位姿确定该估计位姿。例如,可以直接将上一定位位姿确定为估计位姿,也可以将对上一定位位姿做预设变换后的位姿作为估计位姿。
本实施例中,根据道路图像中的第一道路特征与预设地图中预先建立的道路特征之间的匹配结果进行车辆定位,得到车辆的第一定位位姿的步骤,可以按照预设频率实施。
步骤2b:基于车辆的估计位姿,确定第一道路特征与第二道路特征之间的参考映射误差。
本步骤中确定参考映射误差时可以参考步骤S120中提供的两种映射方式中的一种,将第一道路特征与第二道路特征映射至同一坐标系后确定两者之间的参考映射误差。
步骤3b:当参考映射误差大于预设误差阈值时,调整车辆的估计位姿,并执行步骤2b中基于车辆的估计位姿,确定第一道路特征与第二道路特征之间的参考映射误差的步骤。
当参考映射误差大于预设误差阈值时,认为该估计位姿与车辆的真实定位位姿之间还存在较大差距,可以继续进行迭代。
步骤4b:当参考映射误差不大于所述预设误差阈值时,根据车辆的当前估计位姿确定车辆的第一定位位姿。
当参考映射误差不大于所述预设误差阈值时,认为该估计位姿与车辆的真实定位位姿非常接近,定位精度已经达到要求。
综上,本实施例提供了基于道路图像的道路特征与预设地图中的道路特征之间的匹配结果,通过迭代方式确定车辆的定位位姿的方式,能够更准确地确定车辆的定位位姿。
在确定道路图像中的第一道路特征时,可以将道路图像转换至俯视图坐标系下,得到地面图像;对地面图像进行二值化处理,得到处理后图像;根据处理后图像中的信息,确定道路图像的道路特征。
其中,地面图像可以为灰度图像。对地面图像进行二值化处理时,可以采用大津法确定用于区分地面图像前景与背景部分的像素阈值,根据该确定的像素阈值对地面图像进行二值化处理,得到包含前景部分的处理后图像。
根据处理后的图像中的信息确定道路图像的道路特征时,可以直接将处理后的图像作为道路特征,也可以根据处理后图像中各个标志物之间的相对位置信息作为道路特征。
图3为本发明实施例提供的车载终端的一种结构示意图。该车载终端,包括:处理器310和图像采集设备320。其中,处理器310包括:特征获取模块、映射确定模块、区域确定模块和精度确定模块。(图中未示出)
图像采集设备320,用于采集道路图像;
特征获取模块,用于在根据道路图像中的第一道路特征与预设地图中预先建立的道路特征之间的匹配结果进行车辆定位,得到车辆的第一定位位姿时,获取预设地图中与第一道路特征匹配成功的第二道路特征;
映射确定模块,用于确定第一道路特征与第二道路特征之间的第一映射误差;
区域确定模块,用于从预设地图包含的多个不同地图区域中,确定第一定位位姿所在的目标地图区域;
精度确定模块,用于根据预先建立的目标地图区域中映射误差与定位误差之间的对应关系,确定第一映射误差对应的第一定位误差,作为第一定位位姿的定位精度。
在本发明的另一实施例中,基于图3所示实施例,精度确定模块,具体用于:
将第一映射误差cost代入以下预先建立的目标地图区域中的映射误差函数g 0,求解得到多个定位误差(Δx,Δy):
g₀(Δx,Δy)=a₀Δx²+b₀ΔxΔy+c₀Δy²+d₀Δx+e₀Δy+f₀
其中，a₀、b₀、c₀、d₀、e₀、f₀为预先确定的函数系数；
将求解得到的多个定位误差中的最大值确定为与第一映射误差对应的第一定位误差r:
Figure PCTCN2019113491-appb-000017
其中,
Figure PCTCN2019113491-appb-000018
Figure PCTCN2019113491-appb-000019
Figure PCTCN2019113491-appb-000020
C=2(a₀e₀²+c₀d₀²+(f₀−cost)b₀²−2b₀d₀e₀−a₀c₀(f₀−cost))。
在本发明的另一实施例中,基于图3所示实施例,处理器310还包括:关系建立模块(图中未示出);关系建立模块,用于采用以下操作建立目标地图区域中映射误差与定位误差之间的对应关系:
获取在目标地图区域中采集的样本道路图像和对应的样本道路特征,以及样本道路图像对应的车辆的标准定位位姿,获取预设地图中与样本道路特征匹配成功的第三道路特征;
对标准定位位姿增加多个不同的扰动量,得到多个扰动定位位姿;
根据样本道路特征和第三道路特征,确定多个扰动定位位姿对应的扰动映射误差;
基于预先设定的目标地图区域中的与定位误差相关的映射误差函数,求解映射误差函数与多个扰动定位位姿对应的扰动映射误差之间的残差取最小值时的映射误差函数,得到目标地图区域中映射误差与定位误差之间的函数关系。
在本发明的另一实施例中,基于图3所示实施例,关系建立模块,求解映射误差函数与多个扰动定位位姿对应的扰动映射误差之间的残差取最小值时的映射误差函数时,包括:
求解以下最小值函数
(a₀,b₀,c₀,d₀,e₀,f₀)=argmin_{a,b,c,d,e,f} Σ_{Δx,Δy∈Ω}‖g(Δx,Δy)−MapMatching(p_gt+Δp,I_seg,I_map)‖²
得到a₀、b₀、c₀、d₀、e₀和f₀，将求解得到的a₀、b₀、c₀、d₀、e₀和f₀代入g后的函数作为映射误差函数；
其中，映射误差函数为g(Δx,Δy)，g(Δx,Δy)=aΔx²+bΔxΔy+cΔy²+dΔx+eΔy+f；p_gt为标准定位位姿，扰动量为Δp={Δx,Δy,0}，Δx,Δy∈Ω，Ω为目标地图区域，I_seg为样本道路特征，I_map为第三道路特征；MapMatching(p_gt+Δp,I_seg,I_map)为多个扰动定位位姿p_gt+Δp对应的扰动映射误差。
在本发明的另一实施例中,基于图3所示实施例,处理器310还包括:
平均量确定模块(图中未示出),用于在确定第一定位位姿所在的目标地图区域之后,根据预先确定的各个地图区域对应的道路特征平均量,确定目标地图区域对应的目标道路特征平均量;
识别量确定模块(图中未示出),用于根据第一道路特征在道路图像中占据的比例,确定道路图像对应的识别道路特征量;
质量确定模块(图中未示出),用于根据识别道路特征量与目标道路特征平均量之间的大小关系,确定针对第一定位位姿的定位质量。
在本发明的另一实施例中,基于图3所示实施例,处理器310还包括:
失效确定模块(图中未示出),用于获取连续的预设数量个道路图像帧对应的定位质量和定位精度;当预设数量个道路图像帧对应的定位质量小于预设定位质量,并且预设数量个道路图像帧对应的定位精度小于预设定位精度时,确定基于道路图像的视觉定位失效。
在本发明的另一实施例中,基于图3所示实施例,处理器310还包括:视觉定位模块(图中未示出),用于根据道路图像中的第一道路特征与预设地图中预先建立的道路特征之间的匹配结果进行车辆定位,得到车辆的第一定位位姿;
视觉定位模块,具体用于:
确定车辆的估计位姿;
基于车辆的估计位姿,确定第一道路特征与第二道路特征之间的参考映射误差;
当参考映射误差大于预设误差阈值时,调整车辆的估计位姿,并执行基于车辆的估计位姿,确定第一道路特征与第二道路特征之间的参考映射误差的步骤;
当参考映射误差不大于预设误差阈值时,根据车辆的当前估计位姿确定车辆的第一定位位姿。
在本发明的另一实施例中,基于图3所示实施例,映射确定模块,具体用于:
根据第一定位位姿,以及第一道路特征在道路图像中的位置,计算第一道路特征映射至预设地图中的第一映射位置;计算第一映射位置与第二道路特征在预设地图中的位置之间的误差,得到第一映射误差;或者,
根据第一定位位姿,以及第二道路特征在预设地图中的位置,计算第二道路特征映射至道路图像所在坐标系中的第二映射位置;计算第一道路特征在道路图像中的位置与第二映射位置之间的误差,得到第一映射误差。
该终端实施例与图1所示方法实施例是基于同一发明构思得到的实施例,相关之处可以相互参照。上述终端实施例与方法实施例相对应,与该方法实施例具有同样的技术效果,具体说明参见方法实施例。
本领域普通技术人员可以理解:附图只是一个实施例的示意图,附图中的模块或流程并不一定是实施本发明所必须的。
本领域普通技术人员可以理解:实施例中的装置中的模块可以按照实施例描述分布于实施例的装置中,也可以进行相应变化位于不同于本实施例的一个或多个装置中。上述实施例的模块可以合并为一个模块,也可以进一步拆分成多个子模块。
最后应说明的是:以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述实施例所记载的 技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明实施例技术方案的精神和范围。

Claims (10)

  1. 一种视觉定位效果自检方法,其特征在于,包括:
    在根据道路图像中的第一道路特征与预设地图中预先建立的道路特征之间的匹配结果进行车辆定位,得到车辆的第一定位位姿时,获取所述预设地图中与所述第一道路特征匹配成功的第二道路特征;
    确定所述第一道路特征与所述第二道路特征之间的第一映射误差;
    从所述预设地图包含的多个不同地图区域中,确定所述第一定位位姿所在的目标地图区域;
    根据预先建立的目标地图区域中映射误差与定位误差之间的对应关系,确定所述第一映射误差对应的第一定位误差,作为所述第一定位位姿的定位精度。
  2. 如权利要求1所述的方法,其特征在于,所述根据预先建立的目标地图区域中映射误差与定位误差之间的对应关系,确定所述第一映射误差对应的第一定位误差的步骤,包括:
    将所述第一映射误差cost代入以下预先建立的目标地图区域中的映射误差函数g 0,求解得到多个定位误差(Δx,Δy):
    g₀(Δx,Δy)=a₀Δx²+b₀ΔxΔy+c₀Δy²+d₀Δx+e₀Δy+f₀
    其中，所述a₀、b₀、c₀、d₀、e₀、f₀为预先确定的函数系数；
    将求解得到的多个定位误差中的最大值确定为与所述第一映射误差对应的第一定位误差r:
    Figure PCTCN2019113491-appb-100001
    其中,
    Figure PCTCN2019113491-appb-100002
    Figure PCTCN2019113491-appb-100003
    Figure PCTCN2019113491-appb-100004
    C=2(a₀e₀²+c₀d₀²+(f₀−cost)b₀²−2b₀d₀e₀−a₀c₀(f₀−cost))。
  3. The method according to claim 1 or 2, wherein the correspondence between mapping errors and positioning errors in the target map region is established as follows:
    obtaining a sample road image collected in the target map region and its corresponding sample road feature, together with the standard positioning pose of the vehicle corresponding to the sample road image, and obtaining a third road feature in the preset map successfully matched to the sample road feature;
    applying a plurality of different perturbations to the standard positioning pose, obtaining a plurality of perturbed positioning poses;
    determining, from the sample road feature and the third road feature, the perturbed mapping errors corresponding to the plurality of perturbed positioning poses; and
    based on a preset mapping error function of the target map region that depends on the positioning error, solving for the mapping error function at which the residual between the mapping error function and the perturbed mapping errors corresponding to the plurality of perturbed positioning poses is minimized, thereby obtaining the functional relationship between mapping errors and positioning errors in the target map region.
  4. The method according to claim 3, wherein the step of solving for the mapping error function that minimizes the residual between the mapping error function and the perturbed mapping errors corresponding to the plurality of perturbed positioning poses comprises:
    solving the following minimization
    min over (a, b, c, d, e, f) of Σ_{Δx, Δy ∈ Ω} [ g(Δx, Δy) - MapMatching(p_gt + Δp, I_seg, I_map) ]²
    to obtain a0, b0, c0, d0, e0 and f0, the function obtained by substituting the solved a0, b0, c0, d0, e0 and f0 into g being taken as the mapping error function;
    wherein the mapping error function is g(Δx, Δy), with g(Δx, Δy) = aΔx² + bΔxΔy + cΔy² + dΔx + eΔy + f; p_gt is the standard positioning pose; the perturbation is Δp = {Δx, Δy, 0}, with Δx, Δy ∈ Ω, where Ω is the target map region; I_seg is the sample road feature; I_map is the third road feature; and MapMatching(p_gt + Δp, I_seg, I_map) is the perturbed mapping error corresponding to the perturbed positioning pose p_gt + Δp.
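Because g is linear in its coefficients (a, b, c, d, e, f), the fit of claim 4 reduces to ordinary linear least squares. Below is an illustrative NumPy sketch (not part of the claims); the array layout, the function name, and the use of pre-sampled MapMatching values are assumptions made here:

```python
import numpy as np

def fit_mapping_error_surface(perturbations, errors):
    """Least-squares fit of g(dx, dy) = a*dx^2 + b*dx*dy + c*dy^2
    + d*dx + e*dy + f to sampled perturbed mapping errors (claim 4).
    `perturbations` is an (N, 2) sequence of (dx, dy); `errors` holds
    the N corresponding MapMatching values. Returns (a0, ..., f0)."""
    p = np.asarray(perturbations, dtype=float)
    dx, dy = p[:, 0], p[:, 1]
    # Design matrix: one column per monomial of the quadratic surface.
    M = np.column_stack([dx**2, dx * dy, dy**2, dx, dy, np.ones_like(dx)])
    coeffs, *_ = np.linalg.lstsq(M, np.asarray(errors, dtype=float), rcond=None)
    return tuple(coeffs)
```

At least six well-spread perturbation samples are needed for the six coefficients to be determined; in practice a dense grid of perturbations over the region Ω keeps the fit well conditioned.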
  5. The method according to claim 1, wherein after determining the target map region in which the first positioning pose lies, the method further comprises:
    determining, from the predetermined average road-feature amounts of the respective map regions, the target average road-feature amount of the target map region;
    determining the recognized road-feature amount of the road image according to the proportion of the road image occupied by the first road feature; and
    determining the positioning quality for the first positioning pose according to the magnitude relationship between the recognized road-feature amount and the target average road-feature amount.
  6. The method according to claim 5, further comprising:
    obtaining the positioning quality and positioning precision corresponding to a preset number of consecutive road-image frames; and
    when the positioning quality of the preset number of road-image frames is below a preset positioning quality and the positioning precision of the preset number of road-image frames is below a preset positioning precision, determining that vision-based positioning from road images has failed.
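An illustrative sketch of claims 5 and 6 (not part of the claims). The ratio form of the quality score and the score-style precision comparison are assumptions; the claims only require a quality derived from the magnitude relationship between the two feature amounts, and a failure decision over consecutive frames:

```python
def positioning_quality(recognized_amount, region_average):
    """Quality for the first positioning pose (claim 5): the recognized
    road-feature amount relative to the region's average, capped at 1."""
    if region_average <= 0:
        return 0.0
    return min(recognized_amount / region_average, 1.0)

def visual_positioning_failed(qualities, precisions, q_min, p_min, n):
    """Claim 6: declare vision-based positioning failed when, over the
    last n consecutive frames, every quality score is below q_min and
    every precision score is below p_min."""
    if len(qualities) < n or len(precisions) < n:
        return False
    return (all(q < q_min for q in qualities[-n:])
            and all(p < p_min for p in precisions[-n:]))
```

Requiring both conditions over a run of frames, rather than a single frame, suppresses spurious failure reports from momentary occlusion or a single bad detection.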
  7. The method according to claim 1, wherein the step of positioning the vehicle according to the result of matching the first road feature in the road image against the road features pre-established in the preset map, obtaining the first positioning pose of the vehicle, comprises:
    determining an estimated pose of the vehicle;
    determining, based on the estimated pose of the vehicle, a reference mapping error between the first road feature and the second road feature;
    when the reference mapping error is greater than a preset error threshold, adjusting the estimated pose of the vehicle, and repeating the step of determining, based on the estimated pose of the vehicle, the reference mapping error between the first road feature and the second road feature; and
    when the reference mapping error is not greater than the preset error threshold, determining the first positioning pose of the vehicle from the current estimated pose.
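Claim 7 only specifies "adjust the pose and re-evaluate until the mapping error drops below the threshold". The sketch below fills in the adjustment rule with a greedy coordinate search, which is purely an assumption for illustration (the patent does not fix a particular optimizer), and takes the mapping error as an opaque callable:

```python
def refine_pose(initial_pose, mapping_error, threshold, step=0.5, max_iters=200):
    """Iteratively adjust the estimated pose while the reference mapping
    error exceeds the threshold (claim 7). `mapping_error` is a callable
    pose -> error; in the patent it would come from matching the first
    road feature against the second road feature under the pose."""
    pose = list(initial_pose)
    err = mapping_error(pose)
    for _ in range(max_iters):
        if err <= threshold:
            break
        improved = False
        for i in range(len(pose)):
            for delta in (step, -step):
                cand = list(pose)
                cand[i] += delta
                cand_err = mapping_error(cand)
                if cand_err < err:
                    pose, err = cand, cand_err
                    improved = True
        if not improved:
            step *= 0.5  # shrink the adjustment when no move helps
    return tuple(pose), err
```

Once the loop exits with an error at or below the threshold, the current estimated pose is taken as the first positioning pose.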
  8. The method according to claim 1, wherein the step of determining the first mapping error between the first road feature and the second road feature comprises:
    calculating, according to the first positioning pose and the position of the first road feature in the road image, a first mapping position at which the first road feature is mapped into the preset map, and calculating the error between the first mapping position and the position of the second road feature in the preset map, obtaining the first mapping error; or
    calculating, according to the first positioning pose and the position of the second road feature in the preset map, a second mapping position at which the second road feature is mapped into the coordinate system of the road image, and calculating the error between the position of the first road feature in the road image and the second mapping position, obtaining the first mapping error.
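The first branch of claim 8 can be sketched as follows (illustration only, not part of the claims). Treating road features as 2D point sets in a vehicle-local frame and the pose as a planar rigid transform (x, y, yaw) is a simplification of the patent's image-to-map projection:

```python
import math

def project_to_map(pose, pts_vehicle):
    """Map vehicle-frame feature points into the map frame with the
    2D rigid transform pose = (x, y, yaw)."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in pts_vehicle]

def first_mapping_error(pose, pts_feature, pts_map):
    """Claim 8, first branch: project the first road feature into the
    map under the first positioning pose, then average its point-wise
    distance to the matched second road feature."""
    proj = project_to_map(pose, pts_feature)
    dists = [math.hypot(ax - bx, ay - by)
             for (ax, ay), (bx, by) in zip(proj, pts_map)]
    return sum(dists) / len(dists)
```

The second branch is symmetric: invert the transform, project the map feature into the image coordinate system, and measure the error there; both branches yield the same kind of scalar mapping error.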
  9. A vehicle-mounted terminal, characterized by comprising a processor and an image acquisition device, the processor comprising a feature obtaining module, a mapping determining module, a region determining module and a precision determining module;
    the image acquisition device being configured to collect road images;
    the feature obtaining module being configured to, when the vehicle is positioned according to a result of matching a first road feature in a road image against road features pre-established in a preset map, yielding a first positioning pose of the vehicle, obtain a second road feature in the preset map successfully matched to the first road feature;
    the mapping determining module being configured to determine a first mapping error between the first road feature and the second road feature;
    the region determining module being configured to determine, among a plurality of different map regions contained in the preset map, a target map region in which the first positioning pose lies; and
    the precision determining module being configured to determine, according to a pre-established correspondence between mapping errors and positioning errors in the target map region, a first positioning error corresponding to the first mapping error, as the positioning precision of the first positioning pose.
  10. The terminal according to claim 9, wherein the precision determining module is specifically configured to:
    substitute the first mapping error cost into the following pre-established mapping error function g0 of the target map region, and solve for a plurality of positioning errors (Δx, Δy):
    g0(Δx, Δy) = a0Δx² + b0ΔxΔy + c0Δy² + d0Δx + e0Δy + f0
    wherein a0, b0, c0, d0, e0 and f0 are predetermined function coefficients; and
    determine the maximum of the plurality of solved positioning errors as the first positioning error r corresponding to the first mapping error:
    Figure PCTCN2019113491-appb-100006
    wherein,
    Figure PCTCN2019113491-appb-100007
    Figure PCTCN2019113491-appb-100008
    Figure PCTCN2019113491-appb-100009
    C = 2(a0e0² + c0d0² + (f0 - cost)b0² - 2b0d0e0 - a0c0(f0 - cost)).
PCT/CN2019/113491 2019-07-26 2019-10-26 Visual positioning effect self-check method and vehicle-mounted terminal WO2021017213A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE112019007454.7T DE112019007454T5 (de) 2019-07-26 2019-10-26 Verfahren zur Selbstinspektion eines visuellen Positionierungseffekts und fahrzeugmontiertes Terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910681735.8 2019-07-26
CN201910681735.8A CN112307810B (zh) 2019-07-26 2019-07-26 Visual positioning effect self-check method and vehicle-mounted terminal

Publications (1)

Publication Number Publication Date
WO2021017213A1 true WO2021017213A1 (zh) 2021-02-04

Family

ID=74229621

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/113491 WO2021017213A1 (zh) 2019-07-26 2019-10-26 Visual positioning effect self-check method and vehicle-mounted terminal

Country Status (3)

Country Link
CN (1) CN112307810B (zh)
DE (1) DE112019007454T5 (zh)
WO (1) WO2021017213A1 (zh)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114993328B (zh) * 2022-05-18 2023-03-10 禾多科技(北京)有限公司 Vehicle positioning evaluation method, apparatus, device and computer-readable medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010210477A (ja) * 2009-03-11 2010-09-24 Clarion Co Ltd Navigation device
CN104359464A (zh) * 2014-11-02 2015-02-18 天津理工大学 Mobile robot positioning method based on stereo vision
CN107643086A (zh) * 2016-07-22 2018-01-30 北京四维图新科技股份有限公司 Vehicle positioning method, apparatus and system
CN109115231A (zh) * 2018-08-29 2019-01-01 东软睿驰汽车技术(沈阳)有限公司 Vehicle positioning method and device, and autonomous driving vehicle
CN109298629A (zh) * 2017-07-24 2019-02-01 来福机器人 Fault tolerance to provide robust tracking for autonomous and non-autonomous positional awareness

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3853611T2 (de) * 1987-05-11 1996-01-18 Sumitomo Electric Industries Position-determining system.
CN106535134B (zh) * 2016-11-22 2020-02-11 上海斐讯数据通信技术有限公司 WiFi-based multi-room positioning method and server
CN108280866B (zh) * 2016-12-30 2021-07-27 法法汽车(中国)有限公司 Road point cloud data processing method and system
CN108534782B (zh) * 2018-04-16 2021-08-17 电子科技大学 Real-time vehicle positioning method using a landmark map based on a binocular vision system
CN109472830A (zh) * 2018-09-28 2019-03-15 中山大学 Monocular visual positioning method based on unsupervised learning
CN109887032B (zh) * 2019-02-22 2021-04-13 广州小鹏汽车科技有限公司 Vehicle positioning method and system based on monocular visual SLAM
CN110018688B (zh) * 2019-04-11 2022-03-29 清华大学深圳研究生院 Vision-based automated guided vehicle positioning method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YE ZHANG-HUA, YANG FENG , LIU GUO-QING , LI DONG-RI , ZHAO WEI-DONG , ZOU CHING-BIN: "Positioning of Vehicle Road Position with Aerial Image and Vehicle-borne Images", SCIENCE TECHNOLOGY AND ENGINEERING, vol. 19, no. 3, 28 January 2019 (2019-01-28), pages 239 - 246, XP055776480, ISSN: 1671-1815 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11874130B2 (en) 2017-08-22 2024-01-16 Tusimple, Inc. Verification module system and method for motion-based lane detection with multiple sensors
US11740093B2 (en) 2018-02-14 2023-08-29 Tusimple, Inc. Lane marking localization and fusion
US11852498B2 (en) 2018-02-14 2023-12-26 Tusimple, Inc. Lane marking localization
US20210319584A1 (en) * 2020-04-09 2021-10-14 Tusimple, Inc. Camera pose estimation techniques
US11810322B2 (en) * 2020-04-09 2023-11-07 Tusimple, Inc. Camera pose estimation techniques
CN114427863A (zh) * 2022-04-01 2022-05-03 天津天瞳威势电子科技有限公司 车辆定位方法及系统、自动泊车方法及系统、存储介质
CN115143996A (zh) * 2022-09-05 2022-10-04 北京智行者科技股份有限公司 定位信息修正方法及电子设备和存储介质

Also Published As

Publication number Publication date
CN112307810A (zh) 2021-02-02
CN112307810B (zh) 2023-08-04
DE112019007454T5 (de) 2022-03-03

Similar Documents

Publication Publication Date Title
WO2021017213A1 (zh) Visual positioning effect self-check method and vehicle-mounted terminal
WO2021017212A1 (zh) Multi-scenario high-precision vehicle positioning method and apparatus, and vehicle-mounted terminal
CN107703528B (zh) Visual positioning method and system combining low-precision GPS in autonomous driving
US10671862B2 (en) Method and system for detecting obstacles by autonomous vehicles in real-time
CN108885791B (zh) Ground detection method, related apparatus, and computer-readable storage medium
US10699134B2 (en) Method, apparatus, storage medium and device for modeling lane line identification, and method, apparatus, storage medium and device for identifying lane line
WO2018142900A1 (ja) Information processing device, data management device, data management system, method, and program
EP3007099B1 (en) Image recognition system for a vehicle and corresponding method
US9569673B2 (en) Method and device for detecting a position of a vehicle on a lane
WO2020253010A1 (zh) Parking-lot entrance positioning method and apparatus for parking positioning, and vehicle-mounted terminal
JP7422105B2 (ja) Method, apparatus, electronic device, computer-readable storage medium and computer program for obtaining three-dimensional obstacle positions for a roadside computing device
CN114419165B (zh) Camera extrinsic parameter correction method and apparatus, electronic device, and storage medium
KR102103944B1 (ko) Distance and position estimation method for an autonomous vehicle using a mono camera
WO2021017211A1 (zh) Vision-based vehicle positioning method and apparatus, and vehicle-mounted terminal
JP2015194397A (ja) Vehicle position detection device, vehicle position detection method, vehicle position detection computer program, and vehicle position detection system
CN112017238A (zh) Method and apparatus for determining spatial position information of a linear object
CN112017236A (zh) Method and apparatus for calculating a target object's position based on a monocular camera
CN112446915B (zh) Image-group-based mapping method and apparatus
CN113744315A (zh) Semi-direct visual odometry based on binocular vision
CN112304322B (zh) Restart method after visual positioning failure, and vehicle-mounted terminal
CN113971697A (zh) Air-ground cooperative vehicle positioning and orientation method
KR102195040B1 (ko) Road sign information collection method using a mobile mapping system and a mono camera
CN116777966A (zh) Method for calculating vehicle heading angle in a farmland road environment
CN113112551B (zh) Camera parameter determination method and apparatus, roadside device, and cloud control platform
CN113643359A (zh) Target object positioning method, apparatus, device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19940100; Country of ref document: EP; Kind code of ref document: A1)
122 Ep: pct application non-entry in european phase (Ref document number: 19940100; Country of ref document: EP; Kind code of ref document: A1)