CN112304322B - Restarting method after visual positioning failure and vehicle-mounted terminal

Publication number: CN112304322B
Application number: CN201910681733.9A
Authority: CN (China)
Prior art keywords: positioning, pose, vehicle, road, image
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN112304322A
Inventor: 姜秀宝
Current assignee: Beijing Momenta Technology Co., Ltd.
Original assignee: Beijing Momenta Technology Co., Ltd.
Application filed by Beijing Momenta Technology Co., Ltd.; published as CN112304322A; application granted and published as CN112304322B.

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/26: Navigation specially adapted for navigation in a road network
    • G01C 21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30: Map- or contour-matching

Abstract

The embodiment of the invention discloses a restarting method after a visual positioning failure and a vehicle-mounted terminal. The method comprises the following steps: when a failure of the visual positioning of a vehicle in a parking lot is detected, performing trajectory estimation on first data collected by an inertial measurement unit, based on a first positioning pose of the vehicle from before the failure, to obtain a second positioning pose; when the position indicated by the second positioning pose is determined to be in an initialization area, determining a third positioning pose of the vehicle through a pose regression model, based on the second positioning pose and the road features of a first road image collected by an image collection device; matching the road features of the first road image with the road features of each position point in a preset map according to the third positioning pose, and determining a fourth positioning pose of the vehicle according to the matching result; and starting visual positioning based on the fourth positioning pose. With the scheme provided by the embodiment of the invention, visual positioning can be restarted after it fails.

Description

Restarting method after visual positioning failure and vehicle-mounted terminal
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a restarting method after visual positioning failure and a vehicle-mounted terminal.
Background
In the field of intelligent driving, vehicle positioning is an important link. Generally, when a vehicle travels outdoors, an accurate positioning pose can be determined by fusing data collected by a Global Navigation Satellite System (GNSS) and an Inertial Measurement Unit (IMU). When the vehicle drives into a parking lot where the satellite positioning signal is weak or absent, a combination of visual positioning and the IMU can be adopted in order to determine the positioning pose accurately.
With visual positioning, a correspondence between a high-precision map and the road features in the parking lot is generally established in advance. When the camera collects a road image, the road features in the road image are matched against the road features in the high-precision map, starting from an initial vehicle pose used to start the visual positioning, and the vision-based vehicle pose is determined from the matching result. In practice, the visual positioning may fail because, for example, road features in a road image are occluded or the equipment malfunctions. There is therefore a need for a method of restarting visual positioning after it has failed.
Disclosure of Invention
The invention provides a restarting method after a visual positioning failure and a vehicle-mounted terminal, which are used to restart visual positioning after it has failed. The specific technical scheme is as follows.
In a first aspect, an embodiment of the present invention discloses a restart method after a visual positioning failure, including:
when a failure of the visual positioning of the vehicle in the parking lot is detected, acquiring first data collected by an inertial measurement unit;
performing trajectory estimation on the first data based on a first positioning pose of the vehicle from before the visual positioning failure, to obtain a second positioning pose of the vehicle;
when the position indicated by the second positioning pose is determined to be in a preset initialization area in the parking lot, acquiring a first road image of the parking lot collected by an image collection device; wherein the first road image is an image collected in the initialization area;
determining a third positioning pose of the vehicle through a pose regression model based on the road characteristics of the first road image and the second positioning pose; the pose regression model is obtained by training in advance according to a plurality of sample road images collected in the initialization area, corresponding sample vehicle poses and labeled vehicle poses;
according to the third positioning pose, matching the road characteristics of the first road image with the road characteristics of each position point in a preset map, and determining a fourth positioning pose of the vehicle according to a matching result;
and starting visual positioning based on the fourth positioning pose.
Optionally, the following method is adopted to detect whether the visual positioning of the vehicle in the parking lot has failed:
when the vehicle is positioned according to a matching result between a first road feature in a second road image and the road features pre-established in a preset map to obtain a pose to be detected of the vehicle, acquiring a second road feature in the preset map that is successfully matched with the first road feature; wherein the second road image is an image collected in the parking lot;
determining a mapping error between the first road feature and the second road feature;
determining a target map area where the pose to be detected is located from a plurality of different map areas contained in the preset map;
determining a first positioning error corresponding to a mapping error according to a corresponding relation between the mapping error and the positioning error in a pre-established target map area, and taking the first positioning error as the positioning precision of the pose to be detected;
and determining whether the visual positioning of the vehicle in the parking lot is invalid or not according to the size relation between the positioning precision of the pose to be detected and a preset precision threshold.
Optionally, the step of determining the first positioning error corresponding to the mapping error according to the correspondence, pre-established for the target map region, between the mapping error and the positioning error includes:
substituting the mapping error cost into the following mapping error function g_0, pre-established for the target map region, and solving for the positioning errors (Δx, Δy):

$$g_0(\Delta x, \Delta y) = a_0\,\Delta x^2 + b_0\,\Delta x\,\Delta y + c_0\,\Delta y^2 + d_0\,\Delta x + e_0\,\Delta y + f_0$$

where a_0, b_0, c_0, d_0, e_0, f_0 are predetermined function coefficients;
determining the maximum of the solved positioning errors as the first positioning error r corresponding to the mapping error:

$$r = \max(x_{err},\, y_{err})$$

where x_err and y_err are the semi-major and semi-minor axes of the ellipse g_0(Δx, Δy) = cost, computed from the function coefficients together with

$$C = 2\bigl(a_0 e_0^2 + c_0 d_0^2 + (f_0 - cost)\,b_0^2 - 2 b_0 d_0 e_0 - a_0 c_0 (f_0 - cost)\bigr).$$
Optionally, the corresponding relationship between the mapping error and the positioning error in the target map region is established in the following manner:
acquiring a sample road image and corresponding sample road features acquired in the target map area, and a standard positioning pose of the vehicle corresponding to the sample road image, and acquiring third road features which are successfully matched with the sample road features in the preset map;
adding a plurality of different disturbance quantities to the standard positioning pose to obtain a plurality of disturbance positioning poses;
determining disturbance mapping errors corresponding to a plurality of disturbance positioning poses according to the sample road characteristics and the third road characteristics;
and solving a mapping error function when the residual errors between the mapping error function and the disturbance mapping errors corresponding to the plurality of disturbance positioning poses take the minimum value based on a preset mapping error function related to the positioning errors in the target map area to obtain a functional relation between the mapping errors and the positioning errors in the target map area.
Optionally, the step of solving for the mapping error function at which the residual between the mapping error function and the disturbance mapping errors corresponding to the multiple disturbance positioning poses takes its minimum value includes:
solving the following minimization problem

$$\min_{a,b,c,d,e,f} \sum_{(\Delta x,\,\Delta y)\in\Omega} \bigl\| g(\Delta x, \Delta y) - \mathrm{MapMatching}(p_{gt}+\Delta p,\; I_{seg},\; I_{map}) \bigr\|^2$$

to obtain a_0, b_0, c_0, d_0, e_0 and f_0, and substituting the solved a_0, b_0, c_0, d_0, e_0 and f_0 into g as the mapping error function;
where the mapping error function is g(Δx, Δy) = aΔx² + bΔxΔy + cΔy² + dΔx + eΔy + f; p_gt is the standard positioning pose; the disturbance quantity is Δp = {Δx, Δy, 0} with Δx, Δy ∈ Ω, where Ω is the target map region; I_seg is the sample road feature and I_map is the third road feature; and MapMatching(p_gt + Δp, I_seg, I_map) is the disturbance mapping error corresponding to the disturbance positioning pose p_gt + Δp.
Optionally, the pose regression model is obtained by training in the following way:
acquiring a plurality of sample parking lot images acquired in the initialization area, and a sample vehicle pose and a labeled vehicle pose corresponding to each sample parking lot image;
detecting road characteristics of each sample parking lot image;
determining a reference vehicle pose through model parameters in a pose regression model based on the road characteristics of each sample parking lot image and the corresponding sample vehicle pose;
determining an amount of difference between the reference vehicle pose and the annotated vehicle pose;
when the difference is larger than a preset difference threshold value, correcting the model parameters, returning to execute the step of determining the reference vehicle pose through model parameters in a pose regression model based on the road characteristics of each sample parking lot image and the corresponding sample vehicle pose;
and when the difference is not greater than the preset difference threshold value, determining that the pose regression model is trained.
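The embodiments do not specify the model family or the update rule of the pose regression model, so the following Python sketch stands in a plain linear regressor with a gradient-style correction purely to illustrate the training loop above (predict a reference pose, measure the difference from the annotated pose, correct the parameters, stop once the difference is no longer above the threshold). The feature length, learning rate and threshold are assumed values.

```python
import numpy as np

# Hypothetical stand-in for the pose regression model: a linear map from
# (road-feature vector, sample vehicle pose) to a pose correction.
rng = np.random.default_rng(0)
F = 64                                          # assumed road-feature vector length
W = rng.normal(scale=0.01, size=(3, F + 3))     # model parameters for (x, y, yaw)

def predict_pose(feat, sample_pose):
    """Reference vehicle pose = sample pose + learned correction."""
    x = np.concatenate([feat, sample_pose])
    return sample_pose + W @ x

def train(samples, lr=1e-3, diff_threshold=0.05, max_iters=10000):
    """samples: list of (road_features, sample_pose, annotated_pose) triples."""
    global W
    for _ in range(max_iters):
        total_diff = 0.0
        for feat, sample_pose, annotated_pose in samples:
            ref_pose = predict_pose(feat, sample_pose)
            err = ref_pose - annotated_pose       # difference amount
            total_diff += np.linalg.norm(err)
            x = np.concatenate([feat, sample_pose])
            W -= lr * np.outer(err, x)            # correct the model parameters
        if total_diff / len(samples) <= diff_threshold:
            break                                 # model counts as trained
```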
Optionally, the step of starting visual positioning based on the fourth positioning pose includes:
performing trajectory estimation on second data collected by a wheel speed detection device based on the first positioning pose, to obtain a plurality of vehicle positioning poses;
acquiring a plurality of fourth positioning poses of the vehicle corresponding to the plurality of first road images; wherein the plurality of first road images are images acquired in the initialization area;
determining residuals between the plurality of fourth positioning poses and the plurality of vehicle positioning poses;
and when the residual error is smaller than a preset residual error threshold value, starting visual positioning based on the fourth positioning poses.
Optionally, the step of determining the residual between the multiple fourth positioning poses and the multiple vehicle positioning poses includes:
solving the following function by the least squares method to obtain a rigid transformation matrix T between the multiple fourth positioning poses and the multiple vehicle positioning poses:

$$\min_{T} \sum_{i=1}^{N} \bigl\| p_i^{loc} - T \cdot p_i^{dr} \bigr\|^2$$

substituting the solved T into

$$\sum_{i=1}^{N} \bigl\| p_i^{loc} - T \cdot p_i^{dr} \bigr\|$$

to obtain the residual between the multiple fourth positioning poses and the multiple vehicle positioning poses;
where p_i^{loc} is the i-th fourth positioning pose, p_i^{dr} is the i-th vehicle positioning pose, N is the total number of fourth positioning poses (or, equivalently, vehicle positioning poses), min is the minimization operator, and ‖·‖ is a norm.
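As an illustration of this cross-check, the sketch below solves the least-squares rigid alignment with the standard Kabsch/SVD construction (the patent's own matrix expressions are not reproduced here) and compares the resulting residual with a threshold; poses are reduced to planar positions, which is an assumption.

```python
import numpy as np

def align_and_residual(p_loc, p_dr, residual_threshold=0.2):
    """p_loc: N x 2 fourth positioning poses (positions); p_dr: N x 2
    dead-reckoned vehicle positioning poses. Returns the alignment residual
    and whether visual positioning may be started."""
    mu_loc, mu_dr = p_loc.mean(axis=0), p_dr.mean(axis=0)
    H = (p_dr - mu_dr).T @ (p_loc - mu_loc)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    if np.linalg.det(Vt.T @ U.T) < 0:             # guard against a reflection
        Vt[-1] *= -1
    R = Vt.T @ U.T                                # rotation of the rigid transform T
    t = mu_loc - R @ mu_dr                        # translation of T
    residual = np.mean(np.linalg.norm(p_loc - (p_dr @ R.T + t), axis=1))
    return residual, residual < residual_threshold
```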
In a second aspect, an embodiment of the present invention discloses a vehicle-mounted terminal, comprising: a processor, an image collection device and an inertial measurement unit; the processor comprises: a data acquisition module, a pose calculation module, an image acquisition module, a first determination module, a second determination module and a vision starting module;
the data acquisition module is used for acquiring first data collected by the inertial measurement unit when a failure of the visual positioning of the vehicle in the parking lot is detected;
the pose calculation module is used for performing trajectory estimation on the first data based on a first positioning pose of the vehicle from before the visual positioning failure to obtain a second positioning pose of the vehicle;
the image acquisition module is used for acquiring a first road image in the parking lot, which is acquired by the image acquisition equipment, when the position indicated by the second positioning pose is determined to be in a preset initialization area in the parking lot; wherein the first road image is an image collected in the initialization area;
a first determination module, configured to determine, based on the road characteristics of the first road image and the second positioning pose, a third positioning pose of the vehicle through a pose regression model; the pose regression model is obtained by training in advance according to a plurality of sample road images collected in the initialization area, corresponding sample vehicle poses and labeled vehicle poses;
the second determining module is used for matching the road characteristics of the first road image with the road characteristics of each position point in a preset map according to the third positioning pose and determining a fourth positioning pose of the vehicle according to a matching result;
and the visual starting module is used for starting visual positioning based on the fourth positioning pose.
Optionally, the processor further includes: a failure detection module for detecting whether visual positioning of the vehicle within the parking lot is failed using:
when the vehicle is positioned according to a matching result between a first road feature in a second road image and the road features pre-established in a preset map to obtain a pose to be detected of the vehicle, acquiring a second road feature in the preset map that is successfully matched with the first road feature; wherein the second road image is an image collected in the parking lot;
determining a mapping error between the first road feature and the second road feature;
determining a target map area where the pose to be detected is located from a plurality of different map areas contained in the preset map;
determining a first positioning error corresponding to a mapping error according to a corresponding relation between the mapping error and the positioning error in a pre-established target map area, and taking the first positioning error as the positioning precision of the pose to be detected;
and determining whether the visual positioning of the vehicle in the parking lot is invalid or not according to the size relation between the positioning precision of the pose to be detected and a preset precision threshold.
Optionally, when the failure detection module determines the first positioning error corresponding to the mapping error according to the correspondence, pre-established for the target map region, between the mapping error and the positioning error, the failure detection module is configured to:
substitute the mapping error cost into the following mapping error function g_0, pre-established for the target map region, and solve for the positioning errors (Δx, Δy):

$$g_0(\Delta x, \Delta y) = a_0\,\Delta x^2 + b_0\,\Delta x\,\Delta y + c_0\,\Delta y^2 + d_0\,\Delta x + e_0\,\Delta y + f_0$$

where a_0, b_0, c_0, d_0, e_0, f_0 are predetermined function coefficients; and
determine the maximum of the solved positioning errors as the first positioning error r corresponding to the mapping error:

$$r = \max(x_{err},\, y_{err})$$

where x_err and y_err are the semi-major and semi-minor axes of the ellipse g_0(Δx, Δy) = cost, computed from the function coefficients together with

$$C = 2\bigl(a_0 e_0^2 + c_0 d_0^2 + (f_0 - cost)\,b_0^2 - 2 b_0 d_0 e_0 - a_0 c_0 (f_0 - cost)\bigr).$$
Optionally, the processor includes: the relation establishing module is used for establishing the corresponding relation between the mapping error and the positioning error in the target map area by adopting the following operations:
acquiring a sample road image and corresponding sample road features acquired in the target map area, and a standard positioning pose of the vehicle corresponding to the sample road image, and acquiring third road features which are successfully matched with the sample road features in the preset map;
adding a plurality of different disturbance amounts to the standard positioning poses to obtain a plurality of disturbance positioning poses;
determining disturbance mapping errors corresponding to a plurality of disturbance positioning poses according to the sample road characteristics and the third road characteristics;
and solving a mapping error function when the residual errors between the mapping error function and the disturbance mapping errors corresponding to the plurality of disturbance positioning poses take the minimum value based on a preset mapping error function related to the positioning errors in the target map area to obtain a functional relation between the mapping errors and the positioning errors in the target map area.
Optionally, when the relationship establishing module solves for the mapping error function at which the residual between the mapping error function and the disturbance mapping errors corresponding to the multiple disturbance positioning poses takes its minimum value, the relationship establishing module is configured to:
solve the following minimization problem

$$\min_{a,b,c,d,e,f} \sum_{(\Delta x,\,\Delta y)\in\Omega} \bigl\| g(\Delta x, \Delta y) - \mathrm{MapMatching}(p_{gt}+\Delta p,\; I_{seg},\; I_{map}) \bigr\|^2$$

to obtain a_0, b_0, c_0, d_0, e_0 and f_0, and substitute the solved a_0, b_0, c_0, d_0, e_0 and f_0 into g as the mapping error function;
where the mapping error function is g(Δx, Δy) = aΔx² + bΔxΔy + cΔy² + dΔx + eΔy + f; p_gt is the standard positioning pose; the disturbance quantity is Δp = {Δx, Δy, 0} with Δx, Δy ∈ Ω, where Ω is the target map region; I_seg is the sample road feature and I_map is the third road feature; and MapMatching(p_gt + Δp, I_seg, I_map) is the disturbance mapping error corresponding to the disturbance positioning pose p_gt + Δp.
Optionally, the processor further includes: the model training module is used for training to obtain the pose regression model by adopting the following operations:
acquiring a plurality of sample parking lot images acquired in the initialization area, and a sample vehicle pose and a labeled vehicle pose corresponding to each sample parking lot image;
detecting road characteristics of each sample parking lot image;
determining a reference vehicle pose through model parameters in a pose regression model based on the road characteristics of each sample parking lot image and the corresponding sample vehicle pose;
determining an amount of difference between the reference vehicle pose and the annotated vehicle pose;
when the difference is larger than a preset difference threshold, correcting the model parameters, and returning to execute the operation of determining the reference vehicle pose through the model parameters in the pose regression model based on the road features of each sample parking lot image and the corresponding sample vehicle pose;
and when the difference is not greater than the preset difference threshold value, determining that the pose regression model is trained.
Optionally, the visual starting module is specifically configured to:
performing trajectory estimation on second data collected by the wheel speed detection device based on the first positioning pose, to obtain a plurality of vehicle positioning poses;
acquiring a plurality of fourth positioning poses of the vehicle corresponding to the plurality of first road images; wherein the plurality of first road images are images acquired in the initialization area;
determining residuals between the plurality of fourth positioning poses and the plurality of vehicle positioning poses;
and when the residual error is smaller than a preset residual error threshold value, starting visual positioning based on the fourth positioning poses.
Optionally, when the vision starting module determines the residual between the multiple fourth positioning poses and the multiple vehicle positioning poses, the vision starting module is configured to:
solve the following function by the least squares method to obtain a rigid transformation matrix T between the multiple fourth positioning poses and the multiple vehicle positioning poses:

$$\min_{T} \sum_{i=1}^{N} \bigl\| p_i^{loc} - T \cdot p_i^{dr} \bigr\|^2$$

and substitute the solved T into

$$\sum_{i=1}^{N} \bigl\| p_i^{loc} - T \cdot p_i^{dr} \bigr\|$$

to obtain the residual between the multiple fourth positioning poses and the multiple vehicle positioning poses;
where p_i^{loc} is the i-th fourth positioning pose, p_i^{dr} is the i-th vehicle positioning pose, N is the total number of fourth positioning poses (or, equivalently, vehicle positioning poses), min is the minimization operator, and ‖·‖ is a norm.
As can be seen from the above, with the restarting method after visual positioning failure and the vehicle-mounted terminal provided by the embodiments of the present invention, when a failure of the visual positioning of the vehicle in the parking lot is detected, trajectory estimation can be performed on the first data collected by the inertial measurement unit based on the first positioning pose from before the failure to obtain the second positioning pose, and the position of the vehicle can be determined from the second positioning pose. When the position of the vehicle is determined to be in an initialization area in the parking lot, a more accurate third positioning pose of the vehicle is determined through the pose regression model based on the road features of the first road image and the second positioning pose; the road features of the first road image are then matched with the road features of each position point in the preset map according to the third positioning pose, and the accuracy of the positioning pose can be further improved according to the matching result. The accuracy of the positioning pose determined in this way can meet the requirement for starting visual positioning, so that visual positioning can be restarted after it fails. Of course, not all of the advantages described above need to be achieved at the same time when practicing any one product or method of the invention.
The innovation points of the embodiment of the invention comprise:
1. When the vehicle travels in the parking lot and visual positioning has failed, the vehicle pose can be estimated from the IMU data, whether the vehicle has reached an initialization area can be determined from the estimated positioning pose, and the precision of the positioning pose is then further improved through the pose regression model and matching against the road features in the preset map. This provides a viable solution for restarting visual positioning after it has failed.
2. According to the mapping error between the road characteristics at the current moment in the visual positioning and the corresponding relation between the mapping error and the positioning error in the preset target map area, the positioning error corresponding to the mapping error at the current moment can be determined, and whether the visual positioning fails or not can be determined according to the positioning error. This provides an implementable way of detecting if the visual positioning fails.
3. When the corresponding relation between the mapping error and the positioning error is established, firstly, a sample road characteristic corresponding to an image frame, a road characteristic successfully matched in a preset map and a standard positioning pose corresponding to the image frame are obtained, a plurality of disturbance quantities are added on the basis of the standard positioning pose, and the corresponding relation in the map area is solved and obtained on the basis of the established residual error function. This enables a faster establishment of correspondence in different map regions, and also provides a practical way of determining the positioning error of the vehicle.
4. Cross-validating the initialization poses determined from multiple image frames in the initialization area against the poses determined from the wheel speed detection device, and judging whether the pose initialization succeeded, makes it possible to judge more accurately whether the vehicle's pose initialization succeeded, i.e., whether the accuracy of the determined positioning pose is sufficient.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of some embodiments of the invention. For a person skilled in the art, without inventive effort, other figures can also be derived from these figures.
Fig. 1 is a schematic flowchart of a restart method after a vision positioning failure according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a parking lot ground sign line and an initialization area according to an embodiment of the present invention;
FIG. 3 is a schematic view of a ground image determined from a first road image;
FIG. 4 is a schematic flow chart illustrating a process for detecting whether visual alignment fails according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a restarting method after visual positioning failure and a vehicle-mounted terminal, which can restart the visual positioning after the visual positioning failure. The following provides a detailed description of embodiments of the invention.
Fig. 1 is a flowchart illustrating a restart method after a visual positioning failure according to an embodiment of the present invention. The method is applied to an electronic device. The electronic device may be a general-purpose computer, a server, an intelligent terminal device, or the like, or may be a vehicle-mounted computer or a vehicle-mounted terminal such as an Industrial Personal Computer (IPC). The method specifically comprises the following steps.
S110: when the visual positioning failure of the vehicle in the parking lot is detected, first data collected by the inertia measurement unit are obtained.
When the vehicle runs into a parking lot with weak satellite positioning signals or no signals, in order to accurately determine the positioning pose of the vehicle, a visual positioning mode or a mode of combining the visual positioning and other sensor data positioning can be adopted. The parking lot can be an indoor parking lot or an underground garage and the like.
When the pose of the vehicle is determined, the pose of the vehicle can be determined according to the matching between the road characteristics of the road image acquired by the image acquisition equipment and the road characteristics in the preset map, which can be called as visual positioning. The image capture device may be disposed in a vehicle. When the road characteristics in the image acquired by the image acquisition equipment are few, or the visual positioning cannot be carried out due to reasons such as equipment failure and the like, the failure of the visual positioning is determined.
The preset map may be a pre-established high-precision map. The preset map may include road characteristics of each location point. The position points in the preset map may be represented as two-dimensional coordinate points or three-dimensional coordinate points.
An Inertial Measurement Unit (IMU) may be disposed in the vehicle. The first data may include information of angular velocity, acceleration, and the like of the vehicle. The vehicle in the invention can be understood as an intelligent vehicle, and various sensor devices including an image acquisition device and an IMU can be arranged in the intelligent vehicle. The image capture device and the IMU may each capture data at a certain period.
S120: and calculating the track of the first data based on the first positioning pose of the vehicle before the visual positioning failure to obtain a second positioning pose of the vehicle.
The first positioning pose can be the latest positioning pose before the visual positioning fails, and the second positioning pose calculated by selecting the latest positioning pose is more accurate. The pose includes information such as the position and attitude of the vehicle.
The first positioning pose may be understood as a pose of the vehicle in a preset map. Because the first data collected by the IMU represents the acceleration and angular velocity information of the current moment relative to the previous moment, the first data is subjected to track calculation based on the first positioning pose, and the second positioning pose of the vehicle can be obtained.
When the trajectory estimation is performed on the first data based on the first positioning pose, the following formulas may be adopted to determine the second positioning pose of the vehicle:

$$P_0(t_2) = P_0(t_1) + R_0(t_1)\,v(t_1)\,(t_2 - t_1)$$

$$R_0(t_2) = R_0(t_1)\,R_z\bigl(\omega(t_2)(t_2 - t_1)\bigr)\,R_y\bigl(\omega(t_2)(t_2 - t_1)\bigr)\,R_x\bigl(\omega(t_2)(t_2 - t_1)\bigr)$$

where v(t_2) = v(t_1) + R_0(t_1)·a·(t_2 - t_1); ω(t_2) and a are, respectively, the angular velocity and acceleration in the first data; t_2 is the time corresponding to the second positioning pose and t_1 the time corresponding to the first positioning pose; x, y and z are the coordinate axes of the coordinate system in which the IMU is located, and R_x, R_y and R_z denote the elementary rotation matrices about those axes; P_0(t_1) and R_0(t_1) are, respectively, the position and attitude of the vehicle in the first positioning pose, and P_0(t_2) and R_0(t_2) the position and attitude of the vehicle in the second positioning pose.
When the visual positioning fails, the pose of the vehicle can be determined by adopting the calculation result of the IMU. The accuracy of this pose may be low, but the position of the vehicle in the preset map can be roughly determined.
S130: and when the position indicated by the second positioning pose is determined to be in a preset initialization area in the parking lot, acquiring a first road image in the parking lot acquired by the image acquisition equipment.
Wherein the first road image is an image collected in the initialization area. The initialization area is a coordinate region in the preset map within which observations from any two positions, or from different angles at the same position, differ noticeably. That is, the initialization area contains enough distinctive, landmark-like road features. In this initialization area, the position of the vehicle can be accurately determined and used as the initial positioning position when the visual positioning system restarts. The initialization area may be a circular area centered on a preset position point with a preset distance as its radius. For example, the preset distance may be 15 m or another value.
Referring to fig. 2, fig. 2 is a schematic diagram of a parking lot ground sign line and an initialization area according to an embodiment of the present invention. Where the sign lines of the parking lot floor are shown, as well as the walls of the parking lot passageway (indicated by bold lines), and the initialization area is indicated by a larger circular area. When the vehicle is located at point a, it can be located into a larger circular area according to the estimation result of the IMU. The smaller circle range in fig. 2 represents an initial pose range in which the vision positioning system can be normally started.
In this step, the estimation result of the IMU plays a role in determining that the vehicle has entered the initialization area with the radius of 15m, so that false detection in an area with similar terrain can be avoided. The number of initialization areas in the parking lot may be plural.
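Deciding whether the estimated pose has entered an initialization area then reduces to a distance test against the preset circular areas, as in the small sketch below (the area centers come from the preset map; the 15 m radius is the example value above).

```python
import numpy as np

def in_initialization_area(position_xy, area_centers, radius=15.0):
    """True if the position indicated by the second positioning pose lies
    inside any preset circular initialization area."""
    p = np.asarray(position_xy, dtype=float)
    return any(np.hypot(*(p - np.asarray(c, dtype=float))) <= radius
               for c in area_centers)
```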
Acquiring the first road image of the parking lot collected by the image collection device may be understood as acquiring a first road image collected by the image collection device at a moment associated with the determination of the second positioning pose, the second positioning pose being a pose capable of indicating that the vehicle is in the initialization area. Associated moments may be understood as the same moment, or two moments separated by a short time difference.
S140: and determining a third positioning pose of the vehicle through a pose regression model based on the road characteristics of the first road image and the second positioning pose.
In this step, road features in the first road image may be detected. Road features in the present invention include, but are not limited to: lane lines, light poles, traffic signs, edge lines, stop lines, traffic lights and other markings on the ground on the road. Edge lines include, but are not limited to, lane edge lines and parking space edge lines.
In one embodiment, the step of detecting the road feature in the first road image may specifically include: converting the first road image into a top view coordinate system to obtain a ground image; carrying out binarization processing on the ground image to obtain a processed image; and determining the road characteristics of the first road image according to the information in the processed image.
The ground image may be a grayscale image. When binarizing the ground image, Otsu's method can be used to determine a pixel threshold separating the foreground and background of the ground image, and the ground image is binarized according to the determined threshold to obtain a processed image containing the foreground.
When the road characteristics of the first road image are determined according to the information in the processed image, the processed image can be directly used as the road characteristics, or the relative position information between each marker in the processed image can be used as the road characteristics.
Referring to fig. 3, fig. 3 is a schematic view of a ground image determined from a first road image. The lines are wall lines and lane lines on the ground, and after the ground image is subjected to binarization processing, an image containing road characteristics can be obtained, wherein the road characteristics can be relative positions among various lines and the like.
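A possible OpenCV sketch of this feature-extraction step, assuming the image-to-ground homography H_topview comes from camera calibration and using an arbitrary 512 x 512 top-view canvas:

```python
import cv2

def extract_road_features(road_image, H_topview, canvas=(512, 512)):
    """Warp the first road image into a top (bird's-eye) view, then binarize
    it with Otsu's method; foreground pixels approximate ground markings."""
    ground = cv2.warpPerspective(road_image, H_topview, canvas)
    gray = cv2.cvtColor(ground, cv2.COLOR_BGR2GRAY) if ground.ndim == 3 else ground
    _, processed = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return processed
```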
The pose regression model is obtained by training in advance according to a plurality of sample road images collected in the initialization area, corresponding sample vehicle poses and labeled vehicle poses. The pose regression model can enable the road characteristics of the first road image and the second positioning pose to be associated with the third positioning pose according to the trained model parameters.
The step may specifically include: and inputting the road characteristics and the second positioning pose of the first road image into the pose regression model as input information, and acquiring the positioning pose of the vehicle output by the pose regression model as a third positioning pose. Wherein the third positioning pose is a more accurate vehicle pose than the second positioning pose. The pose regression model can perform regression according to the trained model parameters on the basis of the second positioning pose and the feature vector extracted from the road feature of the first road image to obtain a third positioning pose.
The pose regression model may employ a multi-stage pose regressor (CPR). The multi-stage pose regressor determines the third positioning pose according to the following principle formula:

$$P_{reg} = \mathrm{CPR}(P_{GPS},\; I_{seg})$$

where P_GPS is the second positioning pose and I_seg is the road feature of the first road image; P_GPS and I_seg are the input information of the CPR, and P_reg is the third positioning pose output by the CPR.
Based on the road features and the second positioning pose, a more accurate vehicle pose can be determined through the multi-stage pose regressor; on the basis of step S130 having determined that the vehicle entered the initialization area with the 15 m radius, the positioning pose becomes more accurate. This step can also be understood as identifying the position of fig. 3 within fig. 2.
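The internals of the multi-stage regressor are not given in the text; the sketch below only illustrates the cascaded inference pattern implied by the formula, with each trained stage treated as an abstract callable that predicts a pose increment.

```python
import numpy as np

def cpr_infer(stages, P_gps, I_seg):
    """Multi-stage pose regression: refine the coarse second positioning
    pose P_gps stage by stage using the road features I_seg."""
    P = np.asarray(P_gps, dtype=float)
    for stage in stages:          # stages were fitted during training
        P = P + stage(P, I_seg)   # each stage outputs a pose increment
    return P                      # P_reg, the third positioning pose
```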
S150: and matching the road characteristics of the first road image with the road characteristics of each position point in the preset map according to the third positioning pose, and determining a fourth positioning pose of the vehicle according to a matching result.
The road characteristics in the first road image are affected by external factors such as occlusion, and therefore deviation or false detection may exist between the third positioning pose and the real vehicle pose. Therefore, the accuracy of the vehicle pose can be further improved through the step.
In this step, after the third positioning pose is obtained, the road features of the first road image may be matched with the road features of each position point in the preset map, and a more accurate fourth positioning pose may be determined according to the position point successfully matched.
S160: and starting visual positioning based on the fourth positioning pose.
And the fourth positioning pose can be understood as a vehicle initial pose which is obtained by positioning in the initialization area and can meet a certain precision requirement. The vehicle initial pose can be used to initiate vision-based vehicle positioning.
As can be seen from the above, in the embodiment, when it is detected that the visual positioning of the vehicle in the parking lot is invalid, the first data acquired by the inertial measurement unit is subjected to trajectory estimation based on the first positioning pose before the invalid, so as to obtain the second positioning pose, and the position of the vehicle can be determined according to the second positioning pose. When the position of the vehicle is determined to be in the initialization area in the parking lot, a more accurate third positioning pose of the vehicle is determined through the pose regression model based on the road characteristics and the second positioning pose of the first road image, the road characteristics of the first road image are matched with the road characteristics of each position point in the preset map according to the third positioning pose, and the accuracy of the positioning pose can be further improved according to the matching result. The accuracy of the positioning pose determined in this way can meet the requirement for starting the visual positioning, so that the visual positioning can be restarted after the visual positioning fails.
In another embodiment of the present invention, based on the embodiment shown in fig. 1, the flowchart shown in fig. 4 may be used to detect whether the visual positioning of the vehicle in the parking lot is disabled, which specifically includes the following steps.
S410: and when the pose of the vehicle to be detected is obtained by positioning the vehicle according to the matching result between the first road feature in the second road image and the road feature pre-established in the preset map, acquiring the second road feature which is successfully matched with the first road feature in the preset map.
Wherein the second road image is an image captured within a parking lot. The second road image may be an image acquired by the image acquisition device when the visual positioning is not disabled, or an image acquired when the visual positioning is disabled. The second road image may be understood as an image acquired before the acquisition of the first road image.
S420: a mapping error between the first road feature and the second road feature is determined.
The first road characteristic is a road characteristic in the road image, and the position in the road image is adopted for representing. The second road characteristic is a road characteristic in the preset map and is represented by coordinates in a coordinate system of the preset map.
When determining the mapping error, the first road feature and the second road feature may first be mapped into the same coordinate system, after which the error is computed. This step may specifically include the following implementations:
In a first implementation, a first mapping position, at which the first road feature maps into the preset map, is calculated according to the pose to be detected and the position of the first road feature in the road image; and the error between the first mapping position and the position of the second road feature in the preset map is calculated to obtain the mapping error.
In this embodiment, the mapping error is obtained by mapping the first road feature to the coordinate system of the preset map and comparing the positions of the first road feature and the second road feature.
When the first road feature is mapped to the first mapping position in the preset map according to the pose to be detected and the position of the first road feature in the road image, the position of the first road feature in the road image can be converted into the world coordinate system according to the conversion relation between the image coordinate system and the world coordinate system and the pose to be detected, so that the first mapping position is obtained. The image coordinate system is a coordinate system where the road image is located, and the world coordinate system is a coordinate system where the preset map is located. The conversion relation between the image coordinate system and the world coordinate system can be obtained through an internal reference matrix between the image coordinate system and the camera coordinate system and a rotation matrix and a translation matrix between the camera coordinate system and the world coordinate system.
In a second implementation, a second mapping position, at which the second road feature maps into the coordinate system of the road image, is calculated according to the pose to be detected and the position of the second road feature in the preset map; and the error between the position of the first road feature in the road image and the second mapping position is calculated to obtain the mapping error.
In the present embodiment, the mapping error is obtained by mapping the second road feature into the coordinate system where the road image is located and comparing the positions of the first road feature and the second road feature.
When the second road feature is mapped to the second mapping position in the coordinate system of the road image according to the pose to be detected and the position of the second road feature in the preset map, the position of the second road feature in the preset map can be converted into the image coordinate system according to the conversion relation between the image coordinate system and the world coordinate system and the pose to be detected, so that the second mapping position is obtained.
The two embodiments correspond to two different mapping modes, and can be used alternatively in practical application.
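As a sketch of the first mapping mode and of the mapping error built on it, assuming a planar (x, y, yaw) pose and a calibrated image-to-ground homography in place of the full camera model described above:

```python
import numpy as np

def map_image_points_to_world(pts_img, pose_xy_yaw, H_img_to_ground):
    """Project first-road-feature pixels onto the ground plane, then move
    them into the preset-map (world) frame with the pose to be detected."""
    pts = np.hstack([pts_img, np.ones((len(pts_img), 1))])    # homogeneous pixels
    g = (H_img_to_ground @ pts.T).T
    g = g[:, :2] / g[:, 2:3]                                   # vehicle-frame ground points
    x, y, yaw = pose_xy_yaw
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return g @ R.T + np.array([x, y])                          # first mapping positions

def mapping_error(mapped_pts, matched_map_pts):
    """Mean distance between the mapped first road features and the matched
    second road features in the preset map."""
    return float(np.mean(np.linalg.norm(mapped_pts - matched_map_pts, axis=1)))
```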
S430: and determining a target map area where the pose to be detected is located from a plurality of different map areas contained in a preset map.
In this embodiment, the preset map may be divided into a plurality of different map regions in advance according to road features included in the preset map, and the road features in each map region have relevance or position proximity. The map area may be a circular area, a rectangular area, or other area shape.
When the target map area is determined, the map area where the position coordinates in the pose to be detected are located can be specifically determined as the target map area.
S440: and determining a first positioning error corresponding to the mapping error according to a corresponding relation between the mapping error and the positioning error in a pre-established target map area, and taking the first positioning error as the positioning precision of the pose to be detected.
In this embodiment, the correspondence between the mapping error and the positioning error may be pre-established for each of the different map regions, and the correspondence for the target map region may then be selected from among them.
The correspondence may take the form of a mapping error function with the positioning error as its variable. When determining the first positioning error corresponding to the mapping error, the mapping error may be substituted into the mapping error function to obtain the first positioning error corresponding to the mapping error.
The positioning error can be understood as the difference between the current positioning pose and the true positioning pose, and can also represent the precision of the positioning pose. For example, the positioning error may be 5 cm or 10 cm, that is, the accuracy of the current positioning pose is 5 cm or 10 cm.
The mapping method used in determining the mapping error in step S420 should be the same as the mapping method used in establishing the corresponding relationship between the mapping error and the positioning error.
S450: and determining whether the visual positioning of the vehicle in the parking lot is invalid or not according to the size relation between the positioning accuracy of the pose to be detected and a preset accuracy threshold.
In this step, it can specifically be determined whether the absolute value of the difference between the positioning accuracy of the pose to be detected and the preset accuracy threshold is greater than a preset difference threshold; if so, it is determined that the visual positioning of the vehicle in the parking lot has failed; if not, it is determined that the visual positioning has not failed.
In another embodiment, in order to more accurately determine whether the visual positioning is failed, the positioning accuracy corresponding to a preset number of consecutive road image frames may be obtained, and when the preset number of positioning accuracies is greater than a preset accuracy threshold, the visual positioning may be determined to be failed.
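A small sketch of this consecutive-frame check; the window length and the accuracy threshold are illustrative values.

```python
from collections import deque

class VisualPositioningMonitor:
    """Declare failure only when the positioning accuracy of a preset number
    of consecutive road image frames exceeds the accuracy threshold."""
    def __init__(self, accuracy_threshold=0.3, window=5):
        self.threshold = accuracy_threshold
        self.errors = deque(maxlen=window)

    def update(self, positioning_error):
        self.errors.append(positioning_error)
        return (len(self.errors) == self.errors.maxlen and
                all(e > self.threshold for e in self.errors))  # True -> failed
```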
In summary, in the embodiment, according to the mapping error between the road features at the current time in the visual positioning and the corresponding relationship between the mapping error and the positioning error in the preset target map area, the positioning error corresponding to the mapping error at the current time can be determined, and whether the visual positioning fails or not can be determined according to the positioning error. This provides an implementable way of detecting if the visual positioning fails.
In another embodiment of the present invention, based on the embodiment shown in fig. 4, in step S440, according to a pre-established correspondence between a mapping error and a positioning error in a target map region, a step of determining a first positioning error corresponding to the mapping error includes:
substituting the mapping error cost into the mapping error function g_0 pre-established for the target map region and solving for the positioning errors (Δx, Δy):

$$g_0(\Delta x, \Delta y) = a_0\,\Delta x^2 + b_0\,\Delta x\,\Delta y + c_0\,\Delta y^2 + d_0\,\Delta x + e_0\,\Delta y + f_0$$

where a_0, b_0, c_0, d_0, e_0, f_0 are predetermined function coefficients;
determining the maximum of the solved positioning errors as the first positioning error r corresponding to the mapping error:

$$r = \max(x_{err},\, y_{err})$$

where x_err and y_err are the semi-major and semi-minor axes of the ellipse g_0(Δx, Δy) = cost, computed from the function coefficients together with

$$C = 2\bigl(a_0 e_0^2 + c_0 d_0^2 + (f_0 - cost)\,b_0^2 - 2 b_0 d_0 e_0 - a_0 c_0 (f_0 - cost)\bigr).$$

In this embodiment, the expression forms of the mapping error functions corresponding to different map regions differ; specifically, the function coefficients may differ. The mapping error function g_0 is a paraboloid, and the mapping error cost can be understood as a plane; substituting the mapping error cost into the mapping error function g_0 amounts to finding the intersection of the paraboloid and the plane. From analytic geometry, this intersection is an ellipse, and the points on the ellipse are all the solved positioning errors (Δx, Δy). The maxima of the solved positioning errors are the semi-major and semi-minor axes of the ellipse, x_err and y_err.
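The semi-axes x_err and y_err can also be computed numerically, without closed-form intermediate quantities, by recentering the conic and using the eigenvalues of its quadratic part; the sketch below takes this standard route, which is equivalent in result for a genuine ellipse.

```python
import numpy as np

def first_positioning_error(a0, b0, c0, d0, e0, f0, cost):
    """Semi-axes of the ellipse g0(dx, dy) = cost; returns r = max(x_err, y_err)."""
    Q = np.array([[a0, b0 / 2.0], [b0 / 2.0, c0]])
    L = np.array([d0, e0])
    center = np.linalg.solve(2.0 * Q, -L)          # where grad g0 vanishes
    k = center @ Q @ center + L @ center + f0 - cost
    lam = np.linalg.eigvalsh(Q)                    # both positive for an ellipse
    axes = np.sqrt(-k / lam)                       # semi-axis lengths
    return float(axes.max())
```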
In summary, the present embodiment provides a specific implementation manner for determining the first positioning error corresponding to the mapping error according to the mapping error function, and the method is easier to implement in practical applications.
In another embodiment of the present invention, based on the above embodiment, the following steps 1a to 4a can be adopted to establish the corresponding relationship between the mapping error and the positioning error in the target map region:
step 1a: the method comprises the steps of obtaining a sample road image collected in a target map area, corresponding sample road features and a standard positioning pose of a vehicle corresponding to the sample road image, and obtaining third road features which are successfully matched with the sample road features in a preset map.
The standard positioning pose is the positioning pose of the vehicle determined when the image acquisition module acquires the sample road image, and the standard positioning pose can be understood as the positioning pose without positioning errors.
Step 2a: and adding a plurality of different disturbance quantities to the standard positioning pose to obtain a plurality of disturbance positioning poses. The disturbance positioning pose can be understood as a virtual positioning pose of the vehicle obtained by taking the standard positioning pose as a reference.
Step 3a: and determining disturbance mapping errors corresponding to the plurality of disturbance positioning poses according to the sample road characteristics and the third road characteristics.
For different disturbance positioning poses, the disturbance mapping error can be determined after the sample road feature and the third road feature are mapped into the same coordinate system, in the mapping manner mentioned in step S420. This step may include the following implementations:
for each disturbance positioning pose, calculating a third mapping position of the sample road feature in the preset map according to the disturbance positioning pose and the position of the sample road feature in the sample road image, and calculating the error between the third mapping position and the position of the third road feature in the preset map to obtain the disturbance mapping error; or,
for each disturbance positioning pose, calculating a fourth mapping position of the third road feature in the coordinate system of the sample road image according to the disturbance positioning pose and the position of the third road feature in the preset map, and calculating the error between the fourth mapping position and the position of the sample road feature in the sample road image to obtain the disturbance mapping error.
When the road features in a road image, the road features successfully matched in the preset map, and the corresponding positioning pose are known, the mapping error match_err can be expressed by the following function:

$$match\_err = \mathrm{MapMatching}(p_{pose},\; I_{seg},\; I_{map})$$

where p_pose is the positioning pose, I_seg is the road feature in the road image, and I_map is the road feature successfully matched in the preset map.
Step 4a: and solving a mapping error function when the residual errors between the mapping error function and the disturbance mapping errors corresponding to the disturbance positioning poses take the minimum value based on the preset mapping error function related to the positioning error in the target map region to obtain a functional relation between the mapping error and the positioning error in the target map region.
The mapping error function related to the positioning error in the preset target map region can be understood as a preset mapping error function containing unknown quantities. For example, the mapping error function may be set to the following quadratic form:

$$g(\Delta x, \Delta y) = a\,\Delta x^2 + b\,\Delta x\,\Delta y + c\,\Delta y^2 + d\,\Delta x + e\,\Delta y + f$$
The disturbance mapping errors corresponding to the plurality of disturbance positioning poses can then be expressed by the following function:

match_err = MapMatching(p_gt + Δp, I_seg, I_map)
the step may include, in specific implementation:
solving the following minimum function
Figure BDA0002144999350000191
To obtaina 0 、b 0 、c 0 、d 0 、e 0 And f 0 A obtained by solving 0 、b 0 、c 0 、d 0 、e 0 And f 0 And substituting the function after g as a mapping error function. Under the condition that the standard positioning pose is accurate enough, solving the obtained g 0 Should be parabolic.
Wherein the mapping error function is g (Δ x, Δ y), g (Δ x, Δ y) = a Δ x 2 +bΔxΔy+cΔy 2 +dΔx+eΔy+f;p gt For standard positioning pose, disturbance quantity is delta p = { delta x, delta y,0}, delta x, delta y belongs to omega, omega is a target map area, I seg As a sample road feature, I map A third road characteristic; mapMatching (p) gt +Δp,I seg ,I map ) Locating poses p for multiple perturbations gt + Δ p corresponds to the perturbation mapping error. g (. DELTA.x,. DELTA.y) -MapMatching (p) gt +Δp,I seg ,I map ) And representing the residual error between the mapping error function and the disturbance mapping errors corresponding to the plurality of disturbance positioning poses.
Figure BDA0002144999350000192
The expression is a minimum function taking a, b, c, d, e and f as the quantity to be solved. | | | | is a norm symbol.
For each map area in the preset map, the corresponding mapping error function g can be obtained by solving in the above manner.
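Since g is linear in its six coefficients, the minimization of step 4a reduces to ordinary linear least squares. A minimal sketch, assuming the disturbance mapping errors have already been evaluated at sampled perturbations (Δx, Δy):

```python
import numpy as np

def fit_mapping_error_surface(deltas, errors):
    """Fit g(dx, dy) = a*dx**2 + b*dx*dy + c*dy**2 + d*dx + e*dy + f to the
    disturbance mapping errors in the least-squares sense (step 4a).

    deltas -- (M, 2) sampled perturbations (dx, dy) over the map region
    errors -- (M,)   disturbance mapping errors MapMatching(p_gt + dp, ...)
    """
    dx, dy = deltas[:, 0], deltas[:, 1]
    A = np.column_stack([dx**2, dx * dy, dy**2, dx, dy, np.ones_like(dx)])
    coeffs, *_ = np.linalg.lstsq(A, errors, rcond=None)
    return coeffs  # a0, b0, c0, d0, e0, f0
```

With the six returned coefficients substituted into g, one obtains the mapping error function g₀ for that map area.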
To sum up, in this embodiment, when establishing the correspondence between the mapping error and the positioning error, the sample road features of one image frame, the road features successfully matched in the preset map, and the standard positioning pose of that frame are obtained first; a plurality of disturbance quantities are then added to the standard positioning pose, and the correspondence in the map area is obtained by solving the residual function constructed above. This makes it possible to establish the correspondence in different map regions quickly, and it also provides a practical way of determining the positioning error of the vehicle.
In another embodiment of the invention, based on the embodiment shown in fig. 1, the pose regression model is obtained by training through the following steps 1b to 5b.
Step 1b: and acquiring a plurality of sample parking lot images acquired in the initialization area, and a sample vehicle pose and an annotated vehicle pose corresponding to each sample parking lot image.
The annotated vehicle pose can be understood as the ground-truth (standard) value of the vehicle pose corresponding to the sample parking lot image. The sample vehicle pose may be the vehicle pose determined from the IMU's estimate at the moment each sample parking lot image is acquired, or a vehicle pose obtained by adding a preset disturbance (a preset modification) to the annotated vehicle pose. The sample vehicle pose can be understood as the initial vehicle pose value fed into the pose regression model, which regresses the pose for the sample parking lot image starting from that initial value.
In one embodiment, a large number of sample parking lot images may be collected in advance by the image acquisition device in the initialization area, with the sample vehicle poses determined from the IMU's estimates. The annotated vehicle pose corresponding to each sample parking lot image can be determined by offline positioning at the time of collection.
In another embodiment, the road features and virtual driving trajectories in the preset map can be used directly to simulate the acquisition process of the vehicle's image acquisition module, yielding a large number of simulated images to serve as sample parking lot images. The annotated vehicle pose corresponding to each simulated image can be read directly from the preset map.
Step 2b: detect the road features of each sample parking lot image.
For details of this step, see the description of determining the road features of the first road image in step S140.
Step 3b: based on the road features of each sample parking lot image and the corresponding sample vehicle pose, determine a reference vehicle pose through the model parameters of the pose regression model.
When the pose regression model adopts a multi-stage pose regressor, model parameters of the multi-stage pose regressor already trained for other tasks can be used directly as the initial parameter values in this step. Over many training iterations, the model parameters are corrected continuously and gradually approach their true values.
Step 4b: determine the difference between the reference vehicle pose and the annotated vehicle pose. Specifically, a residual function may be employed to determine this difference.
Step 5b: when the difference is greater than a preset difference threshold, correct the model parameters and return to step 3b; when the difference is not greater than the preset difference threshold, the pose regression model is considered trained.
The preset difference threshold is an empirically chosen value; a difference above it means the model still needs training. When correcting the model parameters, the correction may be based on the difference amount, for example on the difference amount together with its trend relative to the previous training iteration, as in the sketch below.
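A minimal sketch of this training loop, assuming a model object that exposes a `regress(features, initial_pose)` method, a flat parameter vector `params`, and a hypothetical `grad` helper returning the parameter gradient of the residual; the patent specifies none of these interfaces.

```python
import numpy as np

def train_pose_regressor(model, samples, diff_threshold=1e-3,
                         lr=1e-4, max_iters=10000):
    """Steps 1b-5b as one loop. `samples` is a list of tuples
    (road_features, sample_pose, annotated_pose)."""
    for _ in range(max_iters):
        diffs, grads = [], []
        for feats, sample_pose, annotated_pose in samples:
            ref_pose = model.regress(feats, sample_pose)   # step 3b
            residual = ref_pose - annotated_pose           # step 4b
            diffs.append(np.linalg.norm(residual))
            grads.append(model.grad(feats, sample_pose, residual))
        if float(np.mean(diffs)) <= diff_threshold:        # step 5b: trained
            break
        model.params -= lr * np.mean(grads, axis=0)        # correct parameters
    return model
```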
In summary, the embodiment provides a specific implementation manner for training the pose regression model, which can improve the accuracy of the pose regression model, and further improve the accuracy of positioning.
In another embodiment of the present invention, based on the embodiment shown in fig. 1, step S160, namely the step of starting the visual positioning based on the fourth positioning pose, may specifically include steps 1c to 4c.
Step 1c: based on the first positioning pose, perform track estimation (dead reckoning) on the second data acquired by the wheel speed detection device to obtain a plurality of vehicle positioning poses.
Wherein the vehicle speed is obtained based on the speed of each wheel of the vehicle detected by the wheel speed detecting device. The second data may be understood as vehicle speeds corresponding to a plurality of time instants.
When performing track estimation on the second data acquired by the wheel speed detection device based on the first positioning pose, the vehicle positioning poses can be determined from the first positioning pose, the second data and the first data. Specifically, taking the vehicle speed in the second data as v(t₁), a plurality of vehicle positioning poses can be determined in the same manner as the second positioning pose is determined in step S120, as sketched below.
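For illustration, a minimal dead-reckoning sketch under a unicycle model, assuming uniformly sampled vehicle speeds from the wheel speed detection device and yaw rates taken from the IMU's first data; the patent does not fix the motion model or the sampling scheme.

```python
import numpy as np

def wheel_odometry_poses(init_pose, speeds, yaw_rates, dt):
    """Integrate wheel-speed-derived vehicle speed and IMU yaw rate from
    the first positioning pose to produce a trajectory of vehicle
    positioning poses (step 1c).

    init_pose -- (x, y, yaw), the first positioning pose
    speeds    -- iterable of vehicle speeds v(t_i) from the second data
    yaw_rates -- iterable of yaw rates w(t_i), assumed from the first data
    dt        -- sampling interval in seconds (assumed uniform)
    """
    x, y, yaw = init_pose
    poses = [(x, y, yaw)]
    for v, w in zip(speeds, yaw_rates):
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        yaw += w * dt
        poses.append((x, y, yaw))
    return poses
```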
Step 2c: acquire a plurality of fourth positioning poses of the vehicle corresponding to the plurality of first road images, where the first road images are images acquired in the initialization area.
The plurality of first road images can be understood as a plurality of images among the image frames acquired by the image acquisition device.
Specifically, a plurality of fourth positioning poses may be determined according to the preset initial positioning frequency by using steps S110 to S150 in fig. 1, and each fourth positioning pose may be stored in the preset storage space. And when the fourth positioning poses of the vehicles corresponding to the plurality of first road images are obtained, obtaining the fourth positioning poses from a preset storage space.
Step 3c: determine the residuals between the plurality of fourth positioning poses and the plurality of vehicle positioning poses.
This step may specifically include: determining the residuals between the fourth positioning poses and the vehicle positioning poses that correspond to one another. The determined residual may be the sum of the residuals between each fourth positioning pose and its corresponding vehicle positioning pose, or a residual vector composed of those residuals.
Step 4c: when the residual is smaller than a preset residual threshold, start visual positioning based on the plurality of fourth positioning poses. The preset residual threshold may be an empirically chosen value; a residual below it means the fourth positioning poses are considered accurate enough for visual positioning to be started from them.
When the visual positioning is started based on the plurality of fourth positioning poses, the visual positioning may be specifically started based on the latest fourth positioning pose.
In summary, this embodiment cross-verifies the initialization poses determined from multiple image frames in the initialization area against the poses determined via the wheel speed detection device to decide whether pose initialization succeeded. This makes it possible to determine more reliably whether the pose initialization of the vehicle was successful, that is, whether the determined positioning pose is accurate enough.
In another embodiment of the present invention, in step 3c, the step of determining residuals between the plurality of fourth positioning poses and the plurality of vehicle positioning poses may specifically include:
solving the following function by the least square method to obtain a rigid transformation matrix T between the plurality of fourth positioning poses and the plurality of vehicle positioning poses:

min over T of Σ_{i=1..N} ‖T·p_i^init − p_i^odom‖²

and substituting the solved T into

r = Σ_{i=1..N} ‖T·p_i^init − p_i^odom‖

to obtain the residual between the plurality of fourth positioning poses and the plurality of vehicle positioning poses, where p_i^init is the i-th fourth positioning pose, p_i^odom is the i-th vehicle positioning pose, N is the total number of fourth positioning poses (equivalently, of vehicle positioning poses), min is the minimum function, and ‖·‖ is a norm symbol.
In the initialization area, the fourth positioning poses determined from the plurality of first road images can be represented by a first trajectory trace_init = {p_1^init, p_2^init, …, p_N^init}, and the vehicle positioning poses determined from the data collected by the wheel speed detection device can be represented by a second trajectory trace_odom = {p_1^odom, p_2^odom, …, p_N^odom}. Solving the minimization above can be understood as determining the minimum amount of transformation needed to transform the first trajectory onto the second trajectory. Computing the residual magnitude of each term between trace_init and trace_odom, i.e. computing r above, gives the matching degree of the two trajectories; when the matching degree is greater than a preset matching-degree threshold, the initial positioning is determined to be successful. This embodiment can determine the residual more accurately.
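One concrete way to obtain T and the residual is the closed-form Kabsch/Umeyama alignment. The sketch below is an assumption-laden illustration: it aligns only the 2D positions of the two trajectories and ignores the heading components, which the patent does not spell out.

```python
import numpy as np

def align_and_residual(trace_init, trace_odom):
    """Estimate the rigid SE(2) transform (R, t) that best maps the
    initialization trajectory onto the wheel-odometry trajectory in the
    least-squares sense, then return the summed position residual.

    trace_init -- (N, 2) positions of the fourth positioning poses
    trace_odom -- (N, 2) positions of the vehicle positioning poses
    """
    mu_i, mu_o = trace_init.mean(axis=0), trace_odom.mean(axis=0)
    H = (trace_init - mu_i).T @ (trace_odom - mu_o)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard reflections
    R = Vt.T @ D @ U.T
    t = mu_o - R @ mu_i
    aligned = trace_init @ R.T + t                         # T applied to trace_init
    residual = float(np.linalg.norm(aligned - trace_odom, axis=1).sum())
    return R, t, residual
```

The returned residual would then be compared against the preset residual threshold of step 4c, or the matching degree derived from it against the matching-degree threshold.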
In another embodiment of the present invention, based on the embodiment shown in fig. 1, step S150, namely the step of matching the road features of the first road image with the road features of each position point in the preset map according to the third positioning pose and determining the fourth positioning pose of the vehicle according to the matching result, may specifically include steps 1d to 4d.
Step 1d: match the road features of the first road image with the road features of each position point in the preset map to obtain the fourth road features that are successfully matched in the preset map.
Step 2d: taking the third positioning pose as the initial value of the estimated pose, determine a reference mapping error between the road features of the first road image and the fourth road features according to the current value of the estimated pose.
In this step, when determining the reference mapping error, the reference mapping error between the road feature of the first road image and the fourth road feature may be determined after mapping the road feature and the fourth road feature to the same coordinate system with reference to one of the two mapping manners provided in step S420.
Step 3d: when the reference mapping error is greater than a preset error threshold, adjust the estimated pose of the vehicle and return to step 2d to re-determine the reference mapping error with the new value of the estimated pose.
A reference mapping error above the preset error threshold means a large difference is considered to exist between the estimated pose and the vehicle's true positioning pose, so the iteration continues.
Step 4d: when the reference mapping error is not greater than the preset error threshold, determine the fourth positioning pose of the vehicle from the current estimated pose of the vehicle.
A reference mapping error at or below the preset error threshold means the estimated pose is considered very close to the vehicle's true positioning pose and the positioning accuracy meets the requirement; the sketch below illustrates the loop.
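A minimal sketch of the iterate-until-below-threshold loop of steps 2d to 4d, with a numerical-gradient pose adjustment standing in for whatever optimizer (e.g. Gauss-Newton) the terminal actually uses; the threshold, step size and pose parameterization here are illustrative assumptions.

```python
import numpy as np

def refine_pose(third_pose, mapping_error, error_threshold=0.1,
                step=1e-2, eps=1e-4, max_iters=100):
    """Iteratively adjust the estimated pose until the reference mapping
    error is no greater than the preset error threshold.

    third_pose    -- (x, y, yaw) third positioning pose, the initial estimate
    mapping_error -- callable(pose) -> scalar reference mapping error
    """
    pose = np.asarray(third_pose, dtype=float)
    for _ in range(max_iters):
        err = mapping_error(pose)                  # step 2d
        if err <= error_threshold:                 # step 4d: accept pose
            break
        grad = np.zeros_like(pose)                 # step 3d: adjust pose
        for k in range(pose.size):
            probe = pose.copy()
            probe[k] += eps
            grad[k] = (mapping_error(probe) - err) / eps
        pose -= step * grad
    return pose                                    # fourth positioning pose
```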
In summary, the embodiment provides a method for determining the positioning pose of the vehicle in an iterative manner based on the matching result between the road features of the road image and the road features in the preset map, so that the positioning pose of the vehicle can be determined more accurately.
Fig. 5 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention. The vehicle-mounted terminal includes: a processor 510, an image acquisition device 520, and an inertial measurement unit 530. The processor 510 includes: the system comprises a data acquisition module, a pose calculation module, an image acquisition module, a first determination module, a second determination module and a vision starting module; (not shown in the figure)
The data acquisition module is used for acquiring first data acquired by the inertia measurement unit when the visual positioning failure of the vehicle in the parking lot is detected;
the pose calculation module is used for calculating a track of the first data based on a first positioning pose of the vehicle before the visual positioning failure to obtain a second positioning pose of the vehicle;
the image acquisition module is used for acquiring a first road image in the parking lot, which is acquired by the image acquisition equipment, when the position indicated by the second positioning pose is determined to be in a preset initialization area in the parking lot; the first road image is an image collected in the initialization area;
the first determining module is used for determining a third positioning pose of the vehicle through the pose regression model based on the road characteristics of the first road image and the second positioning pose; the pose regression model is obtained by training in advance according to a plurality of sample road images collected in the initialization area, corresponding sample vehicle poses and labeled vehicle poses;
the second determining module is used for matching the road characteristics of the first road image with the road characteristics of each position point in the preset map according to the third positioning pose and determining the fourth positioning pose of the vehicle according to the matching result;
and the visual starting module is used for starting visual positioning based on the fourth positioning pose.
In another embodiment of the present invention, based on the embodiment shown in fig. 5, the processor 510 further includes: a failure detection module for detecting whether visual positioning of the vehicle within the parking lot is failed using:
when the vehicle is positioned according to the matching result between the first road feature in the second road image and the road feature pre-established in the preset map to obtain the position and posture to be detected of the vehicle, acquiring a second road feature which is successfully matched with the first road feature in the preset map; the second road image is an image collected in a parking lot;
determining a mapping error between the first road characteristic and the second road characteristic;
determining a target map area where a pose to be detected is located from a plurality of different map areas contained in a preset map;
determining a first positioning error corresponding to a mapping error according to a corresponding relation between the mapping error and the positioning error in a pre-established target map area, and taking the first positioning error as the positioning precision of the pose to be detected;
and determining whether the visual positioning of the vehicle in the parking lot is invalid or not according to the size relation between the positioning precision of the pose to be detected and a preset precision threshold.
In another embodiment of the present invention, based on the embodiment shown in fig. 5, when the failure detection module determines the first positioning error corresponding to the mapping error according to the correspondence between the mapping error and the positioning error in the pre-established target map area, the method includes:
substituting the mapping error cost into the pre-established mapping error function g₀ of the target map region below, and solving for a plurality of positioning errors (Δx, Δy):

g₀(Δx, Δy) = a₀Δx² + b₀ΔxΔy + c₀Δy² + d₀Δx + e₀Δy + f₀ = cost

where a₀, b₀, c₀, d₀, e₀, f₀ are predetermined function coefficients;

determining the maximum value of the plurality of positioning errors obtained by the solution as the first positioning error r corresponding to the mapping error:

r = max ‖(Δx, Δy)‖ subject to g₀(Δx, Δy) = cost

with, in the closed-form solution, the intermediate quantity C = 2(a₀e₀² + c₀d₀² + (f₀ − cost)b₀² − 2b₀d₀e₀ − a₀c₀(f₀ − cost)).
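The closed form can be cross-checked numerically: along each direction (cos t, sin t), the level-set condition g₀ = cost becomes a quadratic in the radius, so sweeping directions and keeping the largest positive real root recovers r, assuming the level set is bounded (which holds when g₀ is parabolic as expected).

```python
import numpy as np

def first_positioning_error(coeffs, cost, n_angles=3600):
    """Largest ||(dx, dy)|| on the level set g0(dx, dy) = cost, found by a
    direction sweep; coeffs = (a0, b0, c0, d0, e0, f0)."""
    a, b, c, d, e, f = coeffs
    r_max = 0.0
    for t in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        ct, st = np.cos(t), np.sin(t)
        # Along (ct, st): g0(r*ct, r*st) - cost = A2*r**2 + A1*r + A0 = 0
        A2 = a * ct * ct + b * ct * st + c * st * st
        A1 = d * ct + e * st
        A0 = f - cost
        roots = np.roots([A2, A1, A0])
        real = roots[np.isreal(roots)].real
        pos = real[real > 0.0]
        if pos.size:
            r_max = max(r_max, float(pos.max()))
    return r_max
```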
In another embodiment of the present invention, based on the embodiment shown in fig. 5, the processor 510 includes: the relation establishing module is used for establishing the corresponding relation between the mapping error and the positioning error in the target map area by adopting the following operations:
acquiring a sample road image and corresponding sample road characteristics acquired in a target map area, and a standard positioning pose of a vehicle corresponding to the sample road image, and acquiring third road characteristics successfully matched with the sample road characteristics in a preset map;
adding a plurality of different disturbance amounts to the standard positioning poses to obtain a plurality of disturbance positioning poses;
determining disturbance mapping errors corresponding to a plurality of disturbance positioning poses according to the sample road characteristics and the third road characteristics;
and solving a mapping error function when the residual errors between the mapping error function and the disturbance mapping errors corresponding to the plurality of disturbance positioning poses take the minimum value based on the preset mapping error function related to the positioning errors in the target map region to obtain the functional relation between the mapping errors and the positioning errors in the target map region.
In another embodiment of the present invention, based on the embodiment shown in fig. 5, when the relationship establishing module solves the mapping error function when the residual errors between the mapping error function and the perturbation mapping errors corresponding to the multiple perturbation positioning poses take the minimum value, the method includes:
solving the following minimum function over the coefficients a, b, c, d, e and f:

min over a, b, c, d, e, f of Σ_{Δx,Δy∈Ω} ‖g(Δx, Δy) − MapMatching(p_gt + Δp, I_seg, I_map)‖²

to obtain a₀, b₀, c₀, d₀, e₀ and f₀, and substituting the solved a₀, b₀, c₀, d₀, e₀ and f₀ into g as the mapping error function;

wherein the mapping error function is g(Δx, Δy) = aΔx² + bΔxΔy + cΔy² + dΔx + eΔy + f; p_gt is the standard positioning pose; the disturbance quantity is Δp = {Δx, Δy, 0}, Δx, Δy ∈ Ω, where Ω is the target map area; I_seg is the sample road feature, I_map is the third road feature; and MapMatching(p_gt + Δp, I_seg, I_map) is the disturbance mapping error corresponding to the disturbance positioning poses p_gt + Δp.
In another embodiment of the present invention, based on the embodiment shown in fig. 5, the processor 510 further includes: the model training module is used for training to obtain a pose regression model by adopting the following operations:
acquiring a plurality of sample parking lot images acquired in an initialization area, and a sample vehicle pose and a labeled vehicle pose corresponding to each sample parking lot image;
detecting road characteristics of each sample parking lot image;
determining a reference vehicle pose through model parameters in a pose regression model based on the road characteristics of each sample parking lot image and the corresponding sample vehicle pose;
determining an amount of difference between the reference vehicle pose and the annotated vehicle pose;
when the difference is larger than a preset difference threshold value, correcting the model parameters, returning to execute the operation of determining the pose of the reference vehicle according to the road characteristics of each sample parking lot image and the corresponding sample vehicle pose and the model parameters in the pose regression model;
and when the difference is not greater than a preset difference threshold value, determining that the pose regression model is trained.
In another embodiment of the present invention, based on the embodiment shown in fig. 5, the visual starting module is specifically configured to:
based on the first positioning pose, track estimation is carried out on second data acquired by the wheel speed detection equipment to obtain a plurality of vehicle positioning poses;
acquiring a plurality of fourth positioning poses of the vehicle corresponding to the plurality of first road images; the first road images are acquired in the initialization area;
determining residuals between the plurality of fourth positioning poses and the plurality of vehicle positioning poses;
and when the residual error is smaller than a preset residual error threshold value, starting visual positioning based on a plurality of fourth positioning poses.
In another embodiment of the present invention, based on the embodiment shown in fig. 5, the determining, by the vision-enabling module, residuals between the fourth plurality of positioning poses and the vehicle positioning poses includes:
solving the following function by the least square method to obtain a rigid transformation matrix T between the plurality of fourth positioning poses and the plurality of vehicle positioning poses:

min over T of Σ_{i=1..N} ‖T·p_i^init − p_i^odom‖²

substituting the solved T into

r = Σ_{i=1..N} ‖T·p_i^init − p_i^odom‖

to obtain the residuals between the fourth positioning poses and the vehicle positioning poses;

where p_i^init is the i-th fourth positioning pose, p_i^odom is the i-th vehicle positioning pose, N is the total number of fourth positioning poses (or of vehicle positioning poses), min is the minimum function, and ‖·‖ is a norm symbol.
The terminal embodiment and the method embodiment shown in fig. 1 are embodiments based on the same inventive concept, and the relevant points can be referred to each other. The terminal embodiment corresponds to the method embodiment, and has the same technical effect as the method embodiment, and for the specific description, reference is made to the method embodiment.
Those of ordinary skill in the art will understand that: the figures are schematic representations of one embodiment, and the blocks or processes shown in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that: modules in the devices in the embodiments may be distributed in the devices in the embodiments according to the description of the embodiments, or may be located in one or more devices different from the embodiments with corresponding changes. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for restarting after a visual positioning failure, comprising:
when the visual positioning failure of the vehicle in the parking lot is detected, acquiring first data acquired by an inertia measurement unit;
performing track calculation on the first data based on a first positioning pose of the vehicle before the visual positioning fails to obtain a second positioning pose of the vehicle;
when the position indicated by the second positioning pose is determined to be in a preset initialization area in the parking lot, acquiring a first road image in the parking lot acquired by image acquisition equipment; wherein the first road image is an image collected in the initialization area;
determining a third positioning pose of the vehicle through a pose regression model based on the road characteristics of the first road image and the second positioning pose; the pose regression model is obtained by training in advance according to a plurality of sample road images collected in the initialization area, corresponding sample vehicle poses and labeled vehicle poses;
according to the third positioning pose, matching the road characteristics of the first road image with the road characteristics of each position point in a preset map, and determining a fourth positioning pose of the vehicle according to a matching result;
and starting visual positioning based on the fourth positioning pose.
2. The method of claim 1, wherein the failure of the visual positioning of the vehicle within the parking lot is detected by:
when a vehicle is positioned according to a matching result between a first road feature in a second road image and a road feature pre-established in a preset map to obtain a position and posture to be detected of the vehicle, acquiring a second road feature which is successfully matched with the first road feature in the preset map; wherein the second road image is an image collected in a parking lot;
determining a mapping error between the first road feature and the second road feature;
determining a target map area where the pose to be detected is located from a plurality of different map areas contained in the preset map;
determining a first positioning error corresponding to the mapping error according to a corresponding relation between the mapping error and the positioning error in a pre-established target map area, and taking the first positioning error as the positioning precision of the pose to be detected;
and determining whether the visual positioning of the vehicle in the parking lot is invalid or not according to the size relation between the positioning precision of the pose to be detected and a preset precision threshold.
3. The method of claim 2, wherein the step of determining the first positioning error corresponding to the mapping error according to the pre-established correspondence between the mapping error and the positioning error in the target map region comprises:
substituting the mapping error cost into the pre-established mapping error function g₀ of the target map region below, and solving for a plurality of positioning errors (Δx, Δy):

g₀(Δx, Δy) = a₀Δx² + b₀ΔxΔy + c₀Δy² + d₀Δx + e₀Δy + f₀ = cost

wherein said a₀, b₀, c₀, d₀, e₀, f₀ are predetermined function coefficients;

determining the maximum value of the plurality of positioning errors obtained by the solution as a first positioning error r corresponding to the mapping error:

r = max ‖(Δx, Δy)‖ subject to g₀(Δx, Δy) = cost

with, in the closed-form solution, the intermediate quantity C = 2(a₀e₀² + c₀d₀² + (f₀ − cost)b₀² − 2b₀d₀e₀ − a₀c₀(f₀ − cost)).
4. A method according to claim 2 or 3, characterized in that the correspondence between mapping errors and positioning errors in the target map area is established in the following way:
acquiring a sample road image and corresponding sample road features acquired in the target map area, and a standard positioning pose of the vehicle corresponding to the sample road image, and acquiring third road features successfully matched with the sample road features in the preset map;
adding a plurality of different disturbance quantities to the standard positioning pose to obtain a plurality of disturbance positioning poses;
determining disturbance mapping errors corresponding to a plurality of disturbance positioning poses according to the sample road characteristics and the third road characteristics;
and solving a mapping error function when the residual errors between the mapping error function and the disturbance mapping errors corresponding to the plurality of disturbance positioning poses take the minimum value based on a preset mapping error function related to the positioning errors in the target map area to obtain a functional relation between the mapping errors and the positioning errors in the target map area.
5. The method of claim 4, wherein solving the mapping error function when the residuals between the mapping error function and the perturbation mapping errors corresponding to the plurality of perturbation positioning poses take a minimum value comprises:
solving the following minimum function over the coefficients a, b, c, d, e and f:

min over a, b, c, d, e, f of Σ_{Δx,Δy∈Ω} ‖g(Δx, Δy) − MapMatching(p_gt + Δp, I_seg, I_map)‖²

to obtain a₀, b₀, c₀, d₀, e₀ and f₀, and substituting the solved a₀, b₀, c₀, d₀, e₀ and f₀ into g as the mapping error function;

wherein the mapping error function is g(Δx, Δy) = aΔx² + bΔxΔy + cΔy² + dΔx + eΔy + f; said p_gt is the standard positioning pose; the disturbance quantity is Δp = {Δx, Δy, 0}, Δx, Δy ∈ Ω, said Ω being the target map area; said I_seg is the sample road feature and said I_map is the third road feature; and said MapMatching(p_gt + Δp, I_seg, I_map) is the disturbance mapping error corresponding to the disturbance positioning poses p_gt + Δp.
6. The method of claim 1, wherein the pose regression model is trained by:
acquiring a plurality of sample parking lot images acquired in the initialization area, and a sample vehicle pose and a labeled vehicle pose corresponding to each sample parking lot image;
detecting road characteristics of each sample parking lot image;
determining a reference vehicle pose through model parameters in a pose regression model based on the road characteristics of each sample parking lot image and the corresponding sample vehicle pose;
determining an amount of difference between the reference vehicle pose and the annotated vehicle pose;
when the difference is larger than a preset difference threshold value, correcting the model parameters, returning to execute the step of determining the reference vehicle pose through model parameters in a pose regression model based on the road characteristics of each sample parking lot image and the corresponding sample vehicle pose;
and when the difference is not greater than the preset difference threshold value, determining that the pose regression model is trained.
7. The method of claim 1, wherein the step of initiating a visual positioning based on the fourth positioning pose comprises:
performing track speculation on second data acquired by the wheel speed detection equipment based on the first positioning poses to obtain a plurality of vehicle positioning poses;
acquiring a plurality of fourth positioning poses of the vehicle corresponding to the plurality of first road images; wherein the plurality of first road images are images acquired in the initialization area;
determining residuals between the plurality of fourth positioning poses and the plurality of vehicle positioning poses;
and when the residual error is smaller than a preset residual error threshold value, starting visual positioning based on the fourth positioning poses.
8. The method of claim 7, wherein the step of determining residuals between the fourth plurality of positioning poses and the vehicle positioning poses comprises:
solving the following function by the least square method to obtain a rigid transformation matrix T between the plurality of fourth positioning poses and the plurality of vehicle positioning poses:

min over T of Σ_{i=1..N} ‖T·p_i^init − p_i^odom‖²

substituting the solved rigid transformation matrix T into

r = Σ_{i=1..N} ‖T·p_i^init − p_i^odom‖

to obtain the residuals between the plurality of fourth positioning poses and the plurality of vehicle positioning poses;

wherein p_i^init is the i-th fourth positioning pose, p_i^odom is the i-th vehicle positioning pose, N is the total number of the fourth positioning poses or of the vehicle positioning poses, min is a minimum function, and ‖·‖ is a norm symbol.
9. A vehicle-mounted terminal characterized by comprising: the system comprises a processor, image acquisition equipment and an inertia measurement unit; the processor includes: the system comprises a data acquisition module, a pose calculation module, an image acquisition module, a first determination module, a second determination module and a vision starting module;
the data acquisition module is used for acquiring first data acquired by the inertia measurement unit when the visual positioning failure of the vehicle in the parking lot is detected;
the pose calculation module is used for calculating a track of the first data based on a first positioning pose of the vehicle before the visual positioning failure to obtain a second positioning pose of the vehicle;
the image acquisition module is used for acquiring a first road image in the parking lot, which is acquired by the image acquisition equipment, when the position indicated by the second positioning pose is determined to be in a preset initialization area in the parking lot; wherein the first road image is an image collected in the initialization area;
a first determination module, configured to determine, based on the road characteristics of the first road image and the second positioning pose, a third positioning pose of the vehicle through a pose regression model; the pose regression model is obtained by training in advance according to a plurality of sample road images collected in the initialization area, corresponding sample vehicle poses and labeled vehicle poses;
the second determining module is used for matching the road characteristics of the first road image with the road characteristics of each position point in a preset map according to the third positioning pose and determining a fourth positioning pose of the vehicle according to a matching result;
and the visual starting module is used for starting visual positioning based on the fourth positioning pose.
10. The terminal of claim 9, wherein the processor further comprises: a failure detection module for detecting whether visual positioning of the vehicle within the parking lot is failed using:
when a vehicle is positioned according to a matching result between a first road feature in a second road image and a road feature pre-established in a preset map to obtain a position and posture to be detected of the vehicle, acquiring a second road feature which is successfully matched with the first road feature in the preset map; wherein the second road image is an image collected in a parking lot;
determining a mapping error between the first road feature and the second road feature;
determining a target map area where the pose to be detected is located from a plurality of different map areas contained in the preset map;
determining a first positioning error corresponding to the mapping error according to a corresponding relation between the mapping error and the positioning error in a pre-established target map area, and taking the first positioning error as the positioning precision of the pose to be detected;
and determining whether the visual positioning of the vehicle in the parking lot is invalid or not according to the size relation between the positioning precision of the pose to be detected and a preset precision threshold.
CN201910681733.9A 2019-07-26 2019-07-26 Restarting method after visual positioning failure and vehicle-mounted terminal Active CN112304322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910681733.9A CN112304322B (en) 2019-07-26 2019-07-26 Restarting method after visual positioning failure and vehicle-mounted terminal


Publications (2)

Publication Number Publication Date
CN112304322A CN112304322A (en) 2021-02-02
CN112304322B true CN112304322B (en) 2023-03-14

Family

ID=74328772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910681733.9A Active CN112304322B (en) 2019-07-26 2019-07-26 Restarting method after visual positioning failure and vehicle-mounted terminal

Country Status (1)

Country Link
CN (1) CN112304322B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113218385B (en) * 2021-05-24 2022-05-27 周口师范学院 High-precision vehicle positioning method based on SLAM
CN114385934A (en) * 2022-03-23 2022-04-22 北京悉见科技有限公司 System for jointly inquiring multiple AR maps

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016197390A (en) * 2015-04-03 2016-11-24 株式会社デンソー Start-up suggestion device and start-up suggestion method
WO2017161588A1 (en) * 2016-03-25 2017-09-28 华为技术有限公司 Positioning method and apparatus
CN109255817A (en) * 2018-09-14 2019-01-22 北京猎户星空科技有限公司 A kind of the vision method for relocating and device of smart machine
CN109544615A (en) * 2018-11-23 2019-03-29 深圳市腾讯信息技术有限公司 Method for relocating, device, terminal and storage medium based on image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018090308A1 (en) * 2016-11-18 2018-05-24 Intel Corporation Enhanced localization method and apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"光学与深度特征融合在机器人场景定位中的应用";刘冰等;《东南大学学报(自然科学版)》;20130720;第43卷;第188-191页 *

Also Published As

Publication number Publication date
CN112304322A (en) 2021-02-02

Similar Documents

Publication Publication Date Title
CN112304302B (en) Multi-scene high-precision vehicle positioning method and device and vehicle-mounted terminal
CN110869700B (en) System and method for determining vehicle position
CN112102646B (en) Parking lot entrance positioning method and device in parking positioning and vehicle-mounted terminal
CN112734852B (en) Robot mapping method and device and computing equipment
CN112307810B (en) Visual positioning effect self-checking method and vehicle-mounted terminal
CN111912416B (en) Method, device and equipment for positioning equipment
JP2020064056A (en) Device and method for estimating position
KR102006291B1 (en) Method for estimating pose of moving object of electronic apparatus
CN111524169A (en) Localization based on image registration of sensor data and map data with neural networks
CN112304322B (en) Restarting method after visual positioning failure and vehicle-mounted terminal
US20230108621A1 (en) Method and system for generating visual feature map
CN113137973A (en) Image semantic feature point truth value determining method and device
CN116184430B (en) Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit
WO2021063756A1 (en) Improved trajectory estimation based on ground truth
CN111862146B (en) Target object positioning method and device
CN109459046B (en) Positioning and navigation method of suspension type underwater autonomous vehicle
CN112304321B (en) Vehicle fusion positioning method based on vision and IMU and vehicle-mounted terminal
CN112284399B (en) Vehicle positioning method based on vision and IMU and vehicle-mounted terminal
CN117649619B (en) Unmanned aerial vehicle visual navigation positioning recovery method, system, device and readable storage medium
CN114882727B (en) Parking space detection method based on domain controller, electronic equipment and storage medium
CN115205828B (en) Vehicle positioning method and device, vehicle control unit and readable storage medium
WO2022179047A1 (en) State information estimation method and apparatus
KR20230108997A (en) Method for visual localization, control server and buiding using the same
Abdu et al. Robust Monocular Visual Odometry Trajectory Estimation in Urban Environments
CN117953074A (en) On-line sensor alignment using feature registration

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220304

Address after: 100083 unit 501, block AB, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing

Applicant after: BEIJING MOMENTA TECHNOLOGY Co.,Ltd.

Address before: 100083 room 28, 4 / F, block a, Dongsheng building, 8 Zhongguancun East Road, Haidian District, Beijing

Applicant before: BEIJING CHUSUDU TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant