CN112150550A - Fusion positioning method and device - Google Patents

Fusion positioning method and device

Info

Publication number
CN112150550A
Authority
CN
China
Prior art keywords
pose information
predicted
repositioning
predicted pose
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011007686.9A
Other languages
Chinese (zh)
Other versions
CN112150550B (en)
Inventor
丁磊
戴必林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Human Horizons Shanghai Autopilot Technology Co Ltd
Original Assignee
Human Horizons Shanghai Autopilot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Human Horizons Shanghai Autopilot Technology Co Ltd filed Critical Human Horizons Shanghai Autopilot Technology Co Ltd
Priority to CN202011007686.9A priority Critical patent/CN112150550B/en
Publication of CN112150550A publication Critical patent/CN112150550A/en
Priority to PCT/CN2021/084792 priority patent/WO2022062355A1/en
Application granted granted Critical
Publication of CN112150550B publication Critical patent/CN112150550B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Abstract

The embodiment of the application provides a fusion positioning method and a fusion positioning device, wherein the fusion positioning method comprises the following steps: acquiring pictures related to the surrounding environment, and acquiring repositioning pose information based on the pictures related to the surrounding environment; fusing at least one piece of predicted pose information in the predicted pose information queue based on the repositioning pose information to obtain fused pose information; the predicted pose information queue comprises a plurality of pieces of predicted pose information; updating the predicted pose information queue based on the fused pose information; and obtaining target positioning pose information based on the updated predicted pose information queue. The method and the device can realize high-precision and high-robustness positioning by fusing multiple positioning methods.

Description

Fusion positioning method and device
Technical Field
The application relates to the field of positioning, in particular to a fusion positioning method based on semantic and corner-point information.
Background
With the rapid development of automatic driving technology, positioning has become an almost indispensable function for autonomous driving. Currently, autonomous vehicles are usually positioned using the Global Positioning System (GPS). However, in indoor areas and areas with relatively dense buildings, GPS performance is poor or the signal is entirely unavailable. How to continuously provide stable, high-precision positioning information for a vehicle or other movable equipment therefore becomes a problem to be solved.
Disclosure of Invention
The embodiments of the present application provide a fusion positioning method and a fusion positioning device to solve the problems in the related art. The technical solution is as follows:
in a first aspect, an embodiment of the present application provides a fusion positioning method, including:
acquiring pictures related to the surrounding environment, and acquiring repositioning pose information based on the pictures related to the surrounding environment;
fusing at least one piece of predicted pose information in the predicted pose information queue based on the repositioning pose information to obtain fused pose information; the predicted pose information queue comprises a plurality of pieces of predicted pose information;
updating the predicted pose information queue based on the fused pose information;
and obtaining target positioning pose information based on the updated predicted pose information queue.
In a second aspect, an embodiment of the present application provides a fusion positioning apparatus, including:
the repositioning module is used for acquiring pictures related to the surrounding environment and acquiring repositioning pose information based on the pictures related to the surrounding environment;
the fusion module is used for fusing at least one piece of predicted pose information in the predicted pose information queue based on the repositioning pose information to obtain fused pose information; the predicted pose information queue comprises a plurality of pieces of predicted pose information;
the updating module is used for updating the predicted pose information queue based on the fused pose information;
and the pose determining module is used for obtaining target positioning pose information based on the updated predicted pose information queue.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of fused positioning.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing computer instructions that, when executed on a computer, perform a method in any one of the above-described aspects.
The advantages or beneficial effects of the above technical solution include at least the following: based on the pictures related to the surrounding environment, the repositioning pose information is obtained by different positioning methods, which ensures that repositioning pose information can be obtained in different environments and guarantees high robustness; the predicted pose information queue is further updated with the repositioning pose information, the target positioning pose information is obtained from the updated predicted pose information queue, and the final target positioning pose information is obtained by fusing the environment-based repositioning pose information with the predicted positioning pose information, thereby achieving high-precision positioning of a vehicle or other movable equipment.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
FIG. 1 is a schematic diagram of a fusion localization method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram illustrating a fusion positioning method according to another embodiment of the present application;
FIG. 3 is a schematic diagram of a fusion localization method according to another embodiment of the present application;
FIG. 4 is a schematic diagram of corner extraction according to another embodiment of the present application;
FIG. 5 is a schematic diagram of a fusion localization method according to another embodiment of the present application;
FIG. 6 is a schematic diagram of a repositioning pose and prediction pose queue association according to another embodiment of the present application;
FIG. 7 is a block diagram of a fusion positioning device according to an embodiment of the present application;
FIG. 8 is a block diagram of a relocation module in a fusion positioning apparatus according to another embodiment of the present application;
FIG. 9 is a block diagram of an electronic device for implementing the fusion positioning method of the embodiments of the present application;
FIG. 10 is a block diagram of a fusion positioning device according to another embodiment of the present application;
fig. 11 is a block diagram illustrating a configuration of a fusion module in a fusion positioning apparatus according to another embodiment of the present application.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
Fig. 1 shows a flow chart of a fusion positioning method according to an embodiment of the present application. As shown in fig. 1, the fusion localization method may include:
step S110: acquiring pictures related to the surrounding environment, and acquiring repositioning pose information based on the pictures related to the surrounding environment;
step S120: fusing at least one piece of predicted pose information in the predicted pose information queue based on the repositioning pose information to obtain fused pose information; the predicted pose information queue comprises a plurality of pieces of predicted pose information;
step S130: updating the predicted pose information queue based on the fused pose information;
step S140: and obtaining target positioning pose information based on the updated predicted pose information queue.
Optionally, the fusion positioning method can be used for vehicles, and can also be used for movable devices such as robots that need to be positioned at any time.
In one embodiment, the picture related to the surrounding environment may be a forward-view picture used for corner-based positioning, or a surround-view picture used for semantic positioning. Corresponding information is then extracted from the picture, and different positioning methods are adopted for different pictures to obtain the repositioning pose information, which specifically includes repositioning position coordinates, a repositioning direction angle and a repositioning time, where the repositioning time is the acquisition time of the picture related to the surrounding environment.
After the repositioning pose information is obtained, as shown in fig. 2, the repositioning pose information is associated with at least one piece of predicted pose information in the predicted pose information queue to obtain fused pose information; the selected predicted pose information can also be called a temporary object. In a specific embodiment, the predicted pose information queue includes a plurality of pieces of predicted pose information calculated with a motion model at different times, where the motion model is a model that can calculate the pose (including position coordinates and direction angle) at time t1 + t' from the pose (including position coordinates and direction angle) at time t1, the calculation parameters and the movement time t'. Each piece of predicted pose information may include predicted position coordinates, a predicted direction angle and a corresponding predicted positioning time. According to the repositioning time in the repositioning pose information, at least one piece of predicted pose information close to the repositioning time is found and fused with the repositioning pose to obtain the fused pose information. The predicted pose information queue is then updated based on the fused pose information; specifically, based on the predicted positioning time of the fused pose information, the predicted pose information at times after that predicted positioning time is recalculated using the motion model and the fused pose information, and the predicted pose information queue is updated.
Target positioning pose information is obtained based on the updated predicted pose information queue, for example, the newest predicted pose information in the updated queue is found and published as the target positioning pose. In this embodiment, the repositioning pose information can be obtained by different positioning methods, the repositioning pose information is then fused with the predicted pose information, the predicted pose information queue is updated, and the final target positioning pose is determined from the updated queue; the fused positioning pose information is more accurate, which guarantees the high precision of the fusion positioning method. Moreover, different positioning methods suit different motion environments, so repositioning pose information can be obtained in any environment, which guarantees the stability and high robustness of the fusion positioning method.
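For illustration, a minimal sketch of such a time-ordered predicted-pose queue might look as follows; the names (Pose, PoseQueue) and the maximum length are illustrative assumptions and do not come from the patent:

```python
# Minimal sketch of the predicted-pose information queue; the newest entry is
# the one published as the target positioning pose.
from dataclasses import dataclass
from typing import List

@dataclass
class Pose:
    x: float      # position coordinate
    y: float      # position coordinate
    theta: float  # direction angle
    t: float      # timestamp: prediction time, or image capture time for a relocation pose

class PoseQueue:
    def __init__(self, max_len: int = 100):
        self.items: List[Pose] = []   # kept sorted by timestamp
        self.max_len = max_len

    def push(self, pose: Pose) -> None:
        self.items.append(pose)
        self.items.sort(key=lambda p: p.t)
        if len(self.items) > self.max_len:           # discard the temporally earliest predictions
            self.items = self.items[-self.max_len:]

    def newest(self) -> Pose:
        return self.items[-1]                        # published as the target positioning pose
```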
Fig. 3 is a flowchart illustrating an implementation of obtaining repositioning pose information in a fusion positioning method according to an embodiment of the present application. As shown in fig. 3, in some embodiments, the process of acquiring relocation pose information in step S110 includes:
step 210: collecting a forward-looking picture under the condition of initial positioning;
step 220: based on the corner points in the forward-looking picture, the position and posture information based on the corner points is obtained by matching with a corner point map, and the position and posture information based on the corner points is used as the repositioning position and posture information.
Optionally, the initial positioning may be the first positioning during driving or moving, or a renewed positioning after a long interval.
In an embodiment, when it is determined that initial positioning is to be performed, a forward-view picture is collected and corners in the forward-view picture are extracted. Optionally, the FAST method can be used for corner extraction; FAST mainly examines the 16 pixels on a circular window around a candidate pixel. Specifically, as shown in fig. 4, p is the central pixel, the pixels marked by white boxes are the 16 pixels to be examined, and the extracted corners are described with BRIEF descriptors. The extracted corners are then matched with the corner map to obtain a plurality of candidate key frames, for example by searching the corner map with a BoW (bag-of-words) dictionary, based on the BRIEF descriptors, for the 4 highest-scoring matching key frames; this step can also be called brute-force matching. After the candidate key frames are obtained, an optimal key frame is selected from them, and whether matching succeeds is determined from the matching corners in the optimal key frame: for example, the corner-map frame with the highest score above a threshold is selected, the corners in the map are matched with the current corners using the Hamming distance, and matching succeeds when the minimum matching Hamming distance is smaller than a threshold. Finally, the repositioning pose information is obtained from the successfully matched corner map; specifically, because the corners in the corner map have 3D positions, matches between the 2D corners extracted from the forward-view picture and the 3D corners in the corner map are obtained, and the repositioning pose of the current forward-view picture in the corner-map coordinate system is finally solved by the PnP method.
Optionally, because there are errors in the matching process, the repositioning pose information calculated by PnP may also have a large error. A fundamental matrix may therefore be established from the 2D corners and compared with the angle calculated by PnP: if the angle error is greater than a threshold, the PnP pose is judged incorrect and positioning fails; if the angle error is smaller than the threshold, the repositioning pose information is returned. The above process of obtaining a corner-based position from a forward-view picture may also be generally referred to as "image processing", as shown in fig. 2. The corner-based positioning method has the advantages of high precision and accurate positioning, which is why it is selected for initial positioning.
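A hedged sketch of this corner-based relocalization step is given below, assuming OpenCV (with opencv-contrib for the BRIEF extractor) and a pre-built corner map holding 3D corner positions and their descriptors. The map structures, thresholds and function name are illustrative assumptions rather than the patent's implementation, and the fundamental-matrix sanity check is omitted:

```python
# FAST corners + BRIEF descriptors, Hamming matching against a corner-map keyframe,
# then PnP (RANSAC) to recover the pose of the forward-view image in the map frame.
import cv2
import numpy as np

def relocalize_from_corners(front_img, map_kps_3d, map_descriptors, K, dist):
    """front_img: grayscale forward-view image; map_kps_3d: Nx3 corner positions in the
    corner-map frame; map_descriptors: their BRIEF descriptors; K, dist: camera intrinsics."""
    fast = cv2.FastFeatureDetector_create(threshold=20)
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()
    kps = fast.detect(front_img, None)
    kps, desc = brief.compute(front_img, kps)
    if desc is None:
        return None                                    # no corners found

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = [m for m in matcher.match(desc, map_descriptors) if m.distance < 50]
    if len(matches) < 10:
        return None                                    # relocalization failed

    pts_2d = np.float32([kps[m.queryIdx].pt for m in matches])
    pts_3d = np.float32([map_kps_3d[m.trainIdx] for m in matches])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts_3d, pts_2d, K, dist)
    return (rvec, tvec) if ok else None                # pose in the corner-map coordinate system
```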
As shown in fig. 3, in some embodiments, the process of acquiring relocation pose information in step S110 further includes:
step 230: collecting a panoramic picture under the condition of non-initial positioning;
step 240: and extracting semantic features in the look-around picture, matching the semantic features with a semantic map to obtain pose information based on semantics, and taking the pose information based on semantics as the repositioning pose information.
In a specific embodiment, when initial positioning has already been performed, non-initial positioning is carried out: a 360-degree surround-view picture is collected and semantic features are extracted from it. The semantic features may include at least one of lane lines, road edges and parking-space points. If the extracted semantic feature is a parking-space point, it is described by its position in the current rectangular coordinate system; if the extracted semantic feature is a lane line or a road edge, it is described by a distance and an angle in polar coordinates. For example, a straight line is represented by x*cos(theta) + y*sin(theta) = r, where theta is the angle of the perpendicular (normal) vector and r is the distance from the line to the origin; for a point on the line with polar coordinates (rho, alpha), the equation becomes rho*cos(theta - alpha) = r. The converted parking-space points and lane lines (or road edges) can use the same filtering equation, in which the gain is calculated as:
K = P'_{n+1} H^T (H P'_{n+1} H^T + R)^{-1}
where P'_{n+1} is the covariance of the vehicle state quantity at time n+1, R is the observation covariance, and H is the corresponding observation Jacobian matrix. The observation Jacobian matrices of the parking-space point and of the lane line (or road edge) are given in the original formula images and are not reproduced here; in the parking-space-point observation Jacobian matrix, x_n^w denotes the x coordinate at time n in the map coordinate system (world coordinate system), and likewise y_n^w denotes the y coordinate at time n in the map coordinate system (world coordinate system).
The description information obtained with the above formulas is compared with the semantic map to obtain the semantics-based pose information; as shown in fig. 2, this process may also be generally referred to as "image processing". Because the corner-based positioning method requires relatively rich scene texture (otherwise positioning fails), occupies more resources during corner-map matching, and is computationally time-consuming, the semantics-based positioning method can compensate for these shortcomings to a certain extent. Therefore, the semantics-based method is preferred for non-initial positioning, which saves computing resources and obtains repositioning pose information quickly.
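As a hedged illustration of this observation step, the sketch below converts a detected line a*x + b*y + c = 0 to the polar form x*cos(theta) + y*sin(theta) = r and computes the gain K = P' H^T (H P' H^T + R)^{-1} with NumPy. Array shapes are assumptions, and the patent's exact Jacobians (given only as formula images) are not reproduced; H is simply passed in:

```python
import numpy as np

def line_to_polar(a: float, b: float, c: float):
    """Convert a line a*x + b*y + c = 0 to (theta, r) with x*cos(theta) + y*sin(theta) = r."""
    norm = np.hypot(a, b)
    theta = np.arctan2(b / norm, a / norm)   # angle of the normal vector
    r = -c / norm                            # signed distance from the line to the origin
    if r < 0:                                # keep r >= 0 by flipping the normal direction
        r, theta = -r, theta + np.pi
    return theta, r

def kalman_gain(P_prior: np.ndarray, H: np.ndarray, R: np.ndarray) -> np.ndarray:
    """K = P' H^T (H P' H^T + R)^-1 for the current observation."""
    S = H @ P_prior @ H.T + R
    return P_prior @ H.T @ np.linalg.inv(S)
```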
In some embodiments, after the step S240, the method further includes:
step 250: judging whether the all-around picture is effective or not;
step 260: collecting a forward-looking picture under the condition that the all-round looking picture is judged to be invalid;
step 270: based on the corner points in the forward-looking picture, the position and posture information based on the corner points is obtained by matching with a corner point map, and the position and posture information based on the corner points is used as the repositioning position and posture information.
In a specific embodiment, judging whether the surround-view picture is valid means judging whether semantics-based pose information can ultimately be obtained from it, which may specifically include: whether semantic features can be extracted from the surround-view picture (if not, the picture is judged invalid), and whether the extracted semantic features can be matched with the semantic map (if not, the picture is judged invalid). After the surround-view picture is judged invalid, a forward-view picture is collected, corner-based pose information is obtained by matching the corners in the forward-view picture with the corner map, and the corner-based pose information is used as the repositioning pose information. The specific steps for obtaining the corner-based pose information from the forward-view picture are the same as in step 220 and are not repeated here. Semantic positioning relies on recognizing at least one of semantic features such as lane lines, road edges and parking-space points; if such features happen not to exist in the environment, or the recognized features cannot be matched with the semantic map, semantics-based positioning cannot be performed and corner-based positioning must be used instead. In this way, continuous positioning can be provided during driving or moving, and more stable repositioning pose information can be obtained.
In some embodiments, as shown in fig. 5, before step S110, the method further includes:
step 510: acquiring a current plane position;
step 520: and according to the plane position, obtaining a corner point map and a semantic map which take the plane position as a circle center and limit the radius range.
In a specific embodiment, the current plane position is acquired: a rough plane position may be obtained with a positioning instrument such as GPS, or by manually clicking on an electronic map, or, when initial positioning has already been performed, from the repositioning pose information of the initial positioning. According to the plane position, the corner map and the semantic map within a limited radius around that position are obtained; the limited radius can be set manually. Because the corresponding corner map or semantic map must be loaded before corner-based or semantics-based positioning, loading all map data at the start would occupy storage space and waste loading time; downloading only the map within a certain range around the rough position saves time and reduces resource occupation.
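A minimal sketch of such radius-limited map loading, assuming map elements carry planar coordinates (the element structure and default radius are assumptions):

```python
import math

def load_local_map(elements, center_xy, radius_m=200.0):
    """elements: iterable of map entries with .x/.y in the map frame;
    returns only those within radius_m of the rough plane position."""
    cx, cy = center_xy
    return [e for e in elements if math.hypot(e.x - cx, e.y - cy) <= radius_m]
```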
In some embodiments, the fusion localization method may further include:
calculating to obtain predicted pose information at a plurality of different moments based on the motion model;
the predicted pose information queue is generated based on the predicted pose information for the plurality of different times.
For example, the predicted pose information at a plurality of different times can be calculated with a motion model. Specifically, the motion model is a model that can calculate the vehicle pose (including position coordinates and direction angle) at time t1 + t' from the pose (including position coordinates and direction angle) at time t1, the calculation parameters and the movement time t'. As shown in fig. 2, the motion model may be a vehicle motion model whose calculation parameters include a wheel-pulse message and a gear message; from the wheel-pulse message, the gear message, the pose information at the previous moment and the movement time since the previous moment, the current predicted pose information can be obtained. Optionally, predicted pose information may be calculated once every fixed interval, and the prediction queue is generated from the predicted pose information at the different times. Optionally, if the queue contains too much predicted pose information, some temporally earlier entries may be discarded according to their predicted positioning times. Positioning based on a motion model is common and simple to compute, but its accuracy is not high; using it as the basis to be fused with the repositioning pose information yields a positioning pose of higher accuracy. Storing the motion-model predictions in a queue makes it possible, during fusion, to select at least one piece of predicted pose information closest in time to the repositioning pose information, so that fused positioning pose information of higher precision can be obtained.
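A hedged sketch of a wheel-odometry prediction step of this kind is shown below; the pulse-to-metre scale, track width and simplified straight-segment update are assumptions and not the patent's exact vehicle model:

```python
# Differential wheel-odometry prediction: wheel pulses and the gear (forward/reverse)
# give a travelled distance and heading change that advance the previous pose.
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    theta: float
    t: float

def predict_pose(prev: Pose, pulses_left: int, pulses_right: int,
                 gear_sign: int, t_now: float,
                 metres_per_pulse: float = 0.02, track_width: float = 1.6) -> Pose:
    """gear_sign: +1 for a forward gear, -1 for reverse."""
    d_left = gear_sign * pulses_left * metres_per_pulse
    d_right = gear_sign * pulses_right * metres_per_pulse
    d = 0.5 * (d_left + d_right)                 # distance travelled by the axle centre
    d_theta = (d_right - d_left) / track_width   # heading change
    theta_mid = prev.theta + 0.5 * d_theta       # midpoint heading for the update
    return Pose(prev.x + d * math.cos(theta_mid),
                prev.y + d * math.sin(theta_mid),
                prev.theta + d_theta,
                t_now)
```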
In some embodiments, the fusion localization method further comprises:
According to the repositioning time of the repositioning pose information, at least one piece of predicted pose information before the repositioning time is obtained from the predicted pose information queue as first predicted pose information, and/or at least one piece of predicted pose information after the repositioning time is obtained as second predicted pose information. Specifically, the repositioning pose information includes the repositioning time, i.e. its acquisition time; based on it, the predicted pose information before and after the repositioning time is determined in the queue. For example, if the repositioning time is AM 8:00, all predicted pose information earlier than AM 8:00 and all predicted pose information later than AM 8:00 are found in the queue. Further, one piece of the predicted pose information earlier than the acquisition time of the repositioning pose information is selected as the first predicted pose information, optionally the temporally closest one: for example, if the predictions earlier than AM 8:00 are at AM 7:40, AM 7:50 and AM 7:58, the prediction at AM 7:58 is used as the first predicted pose information. Similarly, one piece of the predicted pose information later than the acquisition time is selected as the second predicted pose information, also optionally the temporally closest one: for example, if the predictions later than AM 8:00 are at AM 8:40, AM 8:50 and AM 8:58, the prediction at AM 8:40 is used as the second predicted pose information.
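Selecting the closest predictions on either side of the repositioning time from the time-ordered queue can be sketched as follows (function and variable names are assumptions):

```python
from bisect import bisect_left
from typing import List, Optional, Tuple

def neighbours(times: List[float], t_reloc: float) -> Tuple[Optional[int], Optional[int]]:
    """times: ascending predicted-positioning times (seconds); returns the indices of the
    nearest predictions before and after t_reloc, or None if that side does not exist."""
    i = bisect_left(times, t_reloc)
    before = i - 1 if i > 0 else None
    after = i if i < len(times) else None
    return before, after

# e.g. neighbours([28200, 28680, 31200], 28800) -> (1, 2):
# the AM 7:58 and AM 8:40 predictions bracket an AM 8:00 repositioning time.
```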
In a specific embodiment, the repositioning pose information is associated and fused with the first predicted pose information and/or the second predicted pose information, and the resulting fused pose information can be divided into three cases. In the first case, as shown in fig. 2 and fig. 6(a), the repositioning time lies between the predicted positioning times of several pieces of predicted pose information, i.e. both the first predicted pose information at t1 before the repositioning time and the second predicted pose information at t2 after the repositioning time can be determined. The pose increment between t1 and t2 is p = (x, y, theta); the time difference between the repositioning time ti and t2 is dti, and the time difference between t1 and t2 is dt2, so the pose increment from time ti to time t2 is p * dti / dt2, and a recursion value of the pose at time t2 is obtained by linear interpolation. The covariance is then updated; the covariance update equation is given in the original formula image and is not reproduced here,
where f_xk can be calculated with the function CaculateFxk(dxR, PoseTheta, dTab), and p_k can be calculated with the function CaculatePK(vehicle_RR, vehicle_RL, PoseTheta, dTheta). Finally, EKF fusion is performed based on the recursion value of the pose, the repositioning pose information at ti and the updated covariance to obtain the fused pose information.
In the second case, the repositioning time ti is too early, earlier than the predicted positioning times of all the predicted pose information in the queue, as shown in fig. 2 and fig. 6(b). In this case the predicted pose information at t2 after the repositioning time is taken: the repositioning pose information is discarded directly and the prediction at t2 is calculated as the fused pose information. Optionally, t2 is the earliest predicted pose information in the queue.
In the last case, the repositioning time ti is too late, later than the predicted positioning times of all the predicted pose information in the queue, i.e. only the first predicted pose information at t1 before the repositioning time can be determined, as shown in fig. 2 and fig. 6(c). If the difference between the repositioning time and the predicted positioning time of t1 is greater than a given threshold, the repositioning pose information at ti is discarded directly and the first predicted pose information at t1 is used as the fused pose information. If the difference is less than or equal to the given threshold, a piece of predicted pose information at a time t' before the first predicted pose information is selected from the queue; combining the information at t1 and at t', a recursion value of the pose at time t1 is obtained by linear interpolation, and the covariance is then updated (for the specific steps of calculating the recursion value and updating the covariance, refer to the first case). Finally, EKF fusion is performed based on the recursion value of the pose, the repositioning pose information at ti and the updated covariance to obtain the fused pose information.
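Under one plausible reading of these three cases, the association and fusion could be sketched as below: the repositioning pose is propagated to the nearest later prediction time with a linearly scaled pose increment and then blended with that prediction by an EKF-style update. The identity observation model, gain form and threshold value are assumptions, and the patent's covariance-update formula (given only as an image) is not reproduced:

```python
import numpy as np

def ekf_blend(pred_xyt, obs_xyt, P, R):
    """EKF measurement update with an identity observation model H = I (an assumption)."""
    pred = np.asarray(pred_xyt, dtype=float)
    obs = np.asarray(obs_xyt, dtype=float)
    H = np.eye(3)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # gain, as in the formula above
    fused = pred + K @ (obs - pred)
    return fused, (np.eye(3) - K @ H) @ P

def fuse(reloc, queue, P, R, late_threshold=0.5):
    """reloc and queue entries are (x, y, theta, t); queue is sorted by t.
    Returns (fused xy-theta array, fusion time, updated covariance)."""
    t_r = reloc[3]
    before = [q for q in queue if q[3] <= t_r]
    after = [q for q in queue if q[3] > t_r]

    if not before:                                   # case 2: relocation older than the whole queue
        return np.asarray(after[0][:3]), after[0][3], P        # discard the relocation pose
    if not after:                                    # case 3: relocation newer than the whole queue
        last = before[-1]
        if t_r - last[3] > late_threshold:
            return np.asarray(last[:3]), last[3], P            # too late: discard the relocation pose
        after = [last]                               # fuse at the last prediction time instead
        before = before[:-1] or [last]
    p1, p2 = before[-1], after[0]                    # case 1 (and the case-3 fall-through)
    inc = np.asarray(p2[:3]) - np.asarray(p1[:3])    # pose increment between the two predictions
    scale = (p2[3] - t_r) / max(p2[3] - p1[3], 1e-6)
    reloc_at_t2 = np.asarray(reloc[:3]) + inc * scale          # relocation propagated to time t2
    fused, P_new = ekf_blend(p2[:3], reloc_at_t2, P, R)
    return fused, p2[3], P_new
```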
In some embodiments, updating the predicted pose information queue further comprises: updating the other predicted pose information in the queue whose time is after the time of the fused pose information; specifically, this updating step uses the motion model containing the wheel-pulse message and the gear message to obtain the updated predicted pose information queue. For example, after the fused pose information is obtained, all predicted pose information whose predicted positioning time lies after the time of the fused pose information is found in the queue; for instance, after fused pose information with a time of AM 8:40 is obtained, it is inserted into the predicted pose queue, and the predicted pose information at AM 8:50 and AM 8:58 in the queue is recalculated with the motion model. Optionally, the map point cloud can be published according to the fused pose information.
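A minimal sketch of re-propagating the queue after the fused pose is inserted; here the relative increments stored in the old queue stand in for re-running the wheel-pulse motion model, which is an assumption made for brevity:

```python
def repropagate(queue, fused_xyt, t_fused):
    """queue: list of (x, y, theta, t) sorted by t; fused_xyt: (x, y, theta).
    Returns the queue with the fused pose inserted and all later predictions recomputed."""
    kept = [q for q in queue if q[3] <= t_fused]
    later = [q for q in queue if q[3] > t_fused]
    updated = kept + [(fused_xyt[0], fused_xyt[1], fused_xyt[2], t_fused)]
    base = kept[-1] if kept else updated[-1]   # reference for the stored relative increments
    prev = updated[-1]
    for q in later:                            # re-chain each later prediction from the fused pose
        dx, dy, dth = q[0] - base[0], q[1] - base[1], q[2] - base[2]
        prev = (prev[0] + dx, prev[1] + dy, prev[2] + dth, q[3])
        updated.append(prev)
        base = q
    return updated
```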
In some embodiments, the step S140 further includes:
and obtaining target positioning pose information based on the updated predicted pose information queue. Specifically, as shown in fig. 2, based on the updated predicted pose information queue, the newly added predicted pose information in the predicted pose information is found and issued as the target positioning pose. Optionally, prior to release, the latest predicted pose information may also be stored with the corresponding process noise, jacobian matrix, and the like.
Fig. 7 shows a block diagram of a fusion positioning apparatus 700 according to an embodiment of the present application. As shown in fig. 7, the apparatus may include:
the repositioning module 710 is configured to acquire a picture related to a surrounding environment, and acquire repositioning pose information based on the picture related to the surrounding environment;
a fusion module 720, configured to fuse at least one piece of predicted pose information in the predicted pose information queue based on the relocation pose information to obtain fused pose information; the predicted pose information queue comprises a plurality of pieces of predicted pose information;
an updating module 730, configured to update the predicted pose information queue based on the fused pose information;
and the pose determining module 740 is configured to obtain target positioning pose information based on the updated predicted pose information queue.
In one embodiment, as shown in FIG. 8, the relocation module 710 includes:
a first forward-view picture collecting unit 711 for collecting a forward-view picture in case of performing initial positioning;
the first corner positioning unit 712 is configured to obtain corner-based pose information by matching with a corner map based on corners in the forward-looking picture, and use the corner-based pose information as the repositioning pose information.
In one embodiment, as shown in FIG. 8, the relocation module 710 further includes:
a panoramic image acquisition unit 713, configured to acquire a panoramic image in the case of performing non-initial positioning;
and the semantic locating unit 714 is used for extracting semantic features in the look-around picture, obtaining pose information based on semantics through matching with a semantic map, and taking the pose information based on semantics as the repositioning pose information.
In one embodiment, as shown in FIG. 8, the relocation module 710 further includes:
a judging unit 715, configured to judge whether the all-around picture is valid;
a second forward-view picture collecting unit 716, configured to collect a forward-view picture when the panoramic picture is determined to be invalid;
a second corner point positioning unit 717, configured to obtain corner point-based pose information by matching with a corner point map based on a corner point in the forward-looking picture, and use the corner point-based pose information as the repositioning pose information.
In one embodiment, as shown in fig. 10, the fusion positioning device 700 further comprises:
a position obtaining module 750, configured to obtain a current plane position;
and the map loading module 760 is configured to obtain a corner point map and a semantic map within a limited radius range with the plane position as a center according to the plane position.
In one embodiment, the fusion positioning device 700 further comprises:
calculating to obtain predicted pose information at a plurality of different moments based on the motion model;
the predicted pose information queue is generated based on the predicted pose information for the plurality of different times.
In one embodiment, as shown in fig. 11, the fusion module 720 further comprises:
a predicted pose information selection unit 721 that acquires, from the predicted pose information queue, at least one piece of predicted pose information before the relocation time as first predicted pose information and/or at least one piece of predicted pose information after the relocation time as second predicted pose information, according to the relocation time of the relocation pose information;
and a predicted pose information fusion unit 722, configured to fuse, based on the repositioning pose information, the first predicted pose information and/or the second predicted pose information, to obtain fused pose information.
In one embodiment, the update module 730 further comprises:
and an update queue unit 731, configured to update other predicted pose information in the predicted pose information queue after the fused pose information corresponds to time, so as to obtain an updated predicted pose information queue.
In one embodiment, the pose determination module 740 includes:
an obtaining unit 741, configured to obtain target positioning pose information based on the target prediction pose information.
The functions of each module in each apparatus in the embodiment of the present application may refer to corresponding descriptions in the above method, and are not described herein again.
It should be noted that, although the fusion positioning method and apparatus are described above as examples, those skilled in the art will understand that the present application is not limited thereto. In fact, users can flexibly configure the fusion positioning method and apparatus according to personal preference and/or the actual application scenario: for example, the repositioning method is not limited to corner-based or semantics-based positioning, the predicted pose information is not limited to a motion model, and other positioning models can be adopted, as long as positioning with high precision and high robustness is finally obtained.
Therefore, by fusion positioning, the fusion positioning method and device according to the above embodiments of the present application can combine multiple positioning methods to obtain a more accurate and stable positioning result.
Fig. 9 shows a block diagram of an electronic device according to an embodiment of the present application. As shown in fig. 9, the electronic apparatus includes: a memory 910 and a processor 920, the memory 910 having stored therein instructions executable on the processor 920. The processor 920, when executing the instructions, implements the fusion positioning method in the above embodiments. The number of the memory 910 and the processor 920 may be one or more. The electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
The electronic device may further include a communication interface 930 for communicating with an external device for data interactive transmission. The various devices are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor 920 may process instructions for execution within the electronic device, including instructions stored in or on a memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories and multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
Optionally, in an implementation, if the memory 910, the processor 920 and the communication interface 930 are integrated on a chip, the memory 910, the processor 920 and the communication interface 930 may complete communication with each other through an internal interface.
It should be understood that the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor or any conventional processor. It is noted that the processor may be a processor supporting the Advanced RISC Machine (ARM) architecture.
Embodiments of the present application provide a computer-readable storage medium (such as the above-mentioned memory 910) storing computer instructions, which when executed by a processor implement the methods provided in embodiments of the present application.
Alternatively, the memory 910 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created from use of the fusion-positioning electronic device, and the like. Further, the memory 910 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 910 may optionally include memory located remotely from the processor 920, which may be connected to the fusion-positioning electronic device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. And the scope of the preferred embodiments of the present application includes other implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. All or part of the steps of the methods of the above embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module may also be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present application, and these should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. A fusion localization method, comprising:
acquiring pictures related to the surrounding environment, and acquiring repositioning pose information based on the pictures related to the surrounding environment;
fusing at least one piece of predicted pose information in the predicted pose information queue based on the repositioning pose information to obtain fused pose information; the predicted pose information queue comprises a plurality of pieces of predicted pose information;
updating the predicted pose information queue based on the fused pose information;
and obtaining target positioning pose information based on the updated predicted pose information queue.
2. The fusion positioning method according to claim 1, wherein the acquiring of the picture related to the surrounding environment and the acquiring of the repositioning pose information based on the picture related to the surrounding environment comprise:
collecting a forward-looking picture under the condition of initial positioning;
based on the corner points in the forward-looking picture, obtaining position and posture information based on the corner points by matching with a corner point map, and taking the position and posture information based on the corner points as the repositioning position and posture information.
3. The fusion positioning method according to claim 1, wherein the acquiring of the picture related to the surrounding environment and the acquiring of the repositioning pose information based on the picture related to the surrounding environment comprise:
collecting a panoramic picture under the condition of non-initial positioning;
and extracting semantic features in the all-around-view picture, matching the semantic features with a semantic map to obtain pose information based on semantics, and taking the pose information based on semantics as the repositioning pose information.
4. The fusion localization method of claim 3, further comprising:
judging whether the all-around picture is effective or not;
collecting a forward-looking picture under the condition that the all-around picture is judged to be invalid;
based on the corner points in the forward-looking picture, obtaining position and posture information based on the corner points by matching with a corner point map, and taking the position and posture information based on the corner points as the repositioning position and posture information.
5. The fusion localization method of any one of claims 2-4, further comprising:
acquiring a current plane position;
and acquiring a corner point map and a semantic map within a limited radius range by taking the plane position as a circle center according to the plane position.
6. The fusion localization method of claim 1, further comprising:
calculating to obtain predicted pose information at a plurality of different moments based on the motion model;
generating the predicted pose information queue based on the predicted pose information for the plurality of different times.
7. The fusion positioning method according to claim 1, wherein fusing at least one of the predicted pose information in the predicted pose information queue based on the repositioning pose information to obtain fused pose information comprises:
according to the repositioning time of the repositioning pose information, at least one piece of predicted pose information before the repositioning time is obtained from the predicted pose information queue to serve as first predicted pose information, and/or at least one piece of predicted pose information after the repositioning time is obtained from the predicted pose information queue to serve as second predicted pose information;
and fusing the repositioning pose information and the first prediction pose information and/or the second prediction pose information to obtain fused pose information.
8. The method of claim 7, wherein the updating the queue of predicted pose information further comprises:
and updating other prediction pose information after the fused pose information corresponds to time in the prediction pose information queue to obtain an updated prediction pose information queue.
9. The fusion positioning method according to claim 1, wherein obtaining target positioning pose information based on the updated predicted pose information queue comprises:
and using the latest generated predicted pose information in the updated predicted pose information queue as target positioning pose information.
10. A fusion positioning apparatus, comprising:
the repositioning module is used for acquiring pictures related to the surrounding environment and acquiring repositioning pose information based on the pictures related to the surrounding environment;
the fusion module is used for fusing at least one piece of predicted pose information in the predicted pose information queue based on the repositioning pose information to obtain fused pose information; the predicted pose information queue comprises a plurality of pieces of predicted pose information;
the updating module is used for updating the predicted pose information queue based on the fused pose information;
and the pose determining module is used for obtaining target positioning pose information based on the updated predicted pose information queue.
11. The apparatus of claim 10, wherein the relocation module comprises:
the first front-view picture acquisition unit is used for acquiring a front-view picture under the condition of initial positioning;
and the first corner positioning unit is used for obtaining corner-based pose information by matching with a corner map based on corners in the forward-looking picture, and taking the corner-based pose information as the repositioning pose information.
12. The apparatus of claim 10, wherein the relocation module comprises:
the all-round-view picture acquisition unit is used for acquiring all-round-view pictures under the condition of non-initial positioning;
and the semantic positioning unit is used for extracting semantic features in the look-around picture, obtaining pose information based on semantics through matching with a semantic map, and taking the pose information based on semantics as the repositioning pose information.
13. The apparatus of claim 12, wherein the relocation module further comprises:
the judging unit is used for judging whether the all-around picture is valid or not;
the second front-view picture acquisition unit is used for acquiring a front-view picture under the condition that the all-around view picture is judged to be invalid;
and the second corner positioning unit is used for obtaining corner-based pose information by matching with a corner map based on corners in the forward-looking picture, and taking the corner-based pose information as the repositioning pose information.
14. The apparatus according to any one of claims 11-13, further comprising:
the position acquisition module is used for acquiring the current plane position;
and the map loading module is used for acquiring a corner point map and a semantic map within a limited radius range by taking the plane position as a circle center according to the plane position.
15. The apparatus of claim 10, further comprising:
calculating to obtain predicted pose information at a plurality of different moments based on the motion model;
generating the predicted pose information queue based on the predicted pose information for the plurality of different times.
16. The apparatus of claim 10, wherein the fusion module further comprises:
a predicted pose information selecting unit configured to acquire, from the predicted pose information queue, at least one piece of predicted pose information before the relocation time as first predicted pose information and/or at least one piece of predicted pose information after the relocation time as second predicted pose information, according to the relocation time of the relocation pose information;
and the prediction pose information fusion unit is used for fusing the repositioning pose information with the first prediction pose information and/or the second prediction pose information to obtain fused pose information.
17. The apparatus of claim 16, wherein the update module further comprises:
and the updating queue unit is used for updating other predicted pose information after the fused pose information corresponds to time in the predicted pose information queue to obtain an updated predicted pose information queue.
18. The apparatus of claim 10, wherein the pose determination module further comprises:
and the obtaining unit is used for taking the latest generated predicted pose information in the updated predicted pose information queue as target positioning pose information.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A computer readable storage medium having stored therein computer instructions which, when executed by a processor, implement the method of any one of claims 1-9.
CN202011007686.9A 2020-09-23 2020-09-23 Fusion positioning method and device Active CN112150550B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011007686.9A CN112150550B (en) 2020-09-23 2020-09-23 Fusion positioning method and device
PCT/CN2021/084792 WO2022062355A1 (en) 2020-09-23 2021-03-31 Fusion positioning method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011007686.9A CN112150550B (en) 2020-09-23 2020-09-23 Fusion positioning method and device

Publications (2)

Publication Number Publication Date
CN112150550A (en) 2020-12-29
CN112150550B (en) 2021-07-27

Family

ID=73897813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011007686.9A Active CN112150550B (en) 2020-09-23 2020-09-23 Fusion positioning method and device

Country Status (2)

Country Link
CN (1) CN112150550B (en)
WO (1) WO2022062355A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109307508B (en) * 2018-08-29 2022-04-08 中国科学院合肥物质科学研究院 Panoramic inertial navigation SLAM method based on multiple key frames
CN109945858B (en) * 2019-03-20 2021-04-13 浙江零跑科技有限公司 Multi-sensing fusion positioning method for low-speed parking driving scene
CN110109465A (en) * 2019-05-29 2019-08-09 集美大学 A kind of self-aiming vehicle and the map constructing method based on self-aiming vehicle
CN111274974B (en) * 2020-01-21 2023-09-01 阿波罗智能技术(北京)有限公司 Positioning element detection method, device, equipment and medium
CN112150550B (en) * 2020-09-23 2021-07-27 华人运通(上海)自动驾驶科技有限公司 Fusion positioning method and device

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9440354B2 (en) * 2009-11-06 2016-09-13 Irobot Corporation Localization by learning of wave-signal distributions
CN110231028A (en) * 2018-03-05 2019-09-13 北京京东尚科信息技术有限公司 Aircraft navigation methods, devices and systems
US20200047340A1 (en) * 2018-08-13 2020-02-13 Beijing Jingdong Shangke Information Technology Co., Ltd. System and method for autonomous navigation using visual sparse map
CN110147705A (en) * 2018-08-28 2019-08-20 北京初速度科技有限公司 A kind of vehicle positioning method and electronic equipment of view-based access control model perception
CN110118554A (en) * 2019-05-16 2019-08-13 深圳前海达闼云端智能科技有限公司 SLAM method, apparatus, storage medium and device based on visual inertia
CN110986988A (en) * 2019-12-20 2020-04-10 上海有个机器人有限公司 Trajectory estimation method, medium, terminal and device fusing multi-sensor data
CN111098335A (en) * 2019-12-26 2020-05-05 浙江欣奕华智能科技有限公司 Method and device for calibrating odometer of double-wheel differential drive robot
CN111174782A (en) * 2019-12-31 2020-05-19 智车优行科技(上海)有限公司 Pose estimation method and device, electronic equipment and computer readable storage medium
CN111220154A (en) * 2020-01-22 2020-06-02 北京百度网讯科技有限公司 Vehicle positioning method, device, equipment and medium
CN111272165A (en) * 2020-02-27 2020-06-12 清华大学 Intelligent vehicle positioning method based on characteristic point calibration
CN111536964A (en) * 2020-07-09 2020-08-14 浙江大华技术股份有限公司 Robot positioning method and device, and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022062355A1 (en) * 2020-09-23 2022-03-31 华人运通(上海)自动驾驶科技有限公司 Fusion positioning method and apparatus
CN114295126A (en) * 2021-12-20 2022-04-08 华人运通(上海)自动驾驶科技有限公司 Fusion positioning method based on inertial measurement unit
CN114295126B (en) * 2021-12-20 2023-12-26 华人运通(上海)自动驾驶科技有限公司 Fusion positioning method based on inertial measurement unit

Also Published As

Publication number Publication date
CN112150550B (en) 2021-07-27
WO2022062355A1 (en) 2022-03-31

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant