WO2022062355A1 - Fusion positioning method and apparatus - Google Patents


Info

Publication number
WO2022062355A1
Authority
WO
WIPO (PCT)
Prior art keywords
pose information
predicted
predicted pose
queue
positioning
Prior art date
Application number
PCT/CN2021/084792
Other languages
French (fr)
Chinese (zh)
Inventor
丁磊
戴必林
Original Assignee
华人运通(上海)自动驾驶科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华人运通(上海)自动驾驶科技有限公司 filed Critical 华人运通(上海)自动驾驶科技有限公司
Publication of WO2022062355A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/10 — Segmentation; Edge detection
    • G06T 7/13 — Edge detection
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 — Scenes; Scene-specific elements
    • G06V 20/50 — Context or environment of the image
    • G06V 20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Definitions

  • the present application relates to the field of positioning, and in particular, to a fusion positioning method based on semantics and corner point information.
  • GPS: Global Positioning System
  • the embodiments of the present application provide a fusion positioning method and device to solve the problems existing in the related art, and the technical solutions are as follows:
  • an embodiment of the present application provides a fusion positioning method, including:
  • At least one piece of predicted pose information in the predicted pose information queue is fused to obtain the fused pose information, wherein the predicted pose information queue contains multiple pieces of predicted pose information;
  • the target positioning pose information is obtained.
  • an embodiment of the present application provides a fusion positioning device, including:
  • the relocation module is used to collect pictures related to the surrounding environment, and obtain relocation pose information based on the pictures related to the surrounding environment;
  • a fusion module configured to fuse at least one piece of predicted pose information in the predicted pose information queue based on the repositioning pose information to obtain the fused pose information, wherein the predicted pose information queue contains multiple pieces of predicted pose information;
  • an update module for updating the predicted pose information queue based on the fused pose information;
  • the pose determination module is used for obtaining target positioning pose information based on the updated predicted pose information queue.
  • an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively connected with the at least one processor, wherein the memory stores instructions executable by the at least one processor, so that the at least one processor can execute the above fusion positioning method.
  • embodiments of the present application provide a computer-readable storage medium, where the computer-readable storage medium stores computer instructions, and when the computer instructions are executed on a computer, the method in any one of the implementation manners of the above aspects is executed.
  • the present invention obtains the repositioning pose information through different positioning methods based on pictures of the surrounding environment, which ensures that the repositioning pose information can be obtained in different environments and thus achieves high robustness; the present invention also uses the repositioning pose information to update the predicted pose information queue and obtains the target positioning pose information from the updated queue. By fusing the environment-based repositioning pose information with the predicted positioning pose information to finally obtain the target positioning pose information, high-precision positioning of a vehicle or other movable machinery and equipment can be realized.
  • FIG. 1 is a schematic diagram of a fusion positioning method according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a fusion positioning method according to another embodiment of the present application.
  • FIG. 3 is a schematic diagram of a fusion positioning method according to another embodiment of the present application.
  • FIG. 4 is a schematic diagram of corner extraction according to another embodiment of the present application.
  • FIG. 5 is a schematic diagram of a fusion positioning method according to another embodiment of the present application.
  • FIG. 6 is a schematic diagram illustrating the association between a repositioned pose and a predicted pose queue according to another embodiment of the present application.
  • FIG. 7 is a structural block diagram of a fusion positioning apparatus according to an embodiment of the present application.
  • FIG. 8 is a structural block diagram of a relocation module in a fusion location device according to another embodiment of the present application.
  • FIG. 9 is a block diagram of an electronic device used to implement the fusion positioning method according to the embodiment of the present application.
  • FIG. 10 is a structural block diagram of a fusion positioning apparatus according to another embodiment of the present application.
  • FIG. 11 is a structural block diagram of a fusion module in a fusion positioning apparatus according to another embodiment of the present application.
  • FIG. 1 shows a flowchart of a fusion positioning method according to an embodiment of the present application.
  • the fusion positioning method may include:
  • Step S110 Collect pictures related to the surrounding environment, and obtain repositioning pose information based on the pictures related to the surrounding environment;
  • Step S120: fuse at least one piece of predicted pose information in the predicted pose information queue based on the repositioned pose information to obtain the fused pose information, wherein the predicted pose information queue includes multiple pieces of predicted pose information;
  • Step S130: based on the fused pose information, update the predicted pose information queue;
  • Step S140 Based on the updated predicted pose information queue, obtain target positioning pose information.
  • the fusion positioning method can be used for vehicles, and can also be used for mobile devices or equipment that need to be positioned at any time, such as robots.
  • the picture related to the surrounding environment may be a front-view picture used for corner localization, or a look-around picture used for semantic localization.
  • the corresponding information in the picture is extracted, and based on different pictures, different positioning methods are adopted to obtain the repositioning pose information.
  • the repositioning pose information may specifically include the repositioning position coordinates, the repositioning direction angle, and the repositioning time, wherein the repositioning time is the acquisition time of the above picture related to the surrounding environment.
  • the predicted pose information queue contains multiple pieces of predicted pose information calculated at different times using a motion model. The motion model takes the pose (including position coordinates and direction angle) at a time t1, the calculation parameters, and the motion time t', and calculates the pose (including position coordinates and direction angle) at time t1 + t'. Each piece of predicted pose information may include predicted position coordinates, a predicted direction angle, and the corresponding predicted positioning time.
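A minimal sketch of such a planar dead-reckoning motion model is shown below. The patent does not specify the model's internals, so the speed `v` and yaw rate `omega` here are hypothetical stand-ins for its calculation parameters (e.g. quantities derived from wheel pulses and gear position):

```python
import math

def predict_pose(x, y, theta, v, omega, dt):
    """Propagate a planar pose (x, y, heading theta) forward by dt seconds.

    Illustrative dead reckoning: v (speed) and omega (yaw rate) stand in
    for the calculation parameters mentioned in the text, which the patent
    does not fix in detail.
    """
    new_x = x + v * dt * math.cos(theta)
    new_y = y + v * dt * math.sin(theta)
    new_theta = theta + omega * dt
    return new_x, new_y, new_theta

# The pose at time t1, plus motion time t', yields the pose at t1 + t'.
pose = (0.0, 0.0, 0.0)
pose = predict_pose(*pose, v=2.0, omega=0.0, dt=0.5)  # drive straight: advances 1 m
```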
  • Based on the repositioning time in the repositioning pose information, at least one piece of predicted pose information close to the repositioning time is found, and the repositioning pose is fused with it to obtain the fused pose information. Then, based on the fused pose information, the predicted pose information queue is updated. Specifically, using the motion model and the fused pose information, the pieces of predicted pose information in the queue whose predicted positioning time is after that of the fused pose information are recalculated, and the queue is updated accordingly.
  • Finally, the target positioning pose information is obtained; for example, based on the updated predicted pose information queue, the most recently added piece of predicted pose information is taken as the target positioning pose.
  • In this way, the repositioning pose information can be obtained by different positioning methods and then fused with the predicted pose information to update the predicted pose information queue, and the final target positioning pose is determined from the updated queue. The fused positioning pose information is more accurate, which ensures the high precision of the fusion positioning method; moreover, different positioning methods suit different moving and driving environments, so repositioning pose information can be obtained regardless of the environment, guaranteeing the stability and high robustness of the fusion positioning method.
  • FIG. 3 is a flowchart of an implementation of obtaining repositioning pose information in a fusion positioning method according to an embodiment of the present application.
  • the process of acquiring the repositioning pose information in the above step S110 includes:
  • Step 210 in the case of performing initial positioning, collect a front-view picture
  • Step 220 Based on the corners in the front-view picture, by matching with the corner map, obtain corner-based pose information, and use the corner-based pose information as the repositioning pose information.
  • the initial positioning may be the first positioning during driving or moving, or the second positioning after a long period of time.
  • a front-view image is collected, and corner points in the front-view image are extracted.
  • a Fast method can be used to extract corner points.
  • the FAST method examines the 16 pixels lying on a circular window around a candidate pixel; as shown in Figure 4, p is the center pixel, and the pixels marked by white boxes are the 16 circle pixels to be tested. The BRIEF descriptor is then used to describe the extracted corners. Next, the extracted corners are matched against the corner map to obtain multiple candidate keyframes; for example, based on the BRIEF descriptors, a BoW (bag-of-words) dictionary is used to search for the four highest-scoring matching keyframes in the corner map.
  • This step may also be called brute-force matching. After obtaining multiple candidate keyframes, the optimal keyframe is selected from them, and whether the matching succeeds is then determined based on the matching corners in the optimal keyframe; for example, the map frame with the highest score above a threshold is selected, and the Hamming distance is used to match the corners in the map against the current corners. When the minimum Hamming distance of a match is below a threshold, the matching is considered successful. Finally, the repositioning pose information is obtained from the successfully matched corner map. Specifically, because the corners in the corner map carry 3D positions, 2D-3D correspondences between the 2D corners extracted from the front-view image and the 3D corners in the map are established and solved by the PnP method, finally yielding the repositioning pose of the current front-view image in the corner-map coordinate system.
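The Hamming-distance matching step above can be sketched as follows. This is an illustrative simplification, not code from the patent: BRIEF descriptors are binary strings, represented here as Python integers whose bits are the descriptor bits:

```python
def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors packed as integers:
    the number of bit positions where they differ."""
    return bin(d1 ^ d2).count("1")

def match_corner(query: int, map_descriptors: list, threshold: int):
    """Brute-force match: return the index of the map descriptor with the
    minimum Hamming distance to the query, or None if even the best
    distance exceeds the threshold (matching failed, as described above)."""
    best_idx, best_dist = None, threshold + 1
    for i, d in enumerate(map_descriptors):
        dist = hamming(query, d)
        if dist < best_dist:
            best_idx, best_dist = i, dist
    return best_idx if best_dist <= threshold else None
```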
  • the repositioning pose information calculated by PnP may still contain large errors, so a fundamental matrix can be estimated from the above 2D corners and its rotation compared with the rotation angle computed by PnP: if the angular error is greater than a threshold, the pose calculated by PnP is considered wrong and the positioning fails; if the angular error is below the threshold, the repositioning pose information is returned.
  • the above-mentioned process of finally obtaining corner-based positioning based on the front-view picture can also be generally referred to as “image processing”.
  • the above-mentioned positioning method based on corner points has the advantages of high precision and accurate positioning, so the positioning method based on corner points is selected in the initial positioning.
  • the process of acquiring the repositioning pose information in the above step S110 further includes:
  • Step 230 in the case of performing non-initial positioning, collect a look-around picture
  • Step 240 Extract the semantic features in the look-around picture, obtain semantic-based pose information by matching with the semantic map, and use the semantic-based pose information as the relocation pose information.
  • a 360° look-around picture is collected, and semantic features are extracted from it; the semantic features may include at least one of lane lines, road edges, and parking space points. If the extracted semantic feature is a parking space point, its position is described in the current Cartesian coordinate system; if the extracted semantic feature is a lane line or a road edge, it is described by distance and angle in polar coordinates.
  • the same filter equation can be used after converting the parking-spot and lane-line (or road-edge) observations; the gain is computed in the standard Kalman form K = P'n+1·Hᵀ·(H·P'n+1·Hᵀ + R)⁻¹, where:
  • P'n+1 is the covariance of the vehicle state at time n+1;
  • R is the observation covariance;
  • H is the corresponding observation Jacobian matrix.
  • the pose information based on semantic localization is obtained, as shown in FIG. 2 , the above process can also be summarized as “image processing”.
  • Because the corner-based positioning method requires relatively rich scene textures (otherwise positioning fails), and its corner-map matching occupies substantial resources and is computationally time-consuming, while the semantic-based positioning method can compensate for these shortcomings to some extent, the semantic-based method is preferred when performing non-initial positioning: it saves computing resources and obtains the repositioning pose information faster.
  • the method further includes:
  • Step 250 Determine whether the look-around picture is valid
  • Step 260 in the case of judging that the surround view picture is invalid, collect the front view picture
  • Step 270 Based on the corners in the front-view picture, by matching with the corner map, obtain corner-based pose information, and use the corner-based pose information as the repositioning pose information.
  • Judging that the look-around picture is invalid may specifically include: checking whether semantic features can be extracted from the look-around image, and judging the picture invalid if they cannot be extracted; and checking whether the extracted semantic features can be matched with the semantic map, and judging the picture invalid if they cannot be matched.
  • the front-view picture is collected, and based on the corners in the front-view picture, the corner-based pose information is obtained by matching with the corner map, and the corner-based pose information is used as the Relocate pose information.
  • the specific steps of obtaining the pose information based on the corner points based on the corner points in the front-view picture are the same as the above step 220, and are not repeated here.
  • Semantic localization relies on recognizing at least one of semantic features such as lane lines, road edges, and parking space points. If these features happen to be absent from the environment, or the recognized features cannot be matched with the semantic map, semantic-based localization cannot be performed; in that case, corner-based positioning is used instead, ensuring that continuous positioning is available during driving or moving and that relatively stable repositioning pose information can be obtained.
  • step S110 further includes:
  • Step 510 obtain the current plane position
  • Step 520 According to the plane position, obtain a corner map and a semantic map within a limited radius with the plane position as the center.
  • the approximate plane position can be obtained through a positioning instrument such as GPS, or by manually clicking on an electronic map; alternatively, after initial positioning has been performed, the plane position can be obtained from the repositioned pose information of the initial positioning.
  • a corner map and a semantic map within a limited radius centered on the plane position are obtained, and the limited radius can be set manually. Whether positioning is corner-based or semantic-based, the corresponding corner map or semantic map must be loaded; loading all map data at the start would occupy storage space and waste loading time. Downloading only the map within a certain range around the approximate location therefore saves time and reduces resource occupation.
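The radius-limited map loading can be sketched as a simple distance filter. The `(x, y, payload)` map-point structure below is hypothetical, since the patent does not fix a map format; the payload could be a corner descriptor or a semantic feature:

```python
import math

def load_local_map(map_points, center, radius):
    """Keep only the map elements within `radius` of the approximate plane
    position, instead of loading the whole corner/semantic map.

    map_points: iterable of (x, y, payload) tuples (illustrative format).
    """
    cx, cy = center
    return [p for p in map_points
            if math.hypot(p[0] - cx, p[1] - cy) <= radius]
```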
  • the fusion positioning method may further include:
  • the predicted pose information queue is generated based on the predicted pose information at the multiple different times.
  • the motion model can be used to calculate the predicted pose information at different times.
  • the motion model takes the pose (including position coordinates and direction angle) at time t1, the calculation parameters, and the motion time t', and calculates the vehicle pose (including position coordinates and direction angle) at time t1 + t'. As shown in Figure 2, the motion model can be a vehicle motion model, and the calculation parameters can include a wheel pulse message and a gear position message.
  • Based on the wheel pulse message, the gear position message, the pose information at the last moment, and the motion time since the last moment, the real-time predicted pose information can be obtained.
  • the predicted pose information may be calculated every fixed time, and a predicted information queue may be generated based on the predicted pose information at multiple different times.
  • some temporally earlier predicted pose information may be discarded based on its predicted positioning time.
  • Positioning based on the motion model is commonly used and computationally simple, but its accuracy is not high; taking it as the basis and fusing it with the repositioning pose information yields a more accurate positioning pose. Storing the motion-model predictions in the form of a queue ensures that, during fusion, at least one piece of predicted pose information closest in time to the repositioning pose information can be selected for fusion, so as to obtain fused positioning pose information with higher accuracy.
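A fixed-capacity queue that discards the temporally earliest predictions, as described above, can be sketched with a `deque`. The class and field names are illustrative, not from the patent:

```python
from collections import deque

class PredictedPoseQueue:
    """Queue of (time, pose) predictions generated at a fixed interval.
    When capacity is reached, the temporally earliest entries are
    discarded automatically (deque drops the oldest on overflow)."""

    def __init__(self, maxlen=100):
        self.q = deque(maxlen=maxlen)

    def push(self, t, pose):
        """Append a new prediction; predictions arrive in time order."""
        self.q.append((t, pose))

    def latest(self):
        """Most recently added prediction (candidate target pose)."""
        return self.q[-1]
```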
  • the fusion positioning method further includes:
  • According to the relocation time of the relocation pose information, obtain from the predicted pose information queue at least one piece of predicted pose information before the relocation time as the first predicted pose information, and/or at least one piece after the relocation time as the second predicted pose information.
  • Specifically, the repositioning pose information includes the repositioning time, that is, the acquisition time of the repositioning pose information.
  • For example, if the relocation time in the relocation pose information is AM 8:00, find in the predicted pose information queue all the predicted pose information before AM 8:00, and all the predicted pose information after AM 8:00.
  • When selecting the first predicted pose information, the piece closest in time is chosen; for example, if there are multiple pieces of predicted pose information before AM 8:00, at AM 7:40, AM 7:50, and AM 7:58, the predicted pose information at AM 7:58 is used as the first predicted pose information.
  • Likewise, when selecting the second predicted pose information, the piece closest in time is chosen; for example, if there are multiple pieces after AM 8:00, at AM 8:40, AM 8:50, and AM 8:58, the predicted pose information at AM 8:40 is used as the second predicted pose information.
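Since the queue's predicted positioning times are sorted, selecting the closest neighbors on either side of the relocation time can be sketched with a binary search. This is an illustrative helper, not the patent's code:

```python
import bisect

def neighbors(times, t_reloc):
    """Given a sorted list of predicted positioning times, return the
    closest time at or before the relocation time (first predicted pose)
    and the closest time after it (second predicted pose); either may be
    None at the queue boundaries."""
    i = bisect.bisect_right(times, t_reloc)
    before = times[i - 1] if i > 0 else None
    after = times[i] if i < len(times) else None
    return before, after

# Times encoded as HHMM for the AM 7:58 / AM 8:40 example from the text.
times = [740, 750, 758, 840, 850, 858]
```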
  • Obtaining the fused pose information can be divided into three cases.
  • In the first case, the relocation time lies between the predicted positioning times of the queue, so the first predicted pose information t1 before the relocation time can be determined.
  • The second predicted pose information t2 after the relocation time can also be determined. The pose increment between t1 and t2 is Δp(Δx, Δy, Δθ), and the time differences among the relocation pose ti, t1, and t2 are known, so linear interpolation between t1 and t2 is used to obtain the recursive value of the pose at the relocation time ti.
  • the covariance is updated following the standard EKF prediction form, P' = fxk·P·fxkᵀ + pk.
  • fxk can be calculated according to the function CaculateFxk(dxR, PoseTheta, dTheta), and pk can be obtained according to the function CaculatePk(vehicle_RR, vehicle_RL, PoseTheta, dTheta).
  • EKF fusion is performed based on the recursive value of the above pose, the repositioned pose information t i and the updated covariance, and the fused pose information is obtained.
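The interpolation step of the first case can be sketched as follows. This is an illustrative linear interpolation of planar poses under the assumption that t1 ≤ ti ≤ t2; heading wrap-around is ignored for brevity:

```python
def interpolate_pose(pose1, pose2, t1, t2, t_i):
    """Linearly interpolate between the predicted poses at times t1 and t2
    to the relocation time t_i, giving the recursive pose value that is
    then EKF-fused with the relocation pose. Poses are (x, y, theta)."""
    alpha = (t_i - t1) / (t2 - t1)  # fraction of the pose increment to apply
    return tuple(a + alpha * (b - a) for a, b in zip(pose1, pose2))
```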
  • In the second case, the relocation pose ti is too early, i.e., before the predicted positioning times of all the predicted pose information in the queue, as shown in Figure 2 and Figure 6(b). In this case, the predicted pose information t2 after the repositioning pose is obtained, the repositioning pose information is discarded directly, and t2 is used as the fused pose information.
  • t 2 is also the earliest piece of predicted pose information in the predicted pose information queue.
  • In the third case, only a first predicted pose information t1 exists, as shown in Figure 2 and Figure 6(c). If the difference between the relocation time and the predicted positioning time of t1 is greater than a given threshold, the repositioning pose information ti is discarded directly and t1 is used as the fused pose information. If the time difference is less than or equal to the threshold, a piece of predicted pose information t' whose time is before t1 is selected from the queue, and, combining t1 and t', linear interpolation is used to obtain the recursive value of the pose at the relocation time.
  • the covariance is updated.
  • For the specific steps of calculating the recursive value and updating the covariance, refer to the first case above.
  • The repositioning pose information ti and the updated covariance are then EKF-fused to obtain the fused pose information.
  • updating the predicted pose information queue further includes: updating, in the predicted pose information queue, the other predicted pose information whose time is after the time corresponding to the fused pose information.
  • The updating step uses the motion model, with the wheel pulse message and the gear position message, to obtain the updated predicted pose information queue. For example, after obtaining the fused pose information, find in the queue all the predicted pose information whose predicted positioning time is after the time corresponding to the fused pose information: after obtaining fused pose information at AM 8:40, insert it into the predicted pose queue, and then use the motion model to recalculate the predicted pose information at AM 8:50 and AM 8:58 in the queue.
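The queue update described above (re-predicting every entry after the fused time from the fused pose) can be sketched as follows. The `predict_fn(pose, dt)` signature is a hypothetical stand-in for the wheel-pulse/gear motion model:

```python
def update_queue(queue, fused_time, fused_pose, predict_fn):
    """Insert the fused pose and re-run the motion model over every
    prediction whose time is after the fused time, as in the AM 8:40 /
    AM 8:50 / AM 8:58 example above.

    queue: time-sorted list of (time, pose) pairs.
    predict_fn(pose, dt): propagates a pose forward by dt (illustrative).
    """
    kept = [(t, p) for t, p in queue if t <= fused_time]
    later_times = [t for t, _ in queue if t > fused_time]
    kept.append((fused_time, fused_pose))
    pose, t_prev = fused_pose, fused_time
    for t in later_times:
        pose = predict_fn(pose, t - t_prev)  # recompute from the fused pose
        kept.append((t, pose))
        t_prev = t
    return kept
```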
  • the map point cloud can also be published according to the fused pose information.
  • step S140 further includes:
  • the target positioning pose information is obtained. Specifically, as shown in FIG. 2, based on the updated predicted pose information queue, the most recently added piece of predicted pose information is found and published as the target positioning pose.
  • the latest predicted pose information may also be stored together with the corresponding process noise, Jacobian matrix, and the like.
  • FIG. 7 shows a structural block diagram of a fusion positioning apparatus 700 according to an embodiment of the present application.
  • the apparatus may include:
  • a relocation module 710 configured to collect pictures related to the surrounding environment, and obtain relocation pose information based on the pictures related to the surrounding environment;
  • a fusion module 720 configured to fuse at least one piece of predicted pose information in the predicted pose information queue based on the repositioning pose information to obtain fused pose information, wherein the predicted pose information queue contains multiple pieces of predicted pose information;
  • an update module 730 configured to update the predicted pose information queue based on the fused pose information
  • the pose determination module 740 is configured to obtain target positioning pose information based on the updated predicted pose information queue.
  • the relocation module 710 includes:
  • the first front-view picture acquisition unit 711 is configured to acquire a front-view picture under the condition of initial positioning
  • the first corner locating unit 712 is configured to obtain corner-based pose information by matching with the corner map based on the corners in the front-view picture, and use the corner-based pose information as the relocation pose information.
  • the relocation module 710 further includes:
  • a look-around picture acquisition unit 713 configured to collect a look-around picture in the case of performing non-initial positioning
  • the semantic positioning unit 714 is configured to extract the semantic features in the look-around picture, obtain semantic-based pose information by matching with the semantic map, and use the semantic-based pose information as the relocation pose information.
  • the relocation module 710 further includes:
  • Judging unit 715 for judging whether the look-around picture is valid
  • the second front-view picture acquisition unit 716 is configured to acquire a front-view picture when it is judged that the surround-view picture is invalid;
  • the second corner locating unit 717 is configured to obtain corner-based pose information by matching with the corner map based on the corners in the front-view picture, and use the corner-based pose information as the relocation pose information.
  • the fusion positioning apparatus 700 further includes:
  • a position obtaining module 750 used for obtaining the current plane position
  • the map loading module 760 is configured to obtain, according to the plane position, a corner point map and a semantic map within a limited radius with the plane position as the center of the circle.
  • the fusion positioning apparatus 700 further includes:
  • the predicted pose information queue is generated based on the predicted pose information at the multiple different times.
  • the fusion module 720 further includes:
  • the predicted pose information selection unit 721 is configured to obtain at least one predicted pose information before the relocation time from the predicted pose information queue as the first predicted pose information according to the relocation time of the relocated pose information, And/or at least one predicted pose information after the relocation time is used as the second predicted pose information;
  • the predicted pose information fusion unit 722 is configured to fuse the first predicted pose information and/or the second predicted pose information based on the repositioned pose information to obtain fused pose information.
  • the update module 730 further includes:
  • the updating queue unit 731 is configured to update, in the predicted pose information queue, the other predicted pose information whose time is after the time corresponding to the fused pose information, to obtain an updated predicted pose information queue.
  • the pose determination module 740 includes:
  • the obtaining unit 741 is configured to obtain the target positioning pose information based on the target predicted pose information.
  • Although the fusion positioning method and device are described above, those skilled in the art can understand that the present application is not limited thereto.
  • the user can flexibly set the fusion positioning method and device according to personal preferences and/or actual application scenarios.
  • the repositioning method is not limited to corner-based or semantic-based positioning, and the predicted pose information is not limited to a motion model; other positioning models can be adopted, as long as high-precision and robust positioning can finally be obtained.
  • the fusion positioning method and device can combine multiple positioning methods to obtain a more accurate and stable positioning result.
  • FIG. 9 shows a structural block diagram of an electronic device according to an embodiment of the present application.
  • the electronic device includes: a memory 910 and a processor 920 , and instructions that can be executed on the processor 920 are stored in the memory 910 .
  • the processor 920 executes the instruction, the fusion positioning method in the foregoing embodiment is implemented.
  • the number of the memory 910 and the processor 920 may be one or more.
  • the electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular phones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions are by way of example only, and are not intended to limit implementations of the application described and/or claimed herein.
  • the electronic device may further include a communication interface 930 for communicating with external devices and performing interactive data transmission.
  • the various devices are interconnected using different buses and can be mounted on a common motherboard or otherwise as desired.
  • the processor 920 may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface.
  • multiple processors and/or multiple buses may be used together with multiple memories, if desired.
  • multiple electronic devices may be connected, each providing some of the necessary operations (eg, as a server array, a group of blade servers, or a multiprocessor system).
  • the bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of presentation, only one thick line is shown in FIG. 9, but this does not mean that there is only one bus or one type of bus.
  • if the memory 910, the processor 920 and the communication interface 930 are integrated on one chip, the memory 910, the processor 920 and the communication interface 930 can communicate with each other through an internal interface.
  • processor may be a central processing unit (Central Processing Unit, CPU), or other general-purpose processors, digital signal processors (Digital Signal Processing, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general purpose processor may be a microprocessor or any conventional processor or the like. It should be noted that the processor may be a processor supporting an advanced RISC machine (ARM) architecture.
  • Embodiments of the present application provide a computer-readable storage medium (such as the above-mentioned memory 910) that stores computer instructions; when the instructions are executed by a processor, the methods provided in the embodiments of the present application are implemented.
  • the memory 910 may include a stored program area and a stored data area, wherein the stored program area may store an operating system and an application program required by at least one function, and the stored data area may store data created according to the use of the electronic device, etc.
  • memory 910 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device.
  • memory 910 may optionally include memory located remotely from processor 920, and these remote memories may be connected to the fused location electronic device via a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the terms "first" and "second" are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implying the number of indicated technical features. Thus, a feature delimited by "first" or "second" may expressly or implicitly include at least one such feature.
  • "plurality" means two or more, unless otherwise expressly and specifically defined.
  • any description of a process or method in a flowchart or otherwise described herein may be understood to represent a module, fragment or section of code comprising executable instructions of one or more steps for implementing a specified logical function or process.
  • the scope of the preferred embodiments of the present application includes alternative implementations in which the functions may be performed out of the order shown or discussed, including performing the functions substantially concurrently or in the reverse order depending upon the functions involved.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules. If the integrated modules are implemented in the form of software function modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
  • the storage medium may be a read-only memory, a magnetic disk or an optical disk, and the like.

Abstract

A fusion positioning method and apparatus. The fusion positioning method comprises: collecting a picture related to a surrounding environment, and acquiring repositioning posture information on the basis of the picture related to the surrounding environment (S110); fusing at least one piece of predicted posture information in a predicted posture information queue on the basis of the repositioning posture information, so as to obtain fused posture information, wherein the predicted posture information queue comprises a plurality of pieces of predicted posture information (S120); updating the predicted posture information queue on the basis of the fused posture information (S130); and obtaining target positioning posture information on the basis of the updated predicted posture information queue (S140). By means of the above-mentioned method for fusing various types of positioning, positioning that is high in precision and high in robustness can be realized.

Description

A Fusion Positioning Method and Device

This application claims priority to the Chinese patent application No. 202011007686.9, titled "A Fusion Positioning Method and Device", filed with the China Patent Office on September 23, 2020, the entire contents of which are incorporated into this application by reference.

Technical Field

The present application relates to the field of positioning, and in particular, to a fusion positioning method based on semantic and corner point information.

Background

With the rapid development of autonomous driving technology, the positioning function has almost become an essential function of autonomous driving. Currently, autonomous driving systems usually use the Global Positioning System (GPS) for positioning. However, indoors and in areas with relatively dense buildings, GPS accuracy is very poor or GPS cannot work at all. Therefore, how to continuously provide stable, high-precision positioning information for vehicles or other movable machines or devices has become a problem that needs to be solved.
Summary of the Invention

The embodiments of the present application provide a fusion positioning method and device to solve the problems existing in the related art. The technical solutions are as follows:

In a first aspect, an embodiment of the present application provides a fusion positioning method, including:

collecting a picture related to the surrounding environment, and obtaining repositioning pose information based on the picture related to the surrounding environment;

fusing at least one piece of predicted pose information in a predicted pose information queue based on the repositioning pose information to obtain fused pose information, wherein the predicted pose information queue contains multiple pieces of predicted pose information;

updating the predicted pose information queue based on the fused pose information;

obtaining target positioning pose information based on the updated predicted pose information queue.
In a second aspect, an embodiment of the present application provides a fusion positioning device, including:

a repositioning module, configured to collect a picture related to the surrounding environment and obtain repositioning pose information based on the picture related to the surrounding environment;

a fusion module, configured to fuse at least one piece of predicted pose information in a predicted pose information queue based on the repositioning pose information to obtain fused pose information, wherein the predicted pose information queue contains multiple pieces of predicted pose information;

an update module, configured to update the predicted pose information queue based on the fused pose information;

a pose determination module, configured to obtain target positioning pose information based on the updated predicted pose information queue.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, so that the at least one processor can execute the above fusion positioning method.

In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing computer instructions; when the computer instructions are run on a computer, the method in any one of the implementations of the above aspects is executed.

The advantages or beneficial effects of the above technical solutions include at least the following. The present invention obtains repositioning pose information through different positioning methods based on pictures related to the surrounding environment, which ensures that repositioning pose information can be obtained in different environments and guarantees high robustness. The present invention also uses the repositioning pose information to update the predicted pose information queue and obtains the target positioning pose information from the updated queue; by fusing the environment-based repositioning pose information with the predicted positioning pose information, the resulting target positioning pose information enables high-precision positioning of a vehicle or other movable machinery and equipment.

The above summary is for illustrative purposes only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments and features described above, further aspects, embodiments and features of the present application will become apparent by reference to the drawings and the following detailed description.
Description of Drawings

In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the following briefly introduces the accompanying drawings used in the description of the embodiments or the prior art. Obviously, the accompanying drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can also be obtained from these drawings.

FIG. 1 is a schematic diagram of a fusion positioning method according to an embodiment of the present application;

FIG. 2 is a schematic flowchart of a fusion positioning method according to another embodiment of the present application;

FIG. 3 is a schematic diagram of a fusion positioning method according to another embodiment of the present application;

FIG. 4 is a schematic diagram of corner extraction according to another embodiment of the present application;

FIG. 5 is a schematic diagram of a fusion positioning method according to another embodiment of the present application;

FIG. 6 is a schematic diagram of the association between a repositioned pose and a predicted pose queue according to another embodiment of the present application;

FIG. 7 is a structural block diagram of a fusion positioning device according to an embodiment of the present application;

FIG. 8 is a structural block diagram of a repositioning module in a fusion positioning device according to another embodiment of the present application;

FIG. 9 is a block diagram of an electronic device used to implement the fusion positioning method of an embodiment of the present application;

FIG. 10 is a structural block diagram of a fusion positioning device according to another embodiment of the present application;

FIG. 11 is a structural block diagram of a fusion module in a fusion positioning device according to another embodiment of the present application.
Detailed Description

In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will realize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature and not restrictive.

The following disclosure provides many different embodiments or examples for implementing different structures of the present application. To simplify the disclosure of the present application, the components and arrangements of specific examples are described below. Of course, they are only examples and are not intended to limit the application. Furthermore, this application may repeat reference numerals and/or reference letters in different instances for the purpose of simplicity and clarity, which in itself does not indicate a relationship between the various embodiments and/or arrangements discussed. In addition, this application provides examples of various specific processes and materials, but one of ordinary skill in the art will recognize the applicability of other processes and/or the use of other materials.
FIG. 1 shows a flowchart of a fusion positioning method according to an embodiment of the present application. As shown in FIG. 1, the fusion positioning method may include:

Step S110: collecting a picture related to the surrounding environment, and obtaining repositioning pose information based on the picture related to the surrounding environment;

Step S120: fusing at least one piece of predicted pose information in a predicted pose information queue based on the repositioning pose information to obtain fused pose information, wherein the predicted pose information queue contains multiple pieces of predicted pose information;

Step S130: updating the predicted pose information queue based on the fused pose information;

Step S140: obtaining target positioning pose information based on the updated predicted pose information queue.
Optionally, the fusion positioning method can be used for vehicles, and can also be used for movable devices or equipment that need to be positioned at any time, such as robots.

In one embodiment, the picture related to the surrounding environment may be a front-view picture used for corner-based positioning, or a surround-view picture used for semantic positioning. Afterwards, the corresponding information in the picture is extracted, and based on the different pictures, different positioning methods are adopted to obtain the repositioning pose information. The repositioning pose information may specifically include repositioning position coordinates, a repositioning direction angle and a repositioning time, where the repositioning time is the acquisition time of the above picture related to the surrounding environment.
After obtaining the repositioning pose information, as shown in FIG. 2, it is "associated" with at least one piece of predicted pose information in the predicted pose information queue to obtain the fused pose information; the selected predicted pose information may also be called a "temporary object". In a specific embodiment, the predicted pose information queue contains multiple pieces of predicted pose information at different times calculated using a motion model. The motion model is a model that can calculate the pose (including position coordinates and direction angle) at time t_1 + t' from the pose (including position coordinates and direction angle) at time t_1, the calculation parameters and the motion time t'. Each piece of predicted pose information may include predicted position coordinates, a predicted direction angle and a corresponding predicted positioning time. According to the repositioning time in the repositioning pose information, at least one piece of predicted pose information close to the repositioning time is found, and the repositioning pose is fused with it to obtain the fused pose information. Then, the predicted pose information queue is updated based on the fused pose information; specifically, based on the predicted positioning time of the fused pose information, the motion model and the fused pose information are used to recalculate the predicted pose information in the queue whose time is after that predicted positioning time, thereby updating the predicted pose information queue.

Based on the updated predicted pose information queue, the target positioning pose information is obtained; for example, the most recently added piece of predicted pose information in the updated queue is found and published as the target positioning pose. In this embodiment, the repositioning pose information can be obtained based on different positioning methods, then fused with the predicted pose information to update the predicted pose information queue, and the final target positioning pose is determined based on the updated queue. The fused positioning pose information is more accurate, which ensures the high precision of the fusion positioning method; different positioning methods can match different driving environments, ensuring that repositioning pose information can be obtained no matter what the environment is, which guarantees the stability and high robustness of the fusion positioning method.
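The association, fusion and queue-update steps described above can be sketched as follows. This is a minimal illustrative Python sketch, not the claimed implementation: the `Pose` type, the fixed-weight blend and the user-supplied `motion_step` function are assumptions standing in for the filter-style fusion and the motion model of the embodiment.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    t: float    # timestamp (s)
    x: float    # position x (m)
    y: float    # position y (m)
    yaw: float  # direction angle (rad)

def fuse_with_queue(queue, relocation, motion_step, w=0.7):
    """Associate the relocation pose with the queue entry closest in time,
    blend the two, then re-propagate every later entry from the fused pose."""
    # 1. association: find the predicted pose nearest the relocation time
    i = min(range(len(queue)), key=lambda k: abs(queue[k].t - relocation.t))
    p = queue[i]
    # 2. fusion: fixed-weight blend (a filter update in a real system)
    queue[i] = Pose(p.t,
                    w * relocation.x + (1 - w) * p.x,
                    w * relocation.y + (1 - w) * p.y,
                    w * relocation.yaw + (1 - w) * p.yaw)
    # 3. update: recompute entries after the fused time with the motion model
    for k in range(i + 1, len(queue)):
        queue[k] = motion_step(queue[k - 1], queue[k].t)
    # the newest entry is published as the target positioning pose
    return queue[-1]
```

In use, `motion_step(prev, t)` would re-run the vehicle motion model from `prev` up to time `t`; the newest queue entry after the update is the target positioning pose.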
FIG. 3 is a flowchart of an implementation of obtaining the repositioning pose information in a fusion positioning method according to an embodiment of the present application. As shown in FIG. 3, in some embodiments, the process of obtaining the repositioning pose information in the above step S110 includes:

Step 210: in the case of performing initial positioning, collecting a front-view picture;

Step 220: based on the corners in the front-view picture, obtaining corner-based pose information by matching with a corner map, and using the corner-based pose information as the repositioning pose information.

Optionally, the initial positioning may be the first positioning during driving or moving, or positioning performed again after a relatively long interval.
In one embodiment, when initial positioning is determined, a front-view picture is collected and the corners in it are extracted. Optionally, the FAST method can be used for corner extraction. The FAST method mainly considers the 16 pixels on a circular window around a candidate pixel; as shown in FIG. 4, p is the center pixel and the pixels marked by white boxes are the 16 pixels to be examined. The extracted corners are described with BRIEF descriptors. The extracted corners are then matched with the corner map to obtain multiple candidate keyframes; for example, based on the BRIEF descriptors, a BoW dictionary is used to search for the 4 matching keyframes with the highest scores in the corner map (this step can also be called brute-force matching). After obtaining the candidate keyframes, the optimal keyframe is selected from them, and whether the matching succeeds is determined based on the matching corners in the optimal keyframe; for example, the map frame with the highest score above a threshold is selected, and the corners in the map are matched with the current corners using the Hamming distance; when the minimum Hamming distance of a match is less than a threshold, the match is considered successful. Finally, the repositioning pose information is obtained based on the successfully matched corner map. Specifically, because the corners included in the corner map have 3D positions, matches between the 2D corners extracted from the front-view picture and the 3D corners in the corner map can be obtained, and the PnP method is then used to solve for the repositioning pose information of the current front-view picture in the corner map coordinate system.

Optionally, since there are errors in the above matching process, the repositioning pose information solved by PnP may also have large errors. Therefore, a fundamental matrix can be established based on the above 2D corners and compared with the angle vector solved by PnP; if the angle error is greater than a threshold, the pose solved by PnP is considered wrong and the positioning fails; if the angle error is less than the threshold, the repositioning pose information is returned. As shown in FIG. 2, the above process of finally obtaining corner-based positioning from the front-view picture can also be generally referred to as "image processing". The above corner-based positioning method has the advantages of high precision and accurate positioning, so it is selected for the initial positioning.
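The Hamming-distance matching step named above can be illustrated with a small sketch. This assumes BRIEF-style binary descriptors stored as Python integers; the threshold value and function names are illustrative, and in practice libraries such as OpenCV provide the FAST detector, BRIEF descriptors and PnP solver.

```python
def hamming(a: int, b: int) -> int:
    # BRIEF descriptors are bit strings; the distance is the number of differing bits
    return bin(a ^ b).count("1")

def match_corners(query_desc, map_desc, max_dist=40):
    """For each query descriptor, find the nearest map descriptor and keep the
    pair only if the minimum Hamming distance is below the threshold."""
    matches = []
    for qi, qd in enumerate(query_desc):
        best = min(range(len(map_desc)), key=lambda mi: hamming(qd, map_desc[mi]))
        if hamming(qd, map_desc[best]) < max_dist:
            matches.append((qi, best))  # pairs a 2D corner with a 3D map corner
    return matches
```

The resulting 2D–3D pairs are what a PnP solver (for example OpenCV's `cv2.solvePnP`) consumes to produce the repositioning pose.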
As shown in FIG. 3, in some embodiments, the process of obtaining the repositioning pose information in the above step S110 further includes:

Step 230: in the case of performing non-initial positioning, collecting a surround-view picture;

Step 240: extracting the semantic features in the surround-view picture, obtaining semantic-based pose information by matching with a semantic map, and using the semantic-based pose information as the repositioning pose information.
In a specific embodiment, when initial positioning has already been performed, non-initial positioning is performed: a 360° surround-view picture is collected, and semantic features are then extracted from it. The semantic features may include at least one of lane lines, curbs and parking spot points. If the extracted semantic feature is a parking spot point, its position is described in the current Cartesian coordinate system; if the extracted semantic feature is a lane line or a curb, it is described by a distance and an angle in polar coordinates. For example, a straight line is represented by x·cosθ + y·sinθ − r = 0, where θ is the angle of the normal vector and r is the distance from the line to the origin; the polar-coordinate representation is ρ·cos(θ − α) = r, where (ρ, θ) are the polar coordinates of a point and α is the angle of the normal vector. After conversion, the same filter equation can be used for parking spot points and lane lines (or curbs), where the gain is calculated as follows:

k = P′_{n+1} H^T (H P′_{n+1} H^T + R)^{−1}

where P′_{n+1} is the covariance of the vehicle state at time n+1, R is the observation covariance, and H is the corresponding observation Jacobian matrix. Specifically, the observation Jacobian matrices for parking spot points and lane lines (or curbs) are given by the formulas shown in the images PCTCN2021084792-appb-000001 and PCTCN2021084792-appb-000004. In the parking spot observation Jacobian matrix, x_n^w denotes the x-coordinate at time n in the map coordinate system (world coordinate system), and likewise y_n^w denotes the y-coordinate at time n in the map coordinate system (world coordinate system).

The description information obtained based on the above formulas is then compared with the semantic map to obtain the semantic-based pose information; as shown in FIG. 2, this process can also be generally referred to as "image processing". Since the corner-based positioning method requires relatively rich scene textures (otherwise positioning may fail), occupies considerable resources during corner map matching and is computationally time-consuming, while the semantic-based positioning method can make up for these shortcomings to a certain extent, the semantic-based positioning method is preferred for non-initial positioning, which saves computing resources and yields the repositioning pose information faster.
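For the scalar case the gain formula above reduces to a one-line expression, which makes the role of the covariances easy to see. This is a simplified illustrative sketch (scalar state, scalar observation), not the matrix filter of the embodiment:

```python
def kalman_gain(P, H, R):
    """Scalar form of k = P'_{n+1} H^T (H P'_{n+1} H^T + R)^(-1):
    a large state covariance P or small observation covariance R gives a large gain."""
    return P * H / (H * P * H + R)

def filter_update(x, P, z, H, R):
    """Correct the predicted state x with a semantic observation z."""
    k = kalman_gain(P, H, R)
    x_new = x + k * (z - H * x)   # pull the state toward the observation
    P_new = (1.0 - k * H) * P     # the update shrinks the covariance
    return x_new, P_new
```

With P = 4, H = 1 and R = 1 the gain is 0.8, so the corrected state moves 80% of the way toward the observation.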
In some embodiments, after the above step S240, the method further includes:

Step 250: determining whether the surround-view picture is valid;

Step 260: in the case of determining that the surround-view picture is invalid, collecting a front-view picture;

Step 270: based on the corners in the front-view picture, obtaining corner-based pose information by matching with the corner map, and using the corner-based pose information as the repositioning pose information.

In a specific embodiment, determining whether the surround-view picture is valid means that whenever the semantic-based pose information ultimately cannot be obtained, the surround-view picture is judged invalid. Specifically, this may include: whether semantic features can be extracted from the surround-view picture (if not, the picture is judged invalid), and whether the extracted semantic features can be matched with the semantic map (if not, the picture is judged invalid). After judging that the surround-view picture is invalid, a front-view picture is collected; based on the corners in the front-view picture, corner-based pose information is obtained by matching with the corner map and used as the repositioning pose information. The specific steps of obtaining the corner-based pose information from the corners in the front-view picture are the same as the above step 220 and are not repeated here. Semantic positioning relies on recognizing semantic features such as at least one of lane lines, curbs and parking spot points; if the environment happens to lack these features, or the recognized features cannot be matched with the semantic map, semantic-based positioning is impossible. In that case, corner-based positioning is used instead, which ensures continuous positioning during driving or moving and yields relatively stable repositioning pose information.
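The validity check and fallback of steps 250 to 270 amount to a simple selection rule, sketched below. The function names are illustrative assumptions; `semantic_loc` is assumed to return `None` when no semantic features can be extracted or matched:

```python
def relocalize(surround_img, front_img, semantic_loc, corner_loc):
    """Prefer the cheaper semantic localization; fall back to corner-based
    localization when the surround-view picture is invalid."""
    pose = semantic_loc(surround_img)  # None means the picture is invalid
    if pose is not None:
        return pose
    return corner_loc(front_img)       # corner-based fallback
```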
在一些实施方式中,如图5所示,在上述步骤S110之前,还包括:In some embodiments, as shown in FIG. 5, before the above step S110, it further includes:
步骤510:获取当前的平面位置;Step 510: obtain the current plane position;
步骤520:根据该平面位置,获得以该平面位置为圆心,限定半径范围内的角点地图和语义地图。Step 520: According to the plane position, obtain a corner map and a semantic map within a limited radius with the plane position as the center.
在一种具体的实施方式中，获取当前的平面位置，可以通过GPS等定位仪器得到大致的平面位置，也可以通过人为在电子地图上点选的方式得到大致的位置，或者，在已经进行过初始定位的情况下，可以基于初始定位的重定位位姿信息得到平面位置。根据平面位置，获得以该平面位置为圆心，限定半径范围内的角点地图和语义地图，其中限定半径可以人为设定。因为不论是进行基于角点的定位还是基于语义的定位之前，都需要载入相应的角点地图或语义地图，如果一开始就载入全部的地图数据，既占存储空间又浪费载入时间，因此可基于大概的位置，下载一定范围内的地图，既节省时间，又降低资源占用。In a specific embodiment, the current plane position can be obtained approximately through a positioning instrument such as GPS, by manually clicking on an electronic map, or, if initial positioning has already been performed, from the repositioning pose information of that initial positioning. Based on the plane position, a corner map and a semantic map within a limited radius centered on it are obtained, where the limited radius can be set manually. Before either corner-based or semantic-based positioning, the corresponding corner map or semantic map must be loaded; loading all map data from the start both occupies storage space and wastes loading time, so downloading only the map within a certain range of the approximate position saves time and reduces resource usage.
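The radius-limited map loading can be sketched as a simple distance filter. A minimal illustration under the assumption that map elements carry plane coordinates as their first two fields; the function name is hypothetical.

```python
import math

def load_local_map(map_points, center, radius):
    """Keep only map elements within `radius` of the rough current
    plane position `center` = (x, y), instead of loading the full
    corner/semantic map."""
    cx, cy = center
    return [p for p in map_points
            if math.hypot(p[0] - cx, p[1] - cy) <= radius]
```

The same filter would be applied to both the corner map and the semantic map before matching begins.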
在一些实施方式中,该融合定位方法还可以包括:In some embodiments, the fusion positioning method may further include:
基于运动模型计算得到多个不同时刻的预测位姿信息;Calculated based on the motion model to obtain multiple predicted pose information at different times;
基于该多个不同时刻的预测位姿信息生成该预测位姿信息队列。The predicted pose information queue is generated based on the predicted pose information at the multiple different times.
例如，可以利用运动模型计算得到多个不同时刻的预测位姿信息。具体地，该运动模型是一个可以利用t1时刻的位姿（包括位置坐标和方向角）、计算参数与运动时间t'，计算出t1+t'时刻车辆位姿（包括位置坐标和方向角）的模型。如图2所示，该运动模型也可以为车辆运动模型，其中的计算参数可包括轮脉冲消息和档位消息：基于轮脉冲消息、档位消息、上一时刻的位姿信息以及从上一时刻起的运动时间，可以得到即时的预测位姿信息。可选地，可以每隔固定时间计算一次预测位姿信息，并基于多个不同时刻的预测位姿信息生成预测位姿信息队列。可选地，如果队列中包括的预测位姿信息太多，可以基于其预测定位时间抛弃一些时间上较早的预测位姿信息。基于运动模型的定位方式较为常用，计算也较为简单，但是定位精度不高；将其作为基础，与重定位位姿信息进行融合，可以得到精度更高的定位位姿。将基于运动模型的预测位姿信息以队列的形式储存，可以保证在融合时，选择与重定位位姿信息对应时刻最接近的至少一个预测位姿信息进行融合，得到精度更高的融合定位位姿信息。For example, the motion model can be used to calculate predicted pose information at multiple different times. Specifically, the motion model takes the pose at time t1 (position coordinates and heading angle), the calculation parameters, and the motion time t', and computes the vehicle pose (position coordinates and heading angle) at time t1+t'. As shown in Figure 2, the motion model may be a vehicle motion model whose calculation parameters include wheel-pulse messages and gear messages: from the wheel-pulse message, the gear message, the pose at the previous moment, and the motion time elapsed since then, the current predicted pose can be obtained. Optionally, the predicted pose may be computed at a fixed interval, and a predicted pose information queue generated from the predictions at multiple times. Optionally, if the queue contains too many entries, some temporally earlier predictions may be discarded based on their predicted positioning times. Motion-model positioning is common and computationally simple, but its accuracy is limited; using it as the base and fusing it with the repositioning pose information yields a more accurate pose. Storing the motion-model predictions as a queue ensures that, during fusion, at least one prediction closest in time to the repositioning pose information can be selected, producing more accurate fused positioning pose information.
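The prediction step and the bounded queue described above can be sketched as follows. This is an illustrative dead-reckoning model, not the application's exact formulas: `wheel_distance`, `heading_delta`, and `gear_sign` are hypothetical stand-ins for quantities derived from the wheel-pulse and gear messages.

```python
import math
from collections import deque

def predict_pose(pose, wheel_distance, heading_delta, gear_sign):
    """One motion-model step: advance (x, y, theta) by the odometry
    distance and heading change since the previous moment."""
    x, y, theta = pose
    d = gear_sign * wheel_distance          # reverse gear gives a negative distance
    mid = theta + heading_delta / 2.0       # integrate along the arc midpoint heading
    return (x + d * math.cos(mid), y + d * math.sin(mid), theta + heading_delta)

class PredictionQueue:
    """Bounded queue of (timestamp, pose); when it grows past `maxlen`,
    the temporally earliest predictions are discarded automatically."""
    def __init__(self, maxlen=100):
        self.entries = deque(maxlen=maxlen)

    def push(self, t, pose):
        self.entries.append((t, pose))
```

Pushing at a fixed interval produces the time-ordered predicted pose information queue that the fusion step consumes.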
在一些实施方式中,该融合定位方法还包括:In some embodiments, the fusion positioning method further includes:
根据该重定位位姿信息的重定位时间，从该预测位姿信息队列获取该重定位时间之前的至少一个预测位姿信息作为第一预测位姿信息，和/或该重定位时间之后的至少一个预测位姿信息作为第二预测位姿信息。具体地，重定位位姿信息中包括重定位时间，即重定位位姿信息的获取时刻；基于此，在预测位姿信息队列中确定在此重定位时间之前的预测位姿信息，并确定在此重定位时间之后的预测位姿信息。例如，重定位时间是AM 8:00，则在预测位姿信息队列中找到所有时间在AM 8:00之前的预测位姿信息，同时找到所有时间在AM 8:00之后的预测位姿信息。进一步地，在重定位时间之前的预测位姿信息之中选择一个作为第一预测位姿信息，可选地，选择时间最接近的一个，如AM 8:00之前有时间分别为AM 7:40、AM 7:50、AM 7:58的多个预测位姿信息，即将AM 7:58的预测位姿信息作为第一预测位姿信息；同理，在重定位时间之后的预测位姿信息之中选取一个作为第二预测位姿信息，可选地，也选择时间最接近的一个，如AM 8:00之后有时间分别为AM 8:40、AM 8:50、AM 8:58的多个预测位姿信息，即将AM 8:40的预测位姿信息作为第二预测位姿信息。According to the relocation time of the repositioning pose information, at least one prediction before the relocation time is taken from the predicted pose information queue as the first predicted pose information, and/or at least one prediction after the relocation time as the second predicted pose information. Specifically, the repositioning pose information includes the relocation time, i.e. the moment at which it was acquired; based on this, the predictions before and after the relocation time are identified in the queue. For example, if the relocation time is AM 8:00, all predictions earlier than AM 8:00 and all predictions later than AM 8:00 are found in the queue. Further, one of the predictions before the relocation time is selected as the first predicted pose information, optionally the one closest in time: if there are predictions at AM 7:40, AM 7:50, and AM 7:58, the AM 7:58 prediction is taken as the first predicted pose information. Likewise, one of the predictions after the relocation time is selected as the second predicted pose information, again optionally the closest in time: if there are predictions at AM 8:40, AM 8:50, and AM 8:58, the AM 8:40 prediction is taken as the second predicted pose information.
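The nearest-neighbor selection around the relocation time can be sketched with a binary search over the time-sorted queue. A minimal illustration; the function name is hypothetical.

```python
import bisect

def neighbors(queue, t_reloc):
    """Given a time-sorted list of (timestamp, pose), return the closest
    prediction at-or-before the relocation time (the first predicted
    pose) and the closest one after it (the second predicted pose).
    Either may be None if the relocation time falls outside the queue."""
    times = [t for t, _ in queue]
    i = bisect.bisect_right(times, t_reloc)
    before = queue[i - 1] if i > 0 else None
    after = queue[i] if i < len(queue) else None
    return before, after
```

With the AM 8:00 example above, `neighbors` would return the AM 7:58 entry as the first predicted pose and the AM 8:40 entry as the second.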
在一种具体的实施方式中，基于该重定位位姿信息与该第一预测位姿信息和/或第二预测位姿信息进行关联和融合，得到融合后的位姿信息可以分为三种情况。第一种情况，如图2和图6(a)所示，重定位时间在多个预测位姿信息的预测定位时间之间，即可以确定重定位时间之前的第一预测位姿信息t1和重定位时间之后的第二预测位姿信息t2。设t1和t2的位姿增量为δp(δx, δy, δθ)，重定位位姿信息ti与t2的时间差为δti，t1和t2的时间差为δt2，则t2时刻到ti时刻的位姿增量为δp·δti/δt2，利用线性插值的方法，即得到t2时刻位姿的递推值；更新协方差，其中，协方差的更新方程为：In a specific embodiment, associating and fusing the repositioning pose information with the first and/or second predicted pose information to obtain the fused pose information can be divided into three cases. In the first case, as shown in Figure 2 and Figure 6(a), the relocation time falls between the predicted positioning times of multiple predictions, so both a first predicted pose t1 before the relocation time and a second predicted pose t2 after it can be determined. Let the pose increment between t1 and t2 be δp(δx, δy, δθ), the time difference between the repositioning pose information ti and t2 be δti, and the time difference between t1 and t2 be δt2; then the pose increment from time t2 to time ti is δp·δti/δt2, and by linear interpolation the recursive value of the pose at time t2 is obtained. The covariance is then updated, where the covariance update equation is:
Figure PCTCN2021084792-appb-000005
其中fxk可以根据函数CaculateFxk(dxR, PoseTheta, dTheta)计算得到，pk可以根据函数CaculatePk(vehicle_RR, vehicle_RL, PoseTheta, dTheta)得到。最后基于上述位姿的递推值、重定位位姿信息ti和更新的协方差进行EKF融合，得到融合后的位姿信息。Here fxk can be calculated from the function CaculateFxk(dxR, PoseTheta, dTheta), and pk from the function CaculatePk(vehicle_RR, vehicle_RL, PoseTheta, dTheta). Finally, EKF fusion is performed based on the recursive pose value above, the repositioning pose information ti, and the updated covariance, obtaining the fused pose information.
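The linear interpolation of case one can be sketched as below. A minimal illustration only: it interpolates each pose component by the time ratio between the two bracketing predictions, ignores angle wrap-around, and omits the covariance update, which the application delegates to CaculateFxk/CaculatePk.

```python
def interpolate_pose(entry1, entry2, t_i):
    """Linearly interpolate the pose at the relocation time t_i from the
    bracketing predictions entry1 = (t1, pose1) and entry2 = (t2, pose2),
    scaling the pose increment delta_p by the elapsed-time ratio."""
    (t1, p1), (t2, p2) = entry1, entry2
    dp = [b - a for a, b in zip(p1, p2)]   # pose increment (dx, dy, dtheta)
    ratio = (t_i - t1) / (t2 - t1)
    return tuple(a + d * ratio for a, d in zip(p1, dp))
```

The interpolated pose then serves as the recursive value that is fused with the repositioning pose by the EKF update.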
第二种情况，是重定位位姿信息ti时间过于早，在预测位姿信息队列中所有预测位姿信息对应的预测定位时间之前，如图2和图6(b)所示。此时，获取重定位位姿信息之后的预测位姿信息t2，直接丢弃重定位位姿信息，将t2作为融合后的位姿信息。可选地，t2也是预测位姿信息队列中最早的一个预测位姿信息。In the second case, the repositioning pose information ti is too early, i.e. earlier than the predicted positioning times of all predictions in the queue, as shown in Figure 2 and Figure 6(b). In this case, the prediction t2 after the repositioning pose information is obtained, the repositioning pose information is simply discarded, and t2 is used as the fused pose information. Optionally, t2 is also the earliest prediction in the predicted pose information queue.
最后一种情况，是重定位位姿信息ti时间过于晚，在预测位姿信息队列中所有预测位姿信息对应的预测定位时间之后，即，仅可以确定时间在重定位时间之前的第一预测位姿信息t1，如图2和图6(c)所示。此时，如果重定位时间与第一预测位姿信息t1的预测定位时间的差值大于给定阈值，则直接丢弃重定位位姿信息ti，将第一预测位姿信息t1作为融合后的位姿信息；如果重定位时间与第一预测位姿信息t1的时间差小于等于给定阈值，则从预测位姿信息队列中再选出时间在第一预测位姿信息之前的一个预测位姿信息t'，然后结合第一预测位姿信息t1和t'，利用线性插值的方法，得到t1时刻的位姿的递推值。得到递推值之后，更新协方差，具体计算递推值和更新协方差的步骤参考上述第一种情况，最后基于上述位姿的递推值、重定位位姿信息ti和更新的协方差进行EKF融合，得到融合后的位姿信息。In the last case, the repositioning pose information ti is too late, i.e. later than the predicted positioning times of all predictions in the queue, so only a first predicted pose t1 before the relocation time can be determined, as shown in Figure 2 and Figure 6(c). In this case, if the difference between the relocation time and the predicted positioning time of t1 exceeds a given threshold, the repositioning pose information ti is simply discarded and t1 is used as the fused pose information. If the time difference is less than or equal to the threshold, another prediction t' whose time precedes t1 is selected from the queue, and linear interpolation over t1 and t' yields the recursive value of the pose at time t1. After the recursive value is obtained, the covariance is updated; the specific steps for computing the recursive value and updating the covariance follow the first case above. Finally, EKF fusion is performed based on the recursive pose value, the repositioning pose information ti, and the updated covariance to obtain the fused pose information.
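The three cases can be sketched as one dispatch function. This is an illustrative reading only: `interp` and `ekf_fuse` are placeholder callables for the interpolation and EKF steps, and in the third case this sketch extends the line through the two most recent predictions to the relocation time, which is one interpretation of the interpolation over t1 and t' described above.

```python
def fuse(queue, t_reloc, reloc_pose, max_gap, interp, ekf_fuse):
    """Dispatch over the three fusion cases for a time-sorted queue of
    (timestamp, pose) entries and a relocation measurement."""
    before = [e for e in queue if e[0] <= t_reloc]
    after = [e for e in queue if e[0] > t_reloc]
    if before and after:                  # case 1: relocation time is bracketed
        recursive = interp(before[-1], after[0], t_reloc)
        return ekf_fuse(recursive, reloc_pose)
    if not before:                        # case 2: relocation too early, drop it
        return after[0][1]
    t1, p1 = before[-1]                   # case 3: relocation later than all predictions
    if t_reloc - t1 > max_gap:
        return p1                         # relocation too stale, drop it
    # assumes at least two predictions exist in the queue
    recursive = interp(before[-2], before[-1], t_reloc)
    return ekf_fuse(recursive, reloc_pose)
```

Stub callables make the branching easy to exercise in isolation, as in the checks below.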
在一些实施方式中，更新该预测位姿信息队列，还包括：将该预测位姿信息队列中，对应时间在该融合后的位姿信息对应时间之后的其他预测位姿信息进行更新，具体地，更新步骤中利用含有轮脉冲消息和档位消息的运动模型，得到更新后的预测位姿信息队列。例如，得到融合后的位姿信息之后，找到预测位姿信息队列中所有预测定位时间发生在融合后的位姿信息对应时间之后的预测位姿信息，如在得到预测定位时间为AM 8:40的融合后的位姿信息之后，将其插入预测位姿信息队列，再利用运动模型，重新计算队列中AM 8:50、AM 8:58时刻的预测位姿信息。可选地，还可根据融合后的位姿信息，发布地图点云。In some embodiments, updating the predicted pose information queue further includes: updating the other predictions in the queue whose times fall after the time of the fused pose information; specifically, the updating step uses the motion model containing the wheel-pulse and gear messages to obtain the updated predicted pose information queue. For example, after the fused pose information is obtained, all predictions in the queue whose predicted positioning times occur after the fused pose's time are found: having obtained fused pose information for AM 8:40, it is inserted into the prediction queue, and the motion model is used to recompute the predictions at AM 8:50 and AM 8:58. Optionally, a map point cloud may also be published based on the fused pose information.
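The re-propagation of the queue from the fused pose can be sketched as follows. A minimal illustration: `motion_step` is a hypothetical stand-in for the motion model driven by the wheel-pulse and gear messages, taking a previous pose and an elapsed time.

```python
def update_queue(queue, fused_entry, motion_step):
    """Insert the fused (timestamp, pose) into the queue and recompute
    every prediction whose timestamp is later than the fused time by
    re-running the motion model forward from the fused pose."""
    t_f, _ = fused_entry
    kept = [e for e in queue if e[0] < t_f]
    later = sorted(e for e in queue if e[0] > t_f)
    new_queue = kept + [fused_entry]
    prev_t, prev_pose = fused_entry
    for t, _ in later:
        prev_pose = motion_step(prev_pose, t - prev_t)  # re-predict this entry
        new_queue.append((t, prev_pose))
        prev_t = t
    return new_queue
```

The last entry of the returned queue is the newest prediction, which the subsequent step publishes as the target positioning pose.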
在一些实施方式中,上述步骤S140中还包括:In some embodiments, the above step S140 further includes:
基于更新后的该预测位姿信息队列，得到目标定位位姿信息。具体地，如图2中所示，基于更新后的预测位姿信息队列，找到其中最新新增的一个预测位姿信息，作为目标定位位姿发布。可选地，在发布之前，还可以将最新的预测位姿信息与对应的过程噪声、雅可比矩阵等一起存储。Based on the updated predicted pose information queue, the target positioning pose information is obtained. Specifically, as shown in FIG. 2, the most recently added prediction in the updated queue is found and published as the target positioning pose. Optionally, before publishing, the latest predicted pose information may also be stored together with the corresponding process noise, Jacobian matrix, and the like.
图7示出根据本申请一实施例的融合定位装置700的结构框图。如图7所示,该装置可以包括:FIG. 7 shows a structural block diagram of a fusion positioning apparatus 700 according to an embodiment of the present application. As shown in Figure 7, the apparatus may include:
重定位模块710,用于采集周围环境相关的图片,基于该周围环境相关的图片获取重定位位姿信息;A relocation module 710, configured to collect pictures related to the surrounding environment, and obtain relocation pose information based on the pictures related to the surrounding environment;
融合模块720,用于基于该重定位位姿信息对预测位姿信息队列中的至少一个预测位姿信息进行融合,得到融合后的位姿信息;其中,该预测位姿信息队列中包含多个预测位姿信息;A fusion module 720, configured to fuse at least one predicted pose information in the predicted pose information queue based on the repositioning pose information to obtain fused pose information; wherein the predicted pose information queue contains multiple Predicted pose information;
更新模块730,用于基于该融合后的位姿信息,更新该预测位姿信息队列;an update module 730, configured to update the predicted pose information queue based on the fused pose information;
位姿确定模块740,用于基于更新后的该预测位姿信息队列,得到目标定位位姿信息。The pose determination module 740 is configured to obtain target positioning pose information based on the updated predicted pose information queue.
在一种实施例中,如图8所示,重定位模块710包括:In one embodiment, as shown in FIG. 8, the relocation module 710 includes:
第一前视图片采集单元711,用于在进行初始定位的情况下,采集前视图片;The first front-view picture acquisition unit 711 is configured to acquire a front-view picture under the condition of initial positioning;
第一角点定位单元712,用于基于该前视图片中的角点,通过与角点地图进行匹配,得到基于角点的位姿信息,将该基于角点的位姿信息作为该重定位位姿信息。The first corner locating unit 712 is configured to obtain corner-based pose information by matching with the corner map based on the corners in the front-view picture, and use the corner-based pose information as the relocation. pose information.
在一种实施例中,如图8所示,重定位模块710还包括:In one embodiment, as shown in FIG. 8 , the relocation module 710 further includes:
环视图片采集单元713,用于在进行非初始定位的情况下,采集环视图片;A look-around picture acquisition unit 713, configured to collect a look-around picture in the case of performing non-initial positioning;
语义定位单元714,用于提取该环视图片中的语义特征,通过与语义地图进行匹配,得到基于语义的位姿信息,将该基于语义的位姿信息作为该重定位位姿信息。The semantic positioning unit 714 is configured to extract the semantic features in the look-around picture, obtain semantic-based pose information by matching with the semantic map, and use the semantic-based pose information as the relocation pose information.
在一种实施例中,如图8所示,重定位模块710还包括:In one embodiment, as shown in FIG. 8 , the relocation module 710 further includes:
判断单元715,用于判断该环视图片是否有效;Judging unit 715, for judging whether the look-around picture is valid;
第二前视图片采集单元716,用于在判断该环视图片无效的情况下,采集前视图片;The second front-view picture acquisition unit 716 is configured to acquire a front-view picture when it is judged that the surround-view picture is invalid;
第二角点定位单元717,用于基于该前视图片中的角点,通过与角点地图进行匹配,得到基于角点的位姿信息,将该基于角点的位姿信息作为该重定位位姿信息。The second corner locating unit 717 is configured to obtain corner-based pose information by matching with the corner map based on the corners in the front-view picture, and use the corner-based pose information as the relocation pose information.
在一种实施例中,如图10所示,该融合定位装置700还包括:In an embodiment, as shown in FIG. 10 , the fusion positioning apparatus 700 further includes:
位置获取模块750,用于获取当前的平面位置;a position obtaining module 750, used for obtaining the current plane position;
地图载入模块760,用于根据该平面位置,获得以该平面位置为圆心,限定半径范围内的角点地图和语义地图。The map loading module 760 is configured to obtain, according to the plane position, a corner point map and a semantic map within a limited radius with the plane position as the center of the circle.
在一种实施例中,该融合定位装置700还包括:In one embodiment, the fusion positioning apparatus 700 further includes:
基于运动模型计算得到多个不同时刻的预测位姿信息;Calculated based on the motion model to obtain multiple predicted pose information at different times;
基于该多个不同时刻的预测位姿信息生成该预测位姿信息队列。The predicted pose information queue is generated based on the predicted pose information at the multiple different times.
在一种实施例中,如图11所示,融合模块720还包括:In one embodiment, as shown in FIG. 11 , the fusion module 720 further includes:
预测位姿信息选择单元721,用于根据该重定位位姿信息的重定位时间,从该预测位姿信息队列获取该重定位时间之前的至少一个预测位姿信息作为第一预测位姿信息,和/或该重定位时间之后的至少一个预测位姿信息作为第二预测位姿信息;The predicted pose information selection unit 721 is configured to obtain at least one predicted pose information before the relocation time from the predicted pose information queue as the first predicted pose information according to the relocation time of the relocated pose information, And/or at least one predicted pose information after the relocation time is used as the second predicted pose information;
预测位姿信息融合单元722,用于基于该重定位位姿信息与该第一预测位姿信息和/或第二预测位姿信息进行融合,得到融合后的位姿信息。The predicted pose information fusion unit 722 is configured to fuse the first predicted pose information and/or the second predicted pose information based on the repositioned pose information to obtain fused pose information.
在一种实施例中,更新模块730还包括:In one embodiment, the update module 730 further includes:
更新队列单元731，用于将该预测位姿信息队列中，对应时间在该融合后的位姿信息对应时间之后的其他预测位姿信息进行更新，得到更新后的预测位姿信息队列。The queue updating unit 731 is configured to update the other predictions in the predicted pose information queue whose times fall after the time of the fused pose information, to obtain the updated predicted pose information queue.
在一种实施例中,位姿确定模块740包括:In one embodiment, the pose determination module 740 includes:
获得单元741,用于基于该目标预测位姿信息,得到目标定位位姿信息。The obtaining unit 741 is configured to predict the pose information based on the target to obtain the target positioning pose information.
本申请实施例各装置中的各模块的功能可以参见上述方法中的对应描述,在此不再赘述。For the functions of each module in each device in this embodiment of the present application, reference may be made to the corresponding description in the foregoing method, and details are not described herein again.
需要说明的是，尽管介绍了融合定位方法和装置如上，但本领域技术人员能够理解，本申请应不限于此。事实上，用户完全可根据个人喜好和/或实际应用场景灵活设定融合定位方法和装置，比如重定位的方法可以不限于基于角点的定位或是基于语义的定位，预测位姿信息也不限于运动模型，可以采用其他的定位模型，只要最终可以获得高精度、高鲁棒性的定位即可。It should be noted that although the fusion positioning method and apparatus are described above, those skilled in the art can understand that the present application is not limited thereto. In fact, the user can flexibly configure the fusion positioning method and apparatus according to personal preference and/or the actual application scenario: for example, the repositioning method is not limited to corner-based or semantic-based positioning, and the predicted pose information is not limited to the motion model; other positioning models may be adopted, as long as high-precision, highly robust positioning is ultimately obtained.
这样,通过融合定位,根据本申请上述实施例的融合定位方法和装置能够将多种定位方法进行结合,得到更精确、更稳定的定位结果。In this way, through fusion positioning, the fusion positioning method and device according to the above embodiments of the present application can combine multiple positioning methods to obtain a more accurate and stable positioning result.
图9示出根据本申请一实施例的电子设备的结构框图。如图9所示,该电子设备包括:存储器910和处理器920,存储器910内存储有可在处理器920上运行的指令。处理器920执行该指令时实现上述实施例中的融合定位方法。存储器910和处理器920的数量可以为一个或多个。该电子设备旨在表示各种形式的数字计算机,诸如,膝上型计算机、台式计算机、工作台、个人数字助理、服务器、刀片式服务器、大型计算机、和其它适合的计算机。电子设备还可以表示各种形式的移动装置,诸如,个人数字处理、蜂窝电话、智能电话、可穿戴设备和其它类似的计算装置。本文所示的部件、它们的连接和关系、以及它们的功能仅仅作为示例,并且不意在限制本文中描述的和/或者要求的本申请的实现。FIG. 9 shows a structural block diagram of an electronic device according to an embodiment of the present application. As shown in FIG. 9 , the electronic device includes: a memory 910 and a processor 920 , and instructions that can be executed on the processor 920 are stored in the memory 910 . When the processor 920 executes the instruction, the fusion positioning method in the foregoing embodiment is implemented. The number of the memory 910 and the processor 920 may be one or more. The electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are by way of example only, and are not intended to limit implementations of the application described and/or claimed herein.
该电子设备还可以包括通信接口930，用于与外界设备进行通信，进行数据交互传输。各个设备利用不同的总线互相连接，并且可以被安装在公共主板上或者根据需要以其它方式安装。处理器920可以对在电子设备内执行的指令进行处理，包括存储在存储器中或者存储器上以在外部输入/输出装置（诸如，耦合至接口的显示设备）上显示GUI的图形信息的指令。在其它实施方式中，若需要，可以将多个处理器和/或多条总线与多个存储器一起使用。同样，可以连接多个电子设备，各个设备提供部分必要的操作（例如，作为服务器阵列、一组刀片式服务器、或者多处理器系统）。该总线可以分为地址总线、数据总线、控制总线等。为便于表示，图9中仅用一条粗线表示，但并不表示仅有一根总线或一种类型的总线。The electronic device may further include a communication interface 930 for communicating with external devices and exchanging data. The various components are interconnected by different buses and may be mounted on a common motherboard or in other ways as required. The processor 920 may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used with multiple memories, if desired. Likewise, multiple electronic devices may be connected, each providing part of the necessary operations (for example, as a server array, a group of blade servers, or a multiprocessor system). The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in FIG. 9, but this does not mean there is only one bus or one type of bus.
可选的,在具体实现上,如果存储器910、处理器920及通信接口930集成在一块芯片上,则存储器910、处理器920及通信接口930可以通过内部接口完成相互间的通信。Optionally, in specific implementation, if the memory 910, the processor 920 and the communication interface 930 are integrated on one chip, the memory 910, the processor 920 and the communication interface 930 can communicate with each other through an internal interface.
应理解的是,上述处理器可以是中央处理器(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processing,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者是任何常规的处理器等。值得说明的是,处理器可以是支持进阶精简指令集机器(advanced RISC machines,ARM)架构的处理器。It should be understood that the above-mentioned processor may be a central processing unit (Central Processing Unit, CPU), or other general-purpose processors, digital signal processors (Digital Signal Processing, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc. A general purpose processor may be a microprocessor or any conventional processor or the like. It should be noted that the processor may be a processor supporting an advanced RISC machine (ARM) architecture.
本申请实施例提供了一种计算机可读存储介质(如上述的存储器910),其存储有计算机指令,该程序被处理器执行时实现本申请实施例中提供的方法。Embodiments of the present application provide a computer-readable storage medium (such as the above-mentioned memory 910 ), which stores computer instructions, and when the program is executed by a processor, implements the methods provided in the embodiments of the present application.
可选的，存储器910可以包括存储程序区和存储数据区，其中，存储程序区可存储操作系统、至少一个功能所需要的应用程序；存储数据区可存储根据融合定位的电子设备的使用所创建的数据等。此外，存储器910可以包括高速随机存取存储器，还可以包括非瞬时存储器，例如至少一个磁盘存储器件、闪存器件、或其他非瞬时固态存储器件。在一些实施例中，存储器910可选包括相对于处理器920远程设置的存储器，这些远程存储器可以通过网络连接至融合定位的电子设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。Optionally, the memory 910 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the fusion-positioning electronic device, and the like. In addition, the memory 910 may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 910 may optionally include memory located remotely from the processor 920, and these remote memories may be connected to the fusion-positioning electronic device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包括于本申请的至少一个实施例或示例中。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。In the description of this specification, description with reference to the terms "one embodiment," "some embodiments," "example," "specific example," or "some examples", etc., mean specific features described in connection with the embodiment or example , structure, material or feature is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, those skilled in the art may combine and combine the different embodiments or examples described in this specification, as well as the features of the different embodiments or examples, without conflicting each other.
此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或隐含地包括至少一个该特征。在本申请的描述中,“多个”的含义是两个或两个以上,除非另有明确具体的限定。In addition, the terms "first" and "second" are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implying the number of indicated technical features. Thus, a feature delimited with "first", "second" may expressly or implicitly include at least one of that feature. In the description of the present application, "plurality" means two or more, unless otherwise expressly and specifically defined.
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括 一个或多个(两个或两个以上)用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分。并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能。Any description of a process or method in a flowchart or otherwise described herein may be understood to represent a representation of executable instructions comprising one or more (two or more) steps for implementing a specified logical function or process. A module, fragment or section of code. Also, the scope of the preferred embodiments of the present application includes alternative implementations in which the functions may be performed out of the order shown or discussed, including performing the functions substantially concurrently or in the reverse order depending upon the functions involved.
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。The logic and/or steps represented in flowcharts or otherwise described herein, for example, may be considered an ordered listing of executable instructions for implementing the logical functions, may be embodied in any computer-readable medium, For use with, or in conjunction with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or other system that can fetch instructions from and execute instructions from an instruction execution system, apparatus, or apparatus) or equipment.
应理解的是,本申请的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。上述实施例方法的全部或部分步骤是可以通过程序来指令相关的硬件完成,该程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。It should be understood that various parts of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. All or part of the steps of the method in the above-mentioned embodiments can be completed by instructing the relevant hardware through a program, and the program can be stored in a computer-readable storage medium. When the program is executed, it includes one of the steps of the method embodiment or its combination.
此外,在本申请各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。上述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读存储介质中。该存储介质可以是只读存储器,磁盘或光盘等。In addition, each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. If the above-mentioned integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk or an optical disk, and the like.
以上所述，仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到其各种变化或替换，这些都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以权利要求的保护范围为准。The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily conceive of various changes or replacements within the technical scope disclosed in the present application, and these should all be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

  1. A fusion positioning method, characterized in that the method comprises:
    collecting pictures of the surrounding environment, and obtaining relocation pose information based on the pictures of the surrounding environment;
    fusing at least one piece of predicted pose information in a predicted pose information queue based on the relocation pose information to obtain fused pose information, wherein the predicted pose information queue contains multiple pieces of predicted pose information;
    updating the predicted pose information queue based on the fused pose information; and
    obtaining target positioning pose information based on the updated predicted pose information queue.
  2. The fusion positioning method according to claim 1, characterized in that collecting pictures of the surrounding environment and obtaining relocation pose information based on the pictures of the surrounding environment comprises:
    collecting a front-view picture in the case of initial positioning; and
    obtaining corner-based pose information by matching corner points in the front-view picture against a corner-point map, and using the corner-based pose information as the relocation pose information.
  3. The fusion positioning method according to claim 1, characterized in that collecting pictures of the surrounding environment and obtaining relocation pose information based on the pictures of the surrounding environment comprises:
    collecting a surround-view picture in the case of non-initial positioning; and
    extracting semantic features from the surround-view picture, obtaining semantics-based pose information by matching against a semantic map, and using the semantics-based pose information as the relocation pose information.
  4. The fusion positioning method according to claim 3, further comprising:
    judging whether the surround-view picture is valid;
    collecting a front-view picture when the surround-view picture is judged invalid; and
    obtaining corner-based pose information by matching corner points in the front-view picture against a corner-point map, and using the corner-based pose information as the relocation pose information.
  5. The fusion positioning method according to any one of claims 2-4, characterized in that the method further comprises:
    obtaining the current planar position; and
    according to the planar position, obtaining the corner-point map and the semantic map within a limited radius centered on the planar position.
  6. The fusion positioning method according to claim 1, characterized in that the method further comprises:
    calculating predicted pose information for multiple different times based on a motion model; and
    generating the predicted pose information queue based on the predicted pose information for the multiple different times.
  7. The fusion positioning method according to claim 1, characterized in that fusing at least one piece of predicted pose information in the predicted pose information queue based on the relocation pose information to obtain the fused pose information comprises:
    according to the relocation time of the relocation pose information, obtaining from the predicted pose information queue at least one piece of predicted pose information before the relocation time as first predicted pose information, and/or at least one piece of predicted pose information after the relocation time as second predicted pose information; and
    fusing the relocation pose information with the first predicted pose information and/or the second predicted pose information to obtain the fused pose information.
  8. The method according to claim 7, characterized in that updating the predicted pose information queue further comprises:
    updating, in the predicted pose information queue, the other predicted pose information after the time corresponding to the fused pose information, to obtain the updated predicted pose information queue.
  9. The fusion positioning method according to claim 1, characterized in that obtaining target positioning pose information based on the updated predicted pose information queue comprises:
    using the most recently generated predicted pose information in the updated predicted pose information queue as the target positioning pose information.
  10. A fusion positioning apparatus, characterized in that the apparatus comprises:
    a relocation module, configured to collect pictures of the surrounding environment and obtain relocation pose information based on the pictures of the surrounding environment;
    a fusion module, configured to fuse at least one piece of predicted pose information in a predicted pose information queue based on the relocation pose information to obtain fused pose information, wherein the predicted pose information queue contains multiple pieces of predicted pose information;
    an update module, configured to update the predicted pose information queue based on the fused pose information; and
    a pose determination module, configured to obtain target positioning pose information based on the updated predicted pose information queue.
  11. The apparatus according to claim 10, characterized in that the relocation module comprises:
    a first front-view picture collection unit, configured to collect a front-view picture in the case of initial positioning; and
    a first corner positioning unit, configured to obtain corner-based pose information by matching corner points in the front-view picture against a corner-point map, and to use the corner-based pose information as the relocation pose information.
  12. The apparatus according to claim 10, characterized in that the relocation module comprises:
    a surround-view picture collection unit, configured to collect a surround-view picture in the case of non-initial positioning; and
    a semantic positioning unit, configured to extract semantic features from the surround-view picture, obtain semantics-based pose information by matching against a semantic map, and use the semantics-based pose information as the relocation pose information.
  13. The apparatus according to claim 12, characterized in that the relocation module further comprises:
    a judging unit, configured to judge whether the surround-view picture is valid;
    a second front-view picture collection unit, configured to collect a front-view picture when the surround-view picture is judged invalid; and
    a second corner positioning unit, configured to obtain corner-based pose information by matching corner points in the front-view picture against a corner-point map, and to use the corner-based pose information as the relocation pose information.
  14. The apparatus according to any one of claims 11-13, characterized in that the apparatus further comprises:
    a position obtaining module, configured to obtain the current planar position; and
    a map loading module, configured to obtain, according to the planar position, the corner-point map and the semantic map within a limited radius centered on the planar position.
  15. The apparatus according to claim 10, characterized in that the apparatus further comprises:
    calculating predicted pose information for multiple different times based on a motion model; and
    generating the predicted pose information queue based on the predicted pose information for the multiple different times.
  16. The apparatus according to claim 10, characterized in that the fusion module further comprises:
    a predicted pose information selection unit, configured to obtain, according to the relocation time of the relocation pose information, from the predicted pose information queue at least one piece of predicted pose information before the relocation time as first predicted pose information, and/or at least one piece of predicted pose information after the relocation time as second predicted pose information; and
    a predicted pose information fusion unit, configured to fuse the relocation pose information with the first predicted pose information and/or the second predicted pose information to obtain the fused pose information.
  17. The apparatus according to claim 16, characterized in that the update module further comprises:
    a queue updating unit, configured to update, in the predicted pose information queue, the other predicted pose information after the time corresponding to the fused pose information, to obtain the updated predicted pose information queue.
  18. The apparatus according to claim 10, characterized in that the pose determination module further comprises:
    an obtaining unit, configured to use the most recently generated predicted pose information in the updated predicted pose information queue as the target positioning pose information.
  19. An electronic device, characterized by comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor, wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-9.
  20. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the method according to any one of claims 1-9.
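The queue-based fusion flow recited in claims 1 and 7-9 can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the `Pose` fields, the fixed-weight `fuse` blend, and the constant-offset correction are all assumptions standing in for whatever filter (e.g. an EKF update) an actual system would use.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    t: float    # timestamp
    x: float    # planar position
    y: float
    yaw: float  # heading

def fuse(reloc: Pose, predicted: Pose, w: float = 0.7) -> Pose:
    # Hypothetical weighted blend of the relocation pose and a
    # predicted pose; a real system would use a proper filter update.
    return Pose(reloc.t,
                w * reloc.x + (1 - w) * predicted.x,
                w * reloc.y + (1 - w) * predicted.y,
                w * reloc.yaw + (1 - w) * predicted.yaw)

def apply_relocalization(queue: list[Pose], reloc: Pose) -> list[Pose]:
    # Claims 7-8: pick the latest predicted pose at or before the
    # relocation time, fuse it with the relocation pose, then shift
    # every later predicted pose by the resulting correction.
    # Assumes the queue holds at least one pose not later than reloc.t.
    idx = max(i for i, p in enumerate(queue) if p.t <= reloc.t)
    fused = fuse(reloc, queue[idx])
    dx = fused.x - queue[idx].x
    dy = fused.y - queue[idx].y
    dyaw = fused.yaw - queue[idx].yaw
    updated = queue[:idx] + [fused]
    for p in queue[idx + 1:]:
        updated.append(Pose(p.t, p.x + dx, p.y + dy, p.yaw + dyaw))
    return updated

def target_pose(queue: list[Pose]) -> Pose:
    # Claim 9: the most recently generated entry of the updated
    # queue is taken as the target positioning pose.
    return queue[-1]
```

With poses predicted at t = 0, 1, 2 and a relocation fix arriving for t = 1, the correction computed at t = 1 is propagated forward to the t = 2 entry, whose pose then serves as the output.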
PCT/CN2021/084792 2020-09-23 2021-03-31 Fusion positioning method and apparatus WO2022062355A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011007686.9 2020-09-23
CN202011007686.9A CN112150550B (en) 2020-09-23 2020-09-23 Fusion positioning method and device

Publications (1)

Publication Number Publication Date
WO2022062355A1 true WO2022062355A1 (en) 2022-03-31

Family

ID=73897813

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/084792 WO2022062355A1 (en) 2020-09-23 2021-03-31 Fusion positioning method and apparatus

Country Status (2)

Country Link
CN (1) CN112150550B (en)
WO (1) WO2022062355A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150550B (en) * 2020-09-23 2021-07-27 华人运通(上海)自动驾驶科技有限公司 Fusion positioning method and device
CN114295126B (en) * 2021-12-20 2023-12-26 华人运通(上海)自动驾驶科技有限公司 Fusion positioning method based on inertial measurement unit

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109307508A (en) * 2018-08-29 2019-02-05 中国科学院合肥物质科学研究院 A kind of panorama inertial navigation SLAM method based on more key frames
CN109945858A (en) * 2019-03-20 2019-06-28 浙江零跑科技有限公司 It parks the multi-sensor fusion localization method of Driving Scene for low speed
CN110109465A (en) * 2019-05-29 2019-08-09 集美大学 A kind of self-aiming vehicle and the map constructing method based on self-aiming vehicle
CN110231028A (en) * 2018-03-05 2019-09-13 北京京东尚科信息技术有限公司 Aircraft navigation methods, devices and systems
CN111220154A (en) * 2020-01-22 2020-06-02 北京百度网讯科技有限公司 Vehicle positioning method, device, equipment and medium
CN111274974A (en) * 2020-01-21 2020-06-12 北京百度网讯科技有限公司 Positioning element detection method, device, equipment and medium
CN112150550A (en) * 2020-09-23 2020-12-29 华人运通(上海)自动驾驶科技有限公司 Fusion positioning method and device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8930023B2 (en) * 2009-11-06 2015-01-06 Irobot Corporation Localization by learning of wave-signal distributions
US10953545B2 (en) * 2018-08-13 2021-03-23 Beijing Jingdong Shangke Information Technology Co., Ltd. System and method for autonomous navigation using visual sparse map
CN110147705B (en) * 2018-08-28 2021-05-04 北京初速度科技有限公司 Vehicle positioning method based on visual perception and electronic equipment
CN110118554B (en) * 2019-05-16 2021-07-16 达闼机器人有限公司 SLAM method, apparatus, storage medium and device based on visual inertia
CN110986988B (en) * 2019-12-20 2023-12-08 上海有个机器人有限公司 Track calculation method, medium, terminal and device integrating multi-sensor data
CN111098335B (en) * 2019-12-26 2021-06-08 浙江欣奕华智能科技有限公司 Method and device for calibrating odometer of double-wheel differential drive robot
CN111174782B (en) * 2019-12-31 2021-09-17 智车优行科技(上海)有限公司 Pose estimation method and device, electronic equipment and computer readable storage medium
CN111272165B (en) * 2020-02-27 2020-10-30 清华大学 Intelligent vehicle positioning method based on characteristic point calibration
CN111536964B (en) * 2020-07-09 2020-11-06 浙江大华技术股份有限公司 Robot positioning method and device, and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110231028A (en) * 2018-03-05 2019-09-13 北京京东尚科信息技术有限公司 Aircraft navigation methods, devices and systems
CN109307508A (en) * 2018-08-29 2019-02-05 中国科学院合肥物质科学研究院 A kind of panorama inertial navigation SLAM method based on more key frames
CN109945858A (en) * 2019-03-20 2019-06-28 浙江零跑科技有限公司 It parks the multi-sensor fusion localization method of Driving Scene for low speed
CN110109465A (en) * 2019-05-29 2019-08-09 集美大学 A kind of self-aiming vehicle and the map constructing method based on self-aiming vehicle
CN111274974A (en) * 2020-01-21 2020-06-12 北京百度网讯科技有限公司 Positioning element detection method, device, equipment and medium
CN111220154A (en) * 2020-01-22 2020-06-02 北京百度网讯科技有限公司 Vehicle positioning method, device, equipment and medium
CN112150550A (en) * 2020-09-23 2020-12-29 华人运通(上海)自动驾驶科技有限公司 Fusion positioning method and device

Also Published As

Publication number Publication date
CN112150550B (en) 2021-07-27
CN112150550A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN109297510B (en) Relative pose calibration method, device, equipment and medium
US11074706B2 (en) Accommodating depth noise in visual slam using map-point consensus
US20210312209A1 (en) Vehicle information detection method, electronic device and storage medium
WO2019062651A1 (en) Localization and mapping method and system
WO2018098811A1 (en) Localization method and device
WO2022062355A1 (en) Fusion positioning method and apparatus
KR20210036317A (en) Mobile edge computing based visual positioning method and device
US11436709B2 (en) Three-dimensional reconstruction method, electronic device and storage medium
CN116255992A (en) Method and device for simultaneously positioning and mapping
CN111462029A (en) Visual point cloud and high-precision map fusion method and device and electronic equipment
CN112086010A (en) Map generation method, map generation device, map generation equipment and storage medium
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
JP7214803B2 (en) Building positioning method, device, electronic device, storage medium, program, and terminal device
CN112950710A (en) Pose determination method and device, electronic equipment and computer readable storage medium
JP2021111404A (en) Cross-camera obstacle tracking method, device, apparatus, system, and medium
EP3716103A2 (en) Method and apparatus for determining transformation matrix, and non-transitory computer-readable recording medium
CN111368927A (en) Method, device and equipment for processing labeling result and storage medium
WO2022252482A1 (en) Robot, and environment map construction method and apparatus therefor
CN113015117B (en) User positioning method and device, electronic equipment and storage medium
JPH07146121A (en) Recognition method and device for three dimensional position and attitude based on vision
CN116295466A (en) Map generation method, map generation device, electronic device, storage medium and vehicle
CN109410304B (en) Projection determination method, device and equipment
CN113126117A (en) Method for determining absolute scale of SFM map and electronic equipment
CN111398961B (en) Method and apparatus for detecting obstacles
CN116012624B (en) Positioning method, positioning device, electronic equipment, medium and automatic driving equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21870744

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21870744

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.09.2023)