WO2022062355A1 - Fusion positioning method and apparatus - Google Patents
- Publication number: WO2022062355A1 (application PCT/CN2021/084792)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pose information
- predicted
- predicted pose
- queue
- positioning
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
Definitions
- the present application relates to the field of positioning, and in particular, to a fusion positioning method based on semantics and corner point information.
- GPS: Global Positioning System
- the embodiments of the present application provide a fusion positioning method and device to solve the problems existing in the related art, and the technical solutions are as follows:
- an embodiment of the present application provides a fusion positioning method, including:
- at least one piece of predicted pose information in the predicted pose information queue is fused to obtain fused pose information, wherein the predicted pose information queue contains multiple pieces of predicted pose information;
- the target positioning pose information is obtained.
- an embodiment of the present application provides a fusion positioning device, including:
- a relocation module configured to collect pictures related to the surrounding environment, and to obtain relocation pose information based on those pictures;
- a fusion module configured to fuse at least one piece of predicted pose information in the predicted pose information queue based on the relocation pose information to obtain fused pose information, wherein the queue contains multiple pieces of predicted pose information;
- an update module configured to update the predicted pose information queue based on the fused pose information;
- a pose determination module configured to obtain target positioning pose information based on the updated predicted pose information queue.
- an embodiment of the present application provides an electronic device, the electronic device including: at least one processor; and a memory communicatively connected with the at least one processor; wherein the memory stores instructions executable by the at least one processor, so that the at least one processor can execute the above fusion positioning method.
- embodiments of the present application provide a computer-readable storage medium, where the computer-readable storage medium stores computer instructions, and when the computer instructions are executed on a computer, the method in any one of the implementation manners of the above aspects is executed.
- the present application obtains the repositioning pose information through different positioning methods based on pictures related to the surrounding environment, which ensures that repositioning pose information can be obtained in different environments and thus provides high robustness. The repositioning pose information is then used to update the predicted pose information queue, and the target positioning pose information is obtained from the updated queue. By fusing the environment-based repositioning pose information with the predicted pose information, high-precision positioning of a vehicle or other movable machinery and equipment can be achieved.
- FIG. 1 is a schematic diagram of a fusion positioning method according to an embodiment of the present application.
- FIG. 2 is a schematic flowchart of a fusion positioning method according to another embodiment of the present application.
- FIG. 3 is a schematic diagram of a fusion positioning method according to another embodiment of the present application.
- FIG. 4 is a schematic diagram of corner extraction according to another embodiment of the present application.
- FIG. 5 is a schematic diagram of a fusion positioning method according to another embodiment of the present application.
- FIG. 6 is a schematic diagram illustrating the association between a repositioned pose and a predicted pose queue according to another embodiment of the present application.
- FIG. 7 is a structural block diagram of a fusion positioning apparatus according to an embodiment of the present application.
- FIG. 8 is a structural block diagram of a relocation module in a fusion location device according to another embodiment of the present application.
- FIG. 9 is a block diagram of an electronic device used to implement the fusion positioning method according to the embodiment of the present application.
- FIG. 10 is a structural block diagram of a fusion positioning apparatus according to another embodiment of the present application.
- FIG. 11 is a structural block diagram of a fusion module in a fusion positioning apparatus according to another embodiment of the present application.
- FIG. 1 shows a flowchart of a fusion positioning method according to an embodiment of the present application.
- the fusion positioning method may include:
- Step S110: collect pictures related to the surrounding environment, and obtain repositioning pose information based on those pictures;
- Step S120: fuse at least one piece of predicted pose information in the predicted pose information queue based on the repositioning pose information to obtain fused pose information, wherein the queue contains multiple pieces of predicted pose information;
- Step S130: update the predicted pose information queue based on the fused pose information;
- Step S140: obtain target positioning pose information based on the updated predicted pose information queue.
- the fusion positioning method can be used for vehicles, and can also be used for mobile devices or equipment that need to be positioned at any time, such as robots.
- the picture related to the surrounding environment may be a front-view picture used for corner localization, or a look-around picture used for semantic localization.
- the corresponding information in the picture is extracted, and based on different pictures, different positioning methods are adopted to obtain the repositioning pose information.
- the repositioning pose information may specifically include the repositioning position coordinates, the repositioning direction angle and the repositioning time, wherein the repositioning time is the acquisition time of the above picture related to the surrounding environment.
- the predicted pose information queue contains multiple pieces of predicted pose information calculated at different times using a motion model. The motion model computes, from the pose (position coordinates and direction angle) at time t1, the calculation parameters and the motion time t', the pose (position coordinates and direction angle) at time t1+t'. Each piece of predicted pose information can include predicted position coordinates, a predicted direction angle and the corresponding predicted positioning time.
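As a minimal sketch of such a motion-model prediction step, the function below advances a planar pose by a constant-velocity model; the speed and yaw-rate inputs stand in for the wheel-odometry parameters mentioned in the text and are not named by the source:

```python
import math

def predict_pose(x, y, theta, v, omega, dt):
    """Advance a planar pose (x, y, heading theta) over dt seconds
    with linear speed v and yaw rate omega (a simplified stand-in
    for the wheel-pulse motion model described in the text)."""
    x_new = x + v * math.cos(theta) * dt
    y_new = y + v * math.sin(theta) * dt
    # keep the heading wrapped to (-pi, pi]
    theta_new = (theta + omega * dt + math.pi) % (2 * math.pi) - math.pi
    return x_new, y_new, theta_new
```

Running the model repeatedly at fixed intervals yields the timestamped predictions that populate the queue.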
- based on the repositioning time in the repositioning pose information, at least one piece of predicted pose information close to that time is found, and the repositioning pose is fused with it to obtain the fused pose information. Then, based on the fused pose information, the predicted pose information queue is updated: specifically, based on the predicted positioning time of the fused pose information, the motion model and the fused pose information are used to recalculate the predicted pose information in the queue whose time is after that predicted positioning time, thereby updating the queue.
- the target positioning pose information is then obtained. For example, based on the updated predicted pose information queue, the most recently added piece of predicted pose information is taken as the target positioning pose.
- in this way, the repositioning pose information can be obtained through different positioning methods and then fused with the predicted pose information to update the predicted pose information queue, with the final target positioning pose determined from the updated queue. The fused positioning pose information has higher accuracy, which ensures the high precision of the fusion positioning method; and different positioning methods can match different driving environments, ensuring that repositioning pose information can be obtained regardless of the environment and guaranteeing the stability and robustness of the method.
- FIG. 3 is a flowchart of an implementation of obtaining repositioning pose information in a fusion positioning method according to an embodiment of the present application.
- the process of acquiring the repositioning pose information in the above step S110 includes:
- Step 210: in the case of performing initial positioning, collect a front-view picture;
- Step 220: based on the corners in the front-view picture, obtain corner-based pose information by matching against the corner map, and use the corner-based pose information as the repositioning pose information.
- the initial positioning may be the first positioning during driving or moving, or a positioning performed again after a long interval.
- a front-view image is collected, and corner points in the front-view image are extracted.
- a Fast method can be used to extract corner points.
- the Fast method examines the 16 pixels on a circular window around a candidate pixel. As shown in Figure 4, p is the center pixel, and the pixels marked by white boxes are the 16 pixels to be examined; the Brief descriptor is used to describe the extracted corners. Then the extracted corners are matched against the corner map to obtain multiple candidate keyframes; for example, based on the Brief descriptor, a Bow (bag-of-words) dictionary is used to search for the four matching keyframes with the highest scores in the corner map.
- this step can also be called brute-force matching. After obtaining multiple candidate keyframes, the optimal keyframe is selected from them, and whether the matching succeeds is determined based on the matching corners in that keyframe: for example, the map frame with the highest score above a threshold is selected, and the Hamming distance is used to match the corners in the map with the current corners; when the minimum Hamming distance is below a threshold, the matching is considered successful. Finally, the repositioning pose information is obtained from the successfully matched corner map. Specifically, because the corners in the corner map have 3D positions, the 2D corners extracted from the front-view image can be matched against the 3D corners in the map and the pose solved by the PnP method, finally giving the repositioning pose information of the current front-view image in the corner-map coordinate system.
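The Hamming-distance descriptor matching used above can be sketched as follows; descriptors are shown as small packed integers and the distance threshold is an illustrative value, not one given by the source:

```python
def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors packed as ints
    (BRIEF descriptors are bit strings, so XOR-and-popcount applies)."""
    return bin(d1 ^ d2).count("1")

def match_corners(query, mapped, max_dist=40):
    """Brute-force matching: for each query descriptor, find the map
    descriptor with the smallest Hamming distance, and accept the
    match only if that distance is below max_dist."""
    matches = []
    for qi, q in enumerate(query):
        best = min(range(len(mapped)), key=lambda mi: hamming(q, mapped[mi]))
        if hamming(q, mapped[best]) < max_dist:
            matches.append((qi, best))
    return matches
```

In practice the 2D-3D correspondences surviving this test would be passed to a PnP solver (e.g. OpenCV's `solvePnP`) to recover the camera pose, as the text describes.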
- the repositioning pose information calculated by PnP may still have large errors, so a fundamental matrix can be established from the above 2D corners and compared with the rotation computed by PnP: if the angle error is greater than a threshold, the pose calculated by PnP is considered wrong and positioning fails; if the angle error is less than the threshold, the repositioning pose information is returned.
- the above-mentioned process of finally obtaining corner-based positioning based on the front-view picture can also be generally referred to as “image processing”.
- the above-mentioned positioning method based on corner points has the advantages of high precision and accurate positioning, so the positioning method based on corner points is selected in the initial positioning.
- the process of acquiring the repositioning pose information in the above step S110 further includes:
- Step 230 in the case of performing non-initial positioning, collect a look-around picture
- Step 240 Extract the semantic features in the look-around picture, obtain semantic-based pose information by matching with the semantic map, and use the semantic-based pose information as the relocation pose information.
- a 360° look-around picture is collected, and semantic features are then extracted from it; the semantic features may include at least one of lane lines, road edges and parking-space points. If the extracted feature is a parking-space point, its position is described in the current Cartesian coordinate system; if the extracted feature is a lane line or a road edge, it is described by distance and angle in polar coordinates.
- the same filter equation can be used after converting the parking-space point and the lane line (or road edge), where the gain is computed as K = P'_{n+1} H^T (H P'_{n+1} H^T + R)^{-1}, in which:
- P'_{n+1} is the covariance of the vehicle state quantity at time n+1;
- R is the observation covariance;
- H is the corresponding observation Jacobian matrix.
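The quantities P', H and R defined above enter the standard Kalman gain K = P'H^T(HP'H^T + R)^{-1}; a small numerical sketch with illustrative values (a 2-state vehicle, a 1-dimensional observation; none of the numbers come from the source):

```python
import numpy as np

# Illustrative dimensions: 2 state variables, 1 observation.
P = np.diag([0.5, 0.2])      # P'_{n+1}: predicted state covariance
H = np.array([[1.0, 0.0]])   # observation Jacobian (observes state 0 only)
R = np.array([[0.1]])        # observation covariance

# Kalman gain: K = P' H^T (H P' H^T + R)^{-1}
S = H @ P @ H.T + R          # innovation covariance
K = P @ H.T @ np.linalg.inv(S)
```

The gain weighs the observation against the prediction: here state 0 is directly observed, so its gain entry is large, while state 1 is unobserved and gets zero gain.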
- the pose information based on semantic localization is thus obtained; as shown in FIG. 2, the above process can also be summarized as "image processing".
- the corner-based positioning method requires relatively rich scene textures (otherwise positioning fails), occupies considerable resources during corner-map matching, and is computationally expensive, while the semantic-based positioning method can make up for these shortcomings to a certain extent. Therefore, when performing non-initial positioning, the semantic-based method is preferred, which saves computing resources and obtains repositioning pose information faster.
- the method further includes:
- Step 250 Determine whether the look-around picture is valid
- Step 260 in the case of judging that the surround view picture is invalid, collect the front view picture
- Step 270 Based on the corners in the front-view picture, by matching with the corner map, obtain corner-based pose information, and use the corner-based pose information as the repositioning pose information.
- judging that the look-around picture is invalid may specifically include: checking whether semantic features can be extracted from the look-around image, and if they cannot, judging the picture invalid; and checking whether the extracted semantic features can be matched with the semantic map, and if they cannot, judging the picture invalid.
- the front-view picture is collected, and based on the corners in the front-view picture, the corner-based pose information is obtained by matching with the corner map, and the corner-based pose information is used as the Relocate pose information.
- the specific steps of obtaining the pose information based on the corner points based on the corner points in the front-view picture are the same as the above step 220, and are not repeated here.
- semantic localization relies on recognizing at least one of the semantic features such as lane lines, road edges and parking-space points. If these features happen to be absent from the environment, or the recognized features cannot be matched with the semantic map, semantic-based localization cannot be performed; at that point it is necessary to fall back to corner-based positioning, ensuring that continuous positioning is available during driving or moving and that relatively stable repositioning pose information is obtained.
- step S110 further includes:
- Step 510 obtain the current plane position
- Step 520 According to the plane position, obtain a corner map and a semantic map within a limited radius with the plane position as the center.
- the approximate plane position can be obtained through a positioning instrument such as GPS, or by manually clicking on an electronic map; alternatively, after initial positioning has been performed, the plane position can be obtained from the repositioning pose information of the initial positioning.
- a corner map and a semantic map within a limited radius centered on the plane position are obtained, and the limited radius can be set manually. Whether the positioning is corner-based or semantic-based, the corresponding corner map or semantic map needs to be loaded; if all map data were loaded at the start, it would occupy storage space and waste loading time. Therefore, a map within a certain range can be downloaded based on the approximate location, which saves time and reduces resource occupation.
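A minimal sketch of this radius-limited map loading: landmarks (corner or semantic map entries) are filtered by distance from the coarse position. The landmark tuple layout is an assumption for illustration:

```python
def load_local_map(landmarks, center, radius):
    """Keep only map landmarks within `radius` of the rough position
    `center`. Each landmark is assumed to be (x, y, payload); a
    squared-distance test avoids the square root."""
    cx, cy = center
    return [lm for lm in landmarks
            if (lm[0] - cx) ** 2 + (lm[1] - cy) ** 2 <= radius ** 2]
```

A production system would more likely fetch pre-tiled map chunks keyed by position, but the filtering idea is the same.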
- the fusion positioning method may further include:
- the predicted pose information queue is generated based on the predicted pose information at the multiple different times.
- the motion model can be used to calculate the predicted pose information at different times.
- the motion model computes, from the pose (position coordinates and direction angle) at time t1, the calculation parameters and the motion time t', the vehicle pose (position coordinates and direction angle) at time t1+t'. As shown in Figure 2, the motion model can be a vehicle motion model, and the calculation parameters can include wheel pulse messages and gear position messages;
- based on the wheel pulse message, the gear position message, the pose information at the last moment and the motion time since that moment, the real-time predicted pose information can be obtained.
- the predicted pose information may be calculated every fixed time, and a predicted information queue may be generated based on the predicted pose information at multiple different times.
- some temporally earlier predicted pose information may be discarded based on its predicted positioning time.
- positioning based on a motion model is common and computationally simple, but its accuracy is limited; taking it as the basis and fusing it with the repositioning pose information yields a more accurate positioning pose;
- the motion-model predictions are stored in the form of a queue, which ensures that during fusion at least one piece of predicted pose information closest in time to the repositioning pose information can be selected, so as to obtain fused positioning pose information with higher accuracy.
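A minimal sketch of such a time-ordered prediction queue that discards temporally earlier entries, as the text describes; the retention horizon is an illustrative parameter, not a value from the source:

```python
from collections import deque

class PredictedPoseQueue:
    """Time-ordered queue of (timestamp, pose) predictions.
    Entries older than `horizon` seconds behind the newest
    prediction are discarded on insertion."""

    def __init__(self, horizon=5.0):
        self.horizon = horizon
        self.items = deque()

    def push(self, t, pose):
        self.items.append((t, pose))
        # drop predictions that have fallen out of the time window
        while self.items and self.items[0][0] < t - self.horizon:
            self.items.popleft()
```

Keeping a short window of timestamped predictions is what later allows the relocation time to be bracketed by a prediction before it and one after it.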
- the fusion positioning method further includes:
- according to the relocation time of the relocation pose information, obtain from the predicted pose information queue at least one piece of predicted pose information before the relocation time as the first predicted pose information, and/or at least one piece after the relocation time as the second predicted pose information.
- the repositioning pose information includes the repositioning time, that is, the acquisition time of the repositioning pose information.
- for example, if the relocation time in the relocation pose information is 8:00 AM, find in the predicted pose information queue all predicted pose information before 8:00 AM and all predicted pose information after 8:00 AM.
- when selecting one piece as the first predicted pose information, the one closest in time is chosen: for example, if there are multiple pieces of predicted pose information before 8:00 AM, with times 7:40 AM, 7:50 AM and 7:58 AM, the predicted pose information at 7:58 AM is used as the first predicted pose information;
- likewise, when selecting one piece as the second predicted pose information, the one closest in time is chosen: for example, if there are multiple pieces of predicted pose information after 8:00 AM, with times 8:40 AM, 8:50 AM and 8:58 AM, the predicted pose information at 8:40 AM is used as the second predicted pose information.
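Because the queue is built in time order, finding the nearest prediction on either side of the relocation time is a binary search; a sketch (the function name and return convention are illustrative):

```python
import bisect

def bracket(times, t_reloc):
    """Return (index of nearest prediction at or before t_reloc,
    index of nearest prediction after it); either side is None if
    no such prediction exists. `times` must be sorted ascending."""
    i = bisect.bisect_right(times, t_reloc)
    before = i - 1 if i > 0 else None
    after = i if i < len(times) else None
    return before, after
```

The `None` cases correspond to the second and third fusion cases below, where the relocation time falls before or after the whole queue.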
- the fusion of the pose information can be divided into three cases.
- in the first case, the relocation time lies between the predicted positioning times of multiple pieces of predicted pose information, so the first predicted pose information t1 before the relocation time and the second predicted pose information t2 after it can be determined. The pose increment between t1 and t2 is Δp(Δx, Δy, Δθ); with the time difference Δti between the relocation pose information ti and t2, a recursive value of the pose at the relocation time is obtained from t1 and t2 by linear interpolation.
- the covariance update equation is: P_{k+1} = f_xk · P_k · f_xk^T + p_k, where f_xk is the state-transition Jacobian and p_k the process noise term.
- f xk can be calculated according to the function CaculateFxk(dxR, PoseTheta, dTheta), and pk can be obtained according to the function CaculatePk(vehicle_RR, vehicle_RL, PoseTheta, dTheta).
- EKF fusion is performed based on the above recursive pose value, the repositioning pose information ti and the updated covariance, and the fused pose information is obtained.
- the second case is that the relocation pose information ti is too early, i.e., before the predicted positioning times of all predicted pose information in the queue, as shown in Figure 2 and Figure 6(b). In this case, the predicted pose information t2 after the repositioning pose information is obtained, the repositioning pose information is discarded directly, and t2 is used as the fused pose information.
- t 2 is also the earliest piece of predicted pose information in the predicted pose information queue.
- the third case is that the relocation time is after the predicted positioning time of the last piece of predicted pose information t1, as shown in Figure 2 and Figure 6(c). In this case, if the difference between the relocation time and the predicted positioning time of t1 is greater than a given threshold, the repositioning pose information ti is discarded directly and t1 is used as the fused pose information; if the time difference is less than or equal to the threshold, a piece of predicted pose information t' whose time is before t1 is selected from the queue, and linear interpolation over t' and t1 is used to obtain the recursive value of the pose at the relocation time.
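The linear interpolation used in the first and third cases can be sketched as follows for a planar pose; the heading is interpolated on the circle so that wraparound at ±π behaves correctly (the function name and tuple layout are illustrative):

```python
import math

def interp_pose(t1, pose1, t2, pose2, t):
    """Linearly interpolate a planar pose (x, y, theta) between two
    predictions at times t1 < t2 to the intermediate time t; the
    heading difference is wrapped to (-pi, pi] before blending."""
    a = (t - t1) / (t2 - t1)
    x = pose1[0] + a * (pose2[0] - pose1[0])
    y = pose1[1] + a * (pose2[1] - pose1[1])
    dth = (pose2[2] - pose1[2] + math.pi) % (2 * math.pi) - math.pi
    th = (pose1[2] + a * dth + math.pi) % (2 * math.pi) - math.pi
    return x, y, th
```

The interpolated pose serves as the recursive value that is then EKF-fused with the relocation pose and the updated covariance.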
- the covariance is updated.
- the specific steps of calculating the recursive value and updating the covariance refer to the first case above.
- the recursive pose value, the repositioning pose information ti and the updated covariance are EKF-fused to obtain the fused pose information.
- updating the predicted pose information queue further includes: updating, in the predicted pose information queue, the other predicted pose information whose time is after that of the fused pose information.
- the updating step uses the motion model, with the wheel pulse message and the gear position message, to obtain the updated predicted pose information queue. For example, after obtaining the fused pose information, find in the queue all predicted pose information whose predicted positioning time is after the time of the fused pose information: e.g., after obtaining fused pose information at 8:40 AM, insert it into the predicted pose queue, and then use the motion model to recalculate the predicted pose information at 8:50 AM and 8:58 AM in the queue.
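The queue replay step above can be sketched as follows; `step(pose, dt)` stands in for one motion-model propagation (the actual model takes wheel-pulse and gear inputs, which are omitted here for brevity):

```python
def replay_queue(queue, fused_t, fused_pose, step):
    """After fusion, drop predictions at or after fused_t, insert the
    fused pose, and re-predict forward at the original timestamps.
    `queue` is a time-sorted list of (t, pose); `step(pose, dt)` is
    one motion-model propagation over dt seconds (assumed given)."""
    kept = [(t, p) for (t, p) in queue if t < fused_t]
    stale_times = [t for (t, _) in queue if t >= fused_t]
    out = kept + [(fused_t, fused_pose)]
    t_prev, pose = fused_t, fused_pose
    for t in stale_times:          # recompute each later prediction
        pose = step(pose, t - t_prev)
        out.append((t, pose))
        t_prev = t
    return out
```

Re-predicting from the fused pose rather than the old predictions is what propagates the relocation correction forward through the queue.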
- the map point cloud can also be published according to the fused pose information.
- step S140 further includes:
- the target positioning pose information is obtained. Specifically, as shown in FIG. 2, based on the updated predicted pose information queue, the most recently added piece of predicted pose information is found and published as the target positioning pose.
- the latest predicted pose information may also be stored together with the corresponding process noise, Jacobian matrix, and the like.
- FIG. 7 shows a structural block diagram of a fusion positioning apparatus 700 according to an embodiment of the present application.
- the apparatus may include:
- a relocation module 710 configured to collect pictures related to the surrounding environment, and obtain relocation pose information based on the pictures related to the surrounding environment;
- a fusion module 720 configured to fuse at least one piece of predicted pose information in the predicted pose information queue based on the repositioning pose information to obtain fused pose information, wherein the queue contains multiple pieces of predicted pose information;
- an update module 730 configured to update the predicted pose information queue based on the fused pose information
- the pose determination module 740 is configured to obtain target positioning pose information based on the updated predicted pose information queue.
- the relocation module 710 includes:
- the first front-view picture acquisition unit 711 is configured to acquire a front-view picture under the condition of initial positioning
- the first corner locating unit 712 is configured to obtain corner-based pose information by matching against the corner map based on the corners in the front-view picture, and to use the corner-based pose information as the relocation pose information.
- the relocation module 710 further includes:
- a look-around picture acquisition unit 713 configured to collect a look-around picture in the case of performing non-initial positioning
- the semantic positioning unit 714 is configured to extract the semantic features in the look-around picture, obtain semantic-based pose information by matching with the semantic map, and use the semantic-based pose information as the relocation pose information.
- the relocation module 710 further includes:
- Judging unit 715 for judging whether the look-around picture is valid
- the second front-view picture acquisition unit 716 is configured to acquire a front-view picture when it is judged that the surround-view picture is invalid;
- the second corner locating unit 717 is configured to obtain corner-based pose information by matching with the corner map based on the corners in the front-view picture, and use the corner-based pose information as the relocation pose information.
- the fusion positioning apparatus 700 further includes:
- a position obtaining module 750 used for obtaining the current plane position
- the map loading module 760 is configured to obtain, according to the plane position, a corner point map and a semantic map within a limited radius with the plane position as the center of the circle.
- the fusion positioning apparatus 700 further includes:
- the predicted pose information queue is generated based on the predicted pose information at the multiple different times.
- the fusion module 720 further includes:
- the predicted pose information selection unit 721 is configured to obtain at least one predicted pose information before the relocation time from the predicted pose information queue as the first predicted pose information according to the relocation time of the relocated pose information, And/or at least one predicted pose information after the relocation time is used as the second predicted pose information;
- the predicted pose information fusion unit 722 is configured to fuse the first predicted pose information and/or the second predicted pose information based on the repositioned pose information to obtain fused pose information.
- the update module 730 further includes:
- the updating queue unit 731 is configured to update, in the predicted pose information queue, the predicted pose information whose time is after that of the fused pose information, to obtain an updated predicted pose information queue.
- the pose determination module 740 includes:
- the obtaining unit 741 is configured to obtain the target positioning pose information based on the target predicted pose information.
- the fusion positioning method and device are described above, those skilled in the art can understand that the present application should not be limited thereto.
- the user can flexibly set the fusion positioning method and device according to personal preferences and/or actual application scenarios.
- the repositioning method is not limited to corner-based positioning or semantic-based positioning, and the predicted pose information is not limited to a motion model; other positioning models can be adopted, as long as high-precision and robust positioning can finally be obtained.
- the fusion positioning method and device can combine multiple positioning methods to obtain a more accurate and stable positioning result.
- FIG. 9 shows a structural block diagram of an electronic device according to an embodiment of the present application.
- the electronic device includes: a memory 910 and a processor 920 , and instructions that can be executed on the processor 920 are stored in the memory 910 .
- the processor 920 executes the instruction, the fusion positioning method in the foregoing embodiment is implemented.
- the number of the memory 910 and the processor 920 may be one or more.
- the electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
- Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular phones, smart phones, wearable devices, and other similar computing devices.
- the components shown herein, their connections and relationships, and their functions are by way of example only, and are not intended to limit implementations of the application described and/or claimed herein.
- the electronic device may further include a communication interface 930 for communicating with external devices and performing interactive data transmission.
- the various devices are interconnected using different buses and can be mounted on a common motherboard or otherwise as desired.
- the processor 920 may process instructions executed within the electronic device, including instructions stored in or on memory to display graphical information of the GUI on an external input/output device, such as a display device coupled to the interface.
- multiple processors and/or multiple buses may be used together with multiple memories, if desired.
- multiple electronic devices may be connected, each providing some of the necessary operations (eg, as a server array, a group of blade servers, or a multiprocessor system).
- the bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of presentation, only one thick line is shown in FIG. 9, but this does not mean that there is only one bus or one type of bus.
- if the memory 910, the processor 920 and the communication interface 930 are integrated on one chip, they can communicate with each other through an internal interface.
- the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
- a general purpose processor may be a microprocessor or any conventional processor or the like. It should be noted that the processor may be a processor supporting an advanced RISC machine (ARM) architecture.
- Embodiments of the present application provide a computer-readable storage medium (such as the above-mentioned memory 910 ) that stores computer instructions; when the instructions are executed by a processor, the methods provided in the embodiments of the present application are implemented.
- the memory 910 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created during use of the electronic device, etc.
- memory 910 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device.
- memory 910 may optionally include memory located remotely from processor 920, and these remote memories may be connected to the fusion positioning electronic device via a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
- the terms “first” and “second” are used for descriptive purposes only, and should not be construed as indicating or implying relative importance or implying the number of indicated technical features. Thus, a feature delimited with “first” or “second” may expressly or implicitly include at least one of that feature.
- plurality means two or more, unless otherwise expressly and specifically defined.
- Any process or method description in a flowchart, or otherwise described herein, may be understood to represent a module, fragment, or section of code comprising one or more executable instructions for implementing the steps of a specified logical function or process.
- the scope of the preferred embodiments of the present application includes alternative implementations in which the functions may be performed out of the order shown or discussed, including performing the functions substantially concurrently or in the reverse order depending upon the functions involved.
- each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module.
- the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. If the above-mentioned integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
- the storage medium may be a read-only memory, a magnetic disk or an optical disk, and the like.
Abstract
Description
Claims (20)
- 1. A fusion positioning method, characterized in that the method comprises: collecting pictures of the surrounding environment, and obtaining relocation pose information based on the pictures of the surrounding environment; fusing at least one piece of predicted pose information in a predicted pose information queue based on the relocation pose information to obtain fused pose information, wherein the predicted pose information queue contains multiple pieces of predicted pose information; updating the predicted pose information queue based on the fused pose information; and obtaining target positioning pose information based on the updated predicted pose information queue.
- 2. The fusion positioning method according to claim 1, wherein collecting pictures of the surrounding environment and obtaining relocation pose information based on the pictures of the surrounding environment comprises: in the case of initial positioning, collecting a front-view picture; and based on corner points in the front-view picture, obtaining corner-based pose information by matching with a corner point map, and using the corner-based pose information as the relocation pose information.
- 3. The fusion positioning method according to claim 1, wherein collecting pictures of the surrounding environment and obtaining relocation pose information based on the pictures of the surrounding environment comprises: in the case of non-initial positioning, collecting a surround-view picture; and extracting semantic features from the surround-view picture, obtaining semantic-based pose information by matching with a semantic map, and using the semantic-based pose information as the relocation pose information.
- 4. The fusion positioning method according to claim 3, further comprising: judging whether the surround-view picture is valid; in the case that the surround-view picture is judged invalid, collecting a front-view picture; and based on corner points in the front-view picture, obtaining corner-based pose information by matching with a corner point map, and using the corner-based pose information as the relocation pose information.
- 5. The fusion positioning method according to any one of claims 2-4, wherein the method further comprises: obtaining a current plane position; and according to the plane position, obtaining a corner point map and a semantic map within a limited radius centered on the plane position.
- 6. The fusion positioning method according to claim 1, wherein the method further comprises: calculating predicted pose information at multiple different times based on a motion model; and generating the predicted pose information queue based on the predicted pose information at the multiple different times.
- 7. The fusion positioning method according to claim 1, wherein fusing at least one piece of predicted pose information in the predicted pose information queue based on the relocation pose information to obtain fused pose information comprises: according to a relocation time of the relocation pose information, obtaining from the predicted pose information queue at least one piece of predicted pose information before the relocation time as first predicted pose information, and/or at least one piece of predicted pose information after the relocation time as second predicted pose information; and fusing the relocation pose information with the first predicted pose information and/or the second predicted pose information to obtain the fused pose information.
- 8. The method according to claim 7, wherein updating the predicted pose information queue further comprises: updating the other predicted pose information in the predicted pose information queue whose times are after the time corresponding to the fused pose information, to obtain an updated predicted pose information queue.
- 9. The fusion positioning method according to claim 1, wherein obtaining target positioning pose information based on the updated predicted pose information queue comprises: using the most recently generated predicted pose information in the updated predicted pose information queue as the target positioning pose information.
- 10. A fusion positioning device, characterized in that the device comprises: a relocation module, configured to collect pictures of the surrounding environment and obtain relocation pose information based on the pictures of the surrounding environment; a fusion module, configured to fuse at least one piece of predicted pose information in a predicted pose information queue based on the relocation pose information to obtain fused pose information, wherein the predicted pose information queue contains multiple pieces of predicted pose information; an update module, configured to update the predicted pose information queue based on the fused pose information; and a pose determination module, configured to obtain target positioning pose information based on the updated predicted pose information queue.
- 11. The device according to claim 10, wherein the relocation module comprises: a first front-view picture acquisition unit, configured to collect a front-view picture in the case of initial positioning; and a first corner point positioning unit, configured to obtain corner-based pose information based on corner points in the front-view picture by matching with a corner point map, and to use the corner-based pose information as the relocation pose information.
- 12. The device according to claim 10, wherein the relocation module comprises: a surround-view picture acquisition unit, configured to collect a surround-view picture in the case of non-initial positioning; and a semantic positioning unit, configured to extract semantic features from the surround-view picture, obtain semantic-based pose information by matching with a semantic map, and use the semantic-based pose information as the relocation pose information.
- 13. The device according to claim 12, wherein the relocation module further comprises: a judging unit, configured to judge whether the surround-view picture is valid; a second front-view picture acquisition unit, configured to collect a front-view picture in the case that the surround-view picture is judged invalid; and a second corner point positioning unit, configured to obtain corner-based pose information based on corner points in the front-view picture by matching with a corner point map, and to use the corner-based pose information as the relocation pose information.
- 14. The device according to any one of claims 11-13, wherein the device further comprises: a position obtaining module, configured to obtain a current plane position; and a map loading module, configured to obtain, according to the plane position, a corner point map and a semantic map within a limited radius centered on the plane position.
- 15. The device according to claim 10, wherein the device further comprises: calculating predicted pose information at multiple different times based on a motion model; and generating the predicted pose information queue based on the predicted pose information at the multiple different times.
- 16. The device according to claim 10, wherein the fusion module further comprises: a predicted pose information selection unit, configured to obtain from the predicted pose information queue, according to the relocation time of the relocation pose information, at least one piece of predicted pose information before the relocation time as first predicted pose information, and/or at least one piece of predicted pose information after the relocation time as second predicted pose information; and a predicted pose information fusion unit, configured to fuse the relocation pose information with the first predicted pose information and/or the second predicted pose information to obtain the fused pose information.
- 17. The device according to claim 16, wherein the update module further comprises: an update queue unit, configured to update the other predicted pose information in the predicted pose information queue whose times are after the time corresponding to the fused pose information, to obtain an updated predicted pose information queue.
- 18. The device according to claim 10, wherein the pose determination module further comprises: an obtaining unit, configured to use the most recently generated predicted pose information in the updated predicted pose information queue as the target positioning pose information.
- 19. An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-9.
- 20. A computer-readable storage medium storing computer instructions, wherein the computer instructions, when executed by a processor, implement the method according to any one of claims 1-9.
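Claims 6 and 15 above describe generating the predicted pose information queue from a motion model. A minimal sketch under the assumption of a constant-velocity, constant-turn-rate model; the function name and the (time, x, y, yaw) tuple layout are illustrative, not from the patent:

```python
import math

def predict_queue(x, y, yaw, v, omega, dt, n):
    """Generate a queue of predicted poses at n successive time steps
    using a constant-velocity / constant-turn-rate motion model
    (an assumed model; the patent allows any positioning model)."""
    queue = []
    t = 0.0
    for _ in range(n):
        x += v * math.cos(yaw) * dt   # advance position along heading
        y += v * math.sin(yaw) * dt
        yaw += omega * dt             # advance heading by turn rate
        t += dt
        queue.append((t, x, y, yaw))
    return queue
```

Each entry carries its own timestamp, which is what lets the fusion step select predictions before and after the relocation time as in claim 7.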
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011007686.9 | 2020-09-23 | ||
CN202011007686.9A CN112150550B (en) | 2020-09-23 | 2020-09-23 | Fusion positioning method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022062355A1 true WO2022062355A1 (en) | 2022-03-31 |
Family
ID=73897813
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/084792 WO2022062355A1 (en) | 2020-09-23 | 2021-03-31 | Fusion positioning method and apparatus |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112150550B (en) |
WO (1) | WO2022062355A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112150550B (en) * | 2020-09-23 | 2021-07-27 | 华人运通(上海)自动驾驶科技有限公司 | Fusion positioning method and device |
CN114295126B (en) * | 2021-12-20 | 2023-12-26 | 华人运通(上海)自动驾驶科技有限公司 | Fusion positioning method based on inertial measurement unit |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109307508A (en) * | 2018-08-29 | 2019-02-05 | 中国科学院合肥物质科学研究院 | A kind of panorama inertial navigation SLAM method based on more key frames |
CN109945858A (en) * | 2019-03-20 | 2019-06-28 | 浙江零跑科技有限公司 | It parks the multi-sensor fusion localization method of Driving Scene for low speed |
CN110109465A (en) * | 2019-05-29 | 2019-08-09 | 集美大学 | A kind of self-aiming vehicle and the map constructing method based on self-aiming vehicle |
CN110231028A (en) * | 2018-03-05 | 2019-09-13 | 北京京东尚科信息技术有限公司 | Aircraft navigation methods, devices and systems |
CN111220154A (en) * | 2020-01-22 | 2020-06-02 | 北京百度网讯科技有限公司 | Vehicle positioning method, device, equipment and medium |
CN111274974A (en) * | 2020-01-21 | 2020-06-12 | 北京百度网讯科技有限公司 | Positioning element detection method, device, equipment and medium |
CN112150550A (en) * | 2020-09-23 | 2020-12-29 | 华人运通(上海)自动驾驶科技有限公司 | Fusion positioning method and device |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8930023B2 (en) * | 2009-11-06 | 2015-01-06 | Irobot Corporation | Localization by learning of wave-signal distributions |
US10953545B2 (en) * | 2018-08-13 | 2021-03-23 | Beijing Jingdong Shangke Information Technology Co., Ltd. | System and method for autonomous navigation using visual sparse map |
CN110147705B (en) * | 2018-08-28 | 2021-05-04 | 北京初速度科技有限公司 | Vehicle positioning method based on visual perception and electronic equipment |
CN110118554B (en) * | 2019-05-16 | 2021-07-16 | 达闼机器人有限公司 | SLAM method, apparatus, storage medium and device based on visual inertia |
CN110986988B (en) * | 2019-12-20 | 2023-12-08 | 上海有个机器人有限公司 | Track calculation method, medium, terminal and device integrating multi-sensor data |
CN111098335B (en) * | 2019-12-26 | 2021-06-08 | 浙江欣奕华智能科技有限公司 | Method and device for calibrating odometer of double-wheel differential drive robot |
CN111174782B (en) * | 2019-12-31 | 2021-09-17 | 智车优行科技(上海)有限公司 | Pose estimation method and device, electronic equipment and computer readable storage medium |
CN111272165B (en) * | 2020-02-27 | 2020-10-30 | 清华大学 | Intelligent vehicle positioning method based on characteristic point calibration |
CN111536964B (en) * | 2020-07-09 | 2020-11-06 | 浙江大华技术股份有限公司 | Robot positioning method and device, and storage medium |
-
2020
- 2020-09-23 CN CN202011007686.9A patent/CN112150550B/en active Active
-
2021
- 2021-03-31 WO PCT/CN2021/084792 patent/WO2022062355A1/en active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110231028A (en) * | 2018-03-05 | 2019-09-13 | 北京京东尚科信息技术有限公司 | Aircraft navigation methods, devices and systems |
CN109307508A (en) * | 2018-08-29 | 2019-02-05 | 中国科学院合肥物质科学研究院 | A kind of panorama inertial navigation SLAM method based on more key frames |
CN109945858A (en) * | 2019-03-20 | 2019-06-28 | 浙江零跑科技有限公司 | It parks the multi-sensor fusion localization method of Driving Scene for low speed |
CN110109465A (en) * | 2019-05-29 | 2019-08-09 | 集美大学 | A kind of self-aiming vehicle and the map constructing method based on self-aiming vehicle |
CN111274974A (en) * | 2020-01-21 | 2020-06-12 | 北京百度网讯科技有限公司 | Positioning element detection method, device, equipment and medium |
CN111220154A (en) * | 2020-01-22 | 2020-06-02 | 北京百度网讯科技有限公司 | Vehicle positioning method, device, equipment and medium |
CN112150550A (en) * | 2020-09-23 | 2020-12-29 | 华人运通(上海)自动驾驶科技有限公司 | Fusion positioning method and device |
Also Published As
Publication number | Publication date |
---|---|
CN112150550B (en) | 2021-07-27 |
CN112150550A (en) | 2020-12-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109297510B (en) | Relative pose calibration method, device, equipment and medium | |
US11074706B2 (en) | Accommodating depth noise in visual slam using map-point consensus | |
US20210312209A1 (en) | Vehicle information detection method, electronic device and storage medium | |
WO2019062651A1 (en) | Localization and mapping method and system | |
WO2018098811A1 (en) | Localization method and device | |
WO2022062355A1 (en) | Fusion positioning method and apparatus | |
KR20210036317A (en) | Mobile edge computing based visual positioning method and device | |
US11436709B2 (en) | Three-dimensional reconstruction method, electronic device and storage medium | |
CN116255992A (en) | Method and device for simultaneously positioning and mapping | |
CN111462029A (en) | Visual point cloud and high-precision map fusion method and device and electronic equipment | |
CN112086010A (en) | Map generation method, map generation device, map generation equipment and storage medium | |
CN110648363A (en) | Camera posture determining method and device, storage medium and electronic equipment | |
JP7214803B2 (en) | Building positioning method, device, electronic device, storage medium, program, and terminal device | |
CN112950710A (en) | Pose determination method and device, electronic equipment and computer readable storage medium | |
JP2021111404A (en) | Cross-camera obstacle tracking method, device, apparatus, system, and medium | |
EP3716103A2 (en) | Method and apparatus for determining transformation matrix, and non-transitory computer-readable recording medium | |
CN111368927A (en) | Method, device and equipment for processing labeling result and storage medium | |
WO2022252482A1 (en) | Robot, and environment map construction method and apparatus therefor | |
CN113015117B (en) | User positioning method and device, electronic equipment and storage medium | |
JPH07146121A (en) | Recognition method and device for three dimensional position and attitude based on vision | |
CN116295466A (en) | Map generation method, map generation device, electronic device, storage medium and vehicle | |
CN109410304B (en) | Projection determination method, device and equipment | |
CN113126117A (en) | Method for determining absolute scale of SFM map and electronic equipment | |
CN111398961B (en) | Method and apparatus for detecting obstacles | |
CN116012624B (en) | Positioning method, positioning device, electronic equipment, medium and automatic driving equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21870744 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21870744 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.09.2023) |