WO2021103945A1 - Map fusion method and apparatus, device, and storage medium - Google Patents

Map fusion method and apparatus, device, and storage medium

Info

Publication number
WO2021103945A1 (PCT/CN2020/125837)
Authority
WO
WIPO (PCT)
Prior art keywords
map
area
sampling point
fusion
maps
Application number
PCT/CN2020/125837
Other languages
English (en)
French (fr)
Inventor
彭冬炜
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Priority to EP20894727.5A (published as EP4056952A4)
Publication of WO2021103945A1
Priority to US17/824,371 (published as US20220282993A1)

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation specially adapted for navigation in a road network
    • G01C21/28 Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G01C21/34 Route searching; Route guidance
    • G01C21/3446 Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3605 Destination input or retrieval
    • G01C21/3608 Destination input or retrieval using speech input, e.g. using speech recognition
    • G01C21/3664 Details of the user input interface, e.g. buttons, knobs or sliders, including those provided on a touch screen; remote controllers; input using gestures
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3837 Data obtained from a single source
    • G01C21/3863 Structures of map data
    • G01C21/387 Organisation of map data, e.g. version management or database structures
    • G01C21/3874 Structures specially adapted for data searching and retrieval

Definitions

  • The embodiments of the present application relate to electronic technology, and in particular, though not exclusively, to map fusion methods, apparatuses, devices, and storage media.
  • In the related art, environmental maps can be built from visual information, and the need for map fusion inevitably arises in the process of constructing them. However, the amount of computation required for map fusion is very large, resulting in poor real-time performance of map fusion.
  • In view of this, embodiments of the present application provide a map fusion method, apparatus, device, and storage medium, which can improve the real-time performance of map fusion.
  • the technical solutions of the embodiments of the present application are implemented as follows:
  • An embodiment of the present application provides a map fusion method. The method includes: determining a search area of a first map from at least two currently displayed maps according to acquired search guide information, wherein the first map includes a plurality of first sampling points; determining a second map from the maps other than the first map among the at least two maps, and determining the corresponding area of the search area in the second map, wherein the second map includes a plurality of second sampling points; determining, from the second sampling points in the corresponding area, target points that match the attribute information of the first sampling points in the search area, to obtain sampling point matching pairs, each sampling point matching pair including a target point and the first sampling point that matches it; and fusing the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map.
  • In some embodiments, determining a second map from the maps other than the first map among the at least two maps, and determining the corresponding area of the search area in the second map, includes: determining the second map and the corresponding area therein from the maps other than the first map among the at least two maps according to the search guide information.
  • the search guidance information includes the first touch area corresponding to the touch operation
  • Accordingly, determining the search area of the first map from at least two currently displayed maps according to the acquired search guide information includes: displaying the at least two maps in a map display area; receiving a touch operation in the map display area; determining the first map from the at least two maps according to the first touch area corresponding to the touch operation; and determining the search area according to the position of the first touch area on the first map.
  • In some embodiments, determining the search area according to the position of the first touch area on the first map includes: determining the area of the first map corresponding to the first touch area as the search area.
  • In some embodiments, the search guide information includes a second touch area corresponding to the touch operation, and determining the second map from the maps other than the first map among the at least two maps, and determining the corresponding area of the search area in the second map, includes: determining the second map from the maps other than the first map among the at least two maps according to the second touch area corresponding to the touch operation; and determining the corresponding area according to the position of the second touch area on the second map.
  • the search guide information is a voice instruction
  • Accordingly, determining the search area of the first map from at least two currently displayed maps according to the acquired search guide information includes: displaying the at least two maps and labeling information in the map display area, where the labeling information is used to label different display sub-areas; receiving the voice instruction; determining the first map from the at least two maps according to the first display sub-area labeled by the labeling information in the voice instruction; and determining the search area according to the position of the first display sub-area on the first map.
  • the search guidance information is gesture information
  • Accordingly, determining the search area of the first map from at least two currently displayed maps according to the acquired search guide information includes: displaying the at least two maps in the map display area; recognizing gesture information included in a gesture operation; and determining the search area of the first map from the at least two maps according to the gesture information and the map display area.
  • the search guide information is eye feature information
  • Accordingly, determining the search area of the first map from at least two currently displayed maps according to the acquired search guide information includes: displaying the at least two maps in the map display area; acquiring the user's eye feature information; determining, according to the eye feature information, the gaze area of the user's eyes on the map display area; determining the first map from the at least two maps according to the gaze area; and determining the search area according to the position of the gaze area on the first map.
  • In some embodiments, fusing the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map includes: performing preliminary fusion of the first map and the second map to obtain an initial fusion map; and fusing the attribute information of each sampling point matching pair in the initial fusion map into the attribute information of one sampling point, thereby obtaining the target fusion map.
  • In some embodiments, the preliminary fusion of the first map and the second map to obtain the initial fusion map includes: converting the local coordinates of the first sampling points in the first map into a global coordinate system to obtain the initial global coordinates of the first sampling points; converting the local coordinates of the second sampling points in the second map into the global coordinate system to obtain the initial global coordinates of the second sampling points; and combining the initial global coordinates of each first sampling point with the initial global coordinates of each second sampling point to obtain the initial fusion map.
  • In some embodiments, the preliminary fusion of the first map and the second map to obtain an initial fusion map includes: determining the coordinate conversion relationship of the reference coordinate system of the first map relative to the reference coordinate system of the second map; taking the reference coordinate system of the second map as the global coordinate system, and converting the coordinates of each first sampling point in the first map into initial global coordinates according to the coordinate conversion relationship; and fusing the initial global coordinates of each first sampling point into the second map to obtain the initial fusion map.
  • In some embodiments, fusing the attribute information of the sampling point matching pairs in the initial fusion map into the attribute information of one sampling point to obtain the target fusion map includes: optimizing the initial global coordinates of the first sampling point in each sampling point matching pair to obtain target global coordinates; and fusing the global coordinates of each target point in the initial fusion map and the target global coordinates of the matched first sampling point into the global coordinates of one sampling point, to obtain the target fusion map.
  • In some embodiments, optimizing the initial global coordinates of the first sampling point in each sampling point matching pair to obtain the target global coordinates includes: determining the reprojection error of each first sampling point according to its initial global coordinates and the initial global coordinates of the matched target point; and iteratively adjusting the initial global coordinates of each first sampling point in each sampling point matching pair until the reprojection error of each first sampling point is less than or equal to a specific threshold, whereupon the global coordinates of the first sampling point input in the last iteration are determined as the target global coordinates.
  • An embodiment of the present application provides a map fusion apparatus, including: a determining module, configured to determine the search area of the first map from at least two currently displayed maps according to the acquired search guide information, wherein the first map includes a plurality of first sampling points, and to determine a second map from the maps other than the first map among the at least two maps as well as the corresponding area of the search area in the second map, wherein the second map includes a plurality of second sampling points; a matching module, configured to determine, from the second sampling points in the corresponding area, target points that match the attribute information of the first sampling points in the search area, to obtain sampling point matching pairs, each including a target point and the first sampling point that matches it; and a fusion module, configured to fuse the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map.
  • An embodiment of the present application provides an electronic device, including a memory and a processor; the memory stores a computer program that can run on the processor, and when the processor executes the program, the steps of any map fusion method of the embodiments of the present application are implemented.
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, it implements the steps in any of the map fusion methods described in the embodiments of the present application.
  • An embodiment of the present application provides a chip, including a processor configured to call and run a computer program from a memory, so that a device installed with the chip executes the steps of any map fusion method described in the embodiments of the present application.
  • In the embodiments of the present application, the search area of the first map is determined from at least two currently displayed maps according to the acquired search guide information, wherein the first map includes a plurality of first sampling points; a second map is determined from the maps other than the first map among the at least two maps, and the corresponding area of the search area in the second map is determined, wherein the second map includes a plurality of second sampling points. On this basis, target points matching the attribute information of the first sampling points in the search area are searched for only among the second sampling points in a partial area of the second map (that is, the corresponding area). In this way, the search range can be greatly reduced, thereby reducing the amount of computation for finding matching target points and improving the real-time performance of map fusion.
  • FIG. 1 is a schematic diagram of the implementation process of a map fusion method according to an embodiment of this application;
  • FIG. 2 is a schematic diagram of a map display area according to an embodiment of the application;
  • FIG. 3 is a schematic diagram of another map display area according to an embodiment of the application.
  • FIG. 4 is a schematic diagram of another map display area according to an embodiment of the application.
  • FIG. 5 is a schematic diagram of the implementation process of another map fusion method according to an embodiment of the application.
  • FIG. 6 is a schematic structural diagram of a map fusion device according to an embodiment of the application.
  • FIG. 7 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the application.
  • The terms "first/second/third" in the embodiments of this application merely distinguish similar or different objects and do not imply a particular order. Where permitted, "first/second/third" may be interchanged in a specific order or sequence, so that the embodiments described herein can be implemented in an order other than that illustrated or described.
  • The embodiment of the application provides a map fusion method that can be applied to electronic devices with information-processing capabilities, such as mobile phones, tablet computers, notebook computers, desktop computers, robots, and drones.
  • the functions implemented by the map fusion method can be implemented by the processor in the electronic device calling program code.
  • the program code can be stored in a computer storage medium. It can be seen that the electronic device includes at least a processor and a storage medium.
  • FIG. 1 is a schematic diagram of the implementation process of the map fusion method according to the embodiment of the application. As shown in FIG. 1, the method at least includes the following steps 101 to 105:
  • Step 101 Determine a search area of a first map from at least two currently displayed maps according to the acquired search guide information; wherein, the first map includes a plurality of first sampling points.
  • the search guide information is used at least to guide the electronic device to determine the first map to be fused from the multiple maps currently displayed, and to determine the search area of the first map. In some embodiments, the search guide information is also used to guide the electronic device to determine the second map to be fused from the remaining maps, and determine the area in the second map corresponding to the search area.
  • the search guide information is determined by the electronic device according to an instruction input by the user.
  • the user inputs instructions such as touch operation, voice instruction, gesture operation, or eye operation in the electronic device, and the electronic device obtains corresponding search guidance information based on the received input instruction.
  • the search guide information can be diverse.
  • the search guide information may be one of the following information: the first touch area and/or the second touch area corresponding to the touch operation, voice information, gesture information, and eye feature information.
  • the manner in which the electronic device obtains the at least two maps is not limited.
  • the electronic device can collect image data in different physical spaces to generate a map corresponding to the physical space; the electronic device can also download the generated map from other electronic devices.
  • the at least two maps are map 1, map 2, and map 3.
  • map 1 is three-dimensional point cloud data generated by the electronic device by collecting image data in physical space 1
  • the map 2 is the three-dimensional point cloud data generated by the electronic device 2 by collecting image data in the physical space 2
  • the map 3 is the three-dimensional point cloud data generated by the electronic device 3 by collecting image data in the physical space 3.
    | Map   | Equipment used to generate the map | Physical space corresponding to the map |
    |-------|------------------------------------|-----------------------------------------|
    | Map 1 | Said electronic device             | Physical space 1                        |
    | Map 2 | Electronic device 2                | Physical space 2                        |
    | Map 3 | Electronic device 3                | Physical space 3                        |
  • In this way, the electronic device can obtain map 2 and map 3 from electronic devices 2 and 3, respectively, determine the first map to be fused from maps 1 to 3 according to the obtained search guide information, and determine the search area of the first map.
  • Step 102 Determine a second map from the at least two maps other than the first map.
  • For example, the electronic device may automatically take the remaining map as the second map by default, and merge the two maps.
  • That is, the electronic device can determine the second map automatically or according to the search guide information. An automatic method is, for example: determining the sampling point matching pairs between each map other than the first map and the search area of the first map, and then determining the map with the largest number of sampling point matching pairs as the second map.
  • For example, the at least two maps are map 1, map 2, and map 3, where map 1 is the first map. Assume that the search area of the first map contains 1000 first sampling points, map 2 contains 10^5 sampling points, and map 3 contains 10^7 sampling points; then it is necessary to determine the Euclidean distance between the i-th of the 1000 first sampling points and each sampling point in map 2, and, based on each Euclidean distance, determine the sampling point that matches the i-th first sampling point (and likewise for map 3), as quantified below.
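To put the cost of this automatic method in numbers (a back-of-the-envelope count, assuming one distance evaluation per candidate pair and no spatial index), exhaustively matching the 1000 search-area points requires

$$1000 \times 10^{5} = 10^{8} \ \text{distance evaluations against map 2}, \qquad 1000 \times 10^{7} = 10^{10} \ \text{against map 3}.$$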
  • In contrast, the method of determining the second map based on the search guide information is relatively simple. For example, when the search guide information includes the first touch area corresponding to the touch operation, the electronic device determines the search area according to the position of the first touch area on the first map.
  • Step 103 Determine a corresponding area of the search area in the second map; wherein, the second map includes a plurality of second sampling points.
  • the corresponding area is an area in the second map that is similar to or the same as the search area in the first map.
  • the electronic device may determine the corresponding area according to the sampling point matching pair set between the search area and the second map. For example, the sampling point in the second map that matches the first sampling point in the search area is determined as the sampling point in the corresponding area.
  • the electronic device may determine the corresponding area according to the search guidance information. For example, when the search guidance information includes the second touch area corresponding to the touch operation, the electronic device determines the corresponding area according to the position of the second touch area on the second map.
  • Step 104 From the second sampling points in the corresponding area, determine a target point that matches the attribute information of the first sampling point in the search area to obtain a sampling point matching pair, and the sampling point matching pair includes The target point and a first sampling point matching the target point.
  • the matched target point is the second sampling point that has the same or similar attribute information as the first sampling point.
  • For example, the target point is a second sampling point in the corresponding area whose Euclidean distance to the attribute information of the first sampling point is less than a specific threshold, or the second sampling point with the smallest such distance.
  • the attribute information includes at least coordinate information; in some embodiments, the attribute information may also include image feature information.
  • Step 105 Perform fusion on the first map and the second map according to the obtained matching pair of sampling points to obtain a target fusion map.
  • The fusion of two maps essentially merges each pair of matching sampling points between them (that is, two sampling points with the same or similar attributes) into one point, while the remaining points are simply merged after being unified into the same coordinate system.
  • the first sampling point and the target point matching the first sampling point are called a sampling point matching pair.
  • The obtained target fusion map includes the attribute information of each matching pair fused into one point, together with the attribute information of points A101 to A10001 and B101 to B100001 after unification into the same coordinate system.
  • The attribute information of the target point is fused with the attribute information of the matched first sampling point. Taking attribute information that includes three-dimensional coordinates as an example, after transforming the three-dimensional coordinates of two matching points into the same coordinate system, the mean or median of the coordinates is taken as the coordinates of the merged point. For example, if the corresponding coordinates are $A(x_a, y_a, z_a)$ and $B(x_b, y_b, z_b)$, then after fusing into one point with the mean, the coordinates of that point are $\left(\frac{x_a + x_b}{2}, \frac{y_a + y_b}{2}, \frac{z_a + z_b}{2}\right)$.
  • In the embodiments of the present application, the search area of the first map is first determined according to the obtained search guide information; then matching sampling points are searched for only between a partial area of the first map (that is, the search area) and a partial area of the second map (that is, the corresponding area). In this way:
  • the method can run not only on devices such as servers or desktop computers, but also on mobile devices such as mobile phones or tablets with small processors;
  • map fusion is applicable not only to AR scenarios where multiple people build maps together, but also to map construction and fusion in areas where Internet access is prohibited or in privacy-sensitive areas;
  • the map fusion process can be performed not only during offline map creation but also during online positioning;
  • since the electronic device obtains search guide information input by the user, two of the currently displayed maps are known to be mergeable, so the fusion process will not fail. If search guide information were not introduced, that is, if the electronic device automatically determined two maps with overlapping areas from the at least two maps, it would have to go through an automatic search process regardless of whether two mergeable maps exist among the displayed maps; in the absence of two mergeable maps, this would incur unnecessary computational overhead.
  • When a second map is determined, and the corresponding area of the search area in the second map is determined, the electronic device may determine the corresponding area in the second map from the maps other than the first map among the at least two maps according to the search guide information.
  • In the related art, the electronic device automatically determines two maps with overlapping areas (that is, the first map and the second map). This process of merging overlapping map areas is very complicated, especially when determining which two maps can be merged, because the electronic device first needs to match the maps pairwise; in the matching process it must calculate the Euclidean distance between each sampling point in one map and each sampling point in the other map, which is expensive.
  • In the embodiments of the present application, by contrast, the electronic device directly determines the second map to be fused from the maps other than the first map among the at least two maps based on the search guide information, and further determines the corresponding area on the second map.
  • In this way, the amount of computation for finding the second map to be fused can be greatly reduced, as can the amount of computation for determining the sampling point matching pairs. Moreover, since the electronic device obtains search guide information input by the user, two mergeable maps are known to exist among the currently displayed maps, so the fusion process will not fail or incur unnecessary computational overhead.
  • the at least two maps are the map A, the map B, the map C, and the map D described in the foregoing example.
  • According to the search guidance information, it can be directly determined that the first map and the second map to be fused are map B and map D, respectively, and the search area on map B and the corresponding area on map D can also be determined.
  • Assuming the number of sampling points in the search area on map B is 10^3 and the number in the corresponding area on map D is 1.5×10^3, the electronic device only needs to compute 10^3 × 1.5×10^3 = 1.5×10^6 Euclidean distances. Compared with the above method of automatically determining two maps with overlapping regions, this is a very small amount of computation; the computational cost of map fusion is thus greatly reduced, effectively improving its real-time performance.
  • the search guide information may be the first touch area acquired based on the touch operation input by the user in the electronic device; based on this, an embodiment of the present application provides a map fusion method.
  • the method may include the following steps 201 to 208:
  • Step 201 Display at least two maps in the map display area.
  • the at least two maps may be generated by the same electronic device or different electronic devices. For example, multiple electronic devices construct maps in different physical spaces in parallel. At least two maps are displayed in the map display area to enable the user to point out which areas of the two maps overlap or are similar.
  • Step 202 Receive a touch operation in the map display area.
  • the type of touch operation is not limited, and the type of touch operation may be various.
  • For example, the touch operation can be a single-finger touch, multi-finger touch, single-finger swipe, two-finger pinch, or two-finger spread; the user can perform these touch operations in the display area of a certain map to tell the electronic device which area of that map is the search area.
  • the user can also pinch two fingers or touch multiple fingers on the map display area to inform the electronic device which two maps are to be merged.
  • Step 203 Determine the first map from the at least two maps according to the first touch area corresponding to the touch operation;
  • Step 204 Determine the search area according to the position of the first touch area on the first map.
  • For example, map 21, map 22, and map 23 are displayed in the map display area 20 of the electronic device. When the user touches areas 211 and 212 of map 21 with two fingers, the electronic device can determine from the touch areas that the map receiving the touch operation is map 21; map 21 is therefore determined as the first map to be fused, and the two touched areas are determined as the search areas of the first map (see the hit-test sketch below).
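As an illustration only (not the patent's implementation), the mapping from a touch area to the first map and search area can be pictured as a simple rectangle hit test; the `Rect` type and map layout below are assumptions:

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class Rect:
    """Axis-aligned rectangle in screen coordinates."""
    x: float
    y: float
    w: float
    h: float

    def intersect(self, other: "Rect") -> Optional["Rect"]:
        # Overlap of two rectangles, or None if they are disjoint.
        x1, y1 = max(self.x, other.x), max(self.y, other.y)
        x2 = min(self.x + self.w, other.x + other.w)
        y2 = min(self.y + self.h, other.y + other.h)
        return Rect(x1, y1, x2 - x1, y2 - y1) if x2 > x1 and y2 > y1 else None

def locate_search_area(touch_area: Rect,
                       map_layout: Dict[str, Rect]) -> Tuple[Optional[str], Optional[Rect]]:
    """Return the displayed map hit by the touch area (the first map)
    and the touched region within it (the search area)."""
    for map_id, display_rect in map_layout.items():
        overlap = touch_area.intersect(display_rect)
        if overlap is not None:
            return map_id, overlap
    return None, None
```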
  • Step 205 Determine a second map from maps other than the first map among the at least two maps;
  • Step 206 Determine a corresponding area of the search area in the second map; wherein, the second map includes a plurality of second sampling points.
  • the electronic device can automatically determine the second map and the corresponding area in the second map, and can also determine the second map according to the search guide information, and then determine the corresponding area in the second map.
  • In other embodiments, the search guide information may also include a second touch area corresponding to a touch operation. When the electronic device implements step 205 and step 206, it may determine the second map from the maps other than the first map among the at least two maps according to the second touch area corresponding to the touch operation, and determine the corresponding area according to the position of the second touch area on the second map.
  • For example, map 31, map 32, and map 33 are displayed in the map display area 30 of the electronic device. When the user pinches area 311 of map 31 and area 331 of map 33 with two fingers, the electronic device can determine from the touch areas that the maps receiving the touch operation are map 31 and map 33; these two maps are therefore determined as the first map and the second map to be fused, respectively. Area 311 is determined as the search area according to the position of the first touch area on map 31, and area 331 is determined as the corresponding area according to the position of the second touch area on map 33.
  • Step 207 From the second sampling points in the corresponding area, determine a target point that matches the attribute information of the first sampling point in the search area to obtain a sampling point matching pair, and the sampling point matching pair includes The target point and a first sampling point matching the target point.
  • For example, the electronic device determines the similarity between the attribute information of the i-th first sampling point in the search area and the attribute information of each second sampling point in the corresponding area to obtain a similarity set, where i is an integer greater than 0; a second sampling point whose similarity in the similarity set satisfies a specific condition is determined as the target point (see the sketch below).
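A minimal sketch of this per-point matching step, assuming attribute vectors that concatenate coordinates with descriptor features and cosine similarity as the "specific condition" (both illustrative choices, not mandated by the text):

```python
import numpy as np

def find_target_point(first_attr: np.ndarray,
                      second_attrs: np.ndarray,
                      threshold: float = 0.9):
    """first_attr: (d,) attribute vector of the i-th first sampling point.
    second_attrs: (n, d) attribute vectors of the second sampling points in
    the corresponding area. Returns the index of the target point, or None."""
    a = first_attr / np.linalg.norm(first_attr)
    b = second_attrs / np.linalg.norm(second_attrs, axis=1, keepdims=True)
    similarities = b @ a                      # the "similarity set"
    best = int(np.argmax(similarities))
    # The "specific condition": the best similarity must exceed a threshold.
    return best if similarities[best] >= threshold else None
```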
  • Searching for sampling point matching pairs through step 207 has two advantages. On the one hand, only the similarity between the sampling points in the search area of the first map and each sampling point in the corresponding area needs to be computed, without traversing every sampling point in the first map, which reduces the amount of computation for finding matching pairs. On the other hand, only the similarity with each sampling point in the corresponding area of the second map needs to be determined, without traversing every sampling point in the second map, which reduces the computation further.
  • In step 208, the first map and the second map are fused according to the obtained sampling point matching pairs to obtain a target fusion map.
  • In the embodiment of the present application, one method of obtaining search guide information is provided: the first touch area corresponding to a touch operation input by the user on the electronic device is obtained, and from it the first map and the search area to be fused are determined. Compared with other acquisition methods, this one is simple to implement: once the touch area corresponding to the touch operation is detected, the first map and the search area on it can be determined, so the processing complexity of map fusion can be further reduced and its processing efficiency improved.
  • the search guide information may be a voice command input by the user.
  • an embodiment of the present application provides a map fusion method. The method may include the following steps 301 to 308:
  • Step 301 Display the at least two maps and label information in a map display area, where the label information is used to label different display sub-areas.
  • the electronic device may divide the map display area into multiple grids in advance, and each grid is a display sub-area.
  • the label information may be coordinate information of a key sampling point on the map area covered by the grid, and each grid corresponds to at least one piece of label information.
  • The labeling information can also be a custom grid identifier, where each grid identifier corresponds to the map area covered by its grid; for example, the grid identifier may be the grid's location information in the map display area.
  • Step 302 Receive the voice instruction
  • Step 303 Determine the first map from the at least two maps according to the first display subregion marked by the marking information in the voice instruction.
  • the voice command may carry one or more different annotation information.
  • Step 304 Determine the search area according to the position of the first display sub-area on the first map
  • Step 305 Determine a second map from maps other than the first map among the at least two maps;
  • Step 306 Determine a corresponding area of the search area in the second map; wherein, the second map includes a plurality of second sampling points.
  • the electronic device can automatically determine the second map and the corresponding area of the search area in the second map, and can also determine the second map and the corresponding area on the second map according to multiple annotation information carried by the voice command. Corresponding area.
  • For example, the electronic device may determine the second map from the other maps according to the second display sub-area labeled by the annotation information in the voice instruction, and determine the corresponding area according to the position of the second display sub-area on the second map.
  • the user can input a voice command to direct the electronic device to perform map fusion according to the corresponding relationship between the displayed at least two maps and the labeled information.
  • For example, map 41, map 42, and map 43 are displayed in the map display area, together with the horizontal and vertical axes of the grid. The user can use the coordinate values on the horizontal and vertical axes to indicate which areas of the maps are similar.
  • For example, the voice command is "grid (6, 2), grid (14, 2), and grid (15, 2) are overlapping areas".
  • The electronic device recognizes the label information carried in the voice command: according to the grid area labeled by grid (6, 2), it determines that the corresponding map 41 is the first map to be fused, and determines the search area from the position of grid (6, 2) on map 41; according to the grid areas labeled by grid (14, 2) and grid (15, 2), it determines that the corresponding map 43 is the second map to be fused, and determines the corresponding area from the positions of grid (14, 2) and grid (15, 2) on map 43 (a lookup-table sketch follows).
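The grid annotation scheme can be pictured as a lookup table from grid cells to (map, region) pairs. The sketch below is hypothetical; the cell coordinates follow the example command, and the map and region identifiers are invented for illustration:

```python
# Hypothetical grid-to-map index: each labeled cell maps to the displayed
# map it covers and the map region lying under that cell.
grid_index = {
    (6, 2):  ("map41", "region_a"),
    (14, 2): ("map43", "region_b1"),
    (15, 2): ("map43", "region_b2"),
}

def resolve_voice_command(cells):
    """cells: grid coordinates parsed from the voice command.
    Groups the referenced regions by map; the first map named gives the
    search area, the second gives the corresponding area."""
    by_map = {}
    for cell in cells:
        map_id, region = grid_index[cell]
        by_map.setdefault(map_id, []).append(region)
    return by_map

print(resolve_voice_command([(6, 2), (14, 2), (15, 2)]))
# {'map41': ['region_a'], 'map43': ['region_b1', 'region_b2']}
```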
  • Step 307 From the second sampling points in the corresponding area, determine a target point that matches the attribute information of the first sampling point in the search area to obtain a sampling point matching pair, the sampling point matching pair including The target point and a first sampling point matching the target point;
  • In step 308, the first map and the second map are fused according to the obtained sampling point matching pairs to obtain a target fusion map.
  • In the embodiment of the present application, another method of obtaining search guide information is provided: the first map and its search area are determined based on the annotation information carried in a voice instruction input by the user. Compared with other acquisition methods, this one can guide the electronic device to perform map fusion without any touch operation; the map fusion method can thus be applied to electronic devices without touch screens, or whose touch screens are faulty, which expands the application scenarios of map fusion and reduces its implementation cost.
  • the search guidance information may be gesture information determined based on the user's gesture operation.
  • an embodiment of the present application provides a map fusion method, and the method may include the following steps 401 to 407:
  • Step 401 Display the at least two maps in a map display area
  • Step 402 Recognize the gesture information included in the gesture operation
  • the user can point two fingers to different areas of the two maps respectively, thereby indicating the search area of the first map and the corresponding area of the second map to be fused;
  • Step 403 Determine a search area of the first map from the at least two maps according to the gesture information and the map display area.
  • For example, the electronic device may recognize where the gesture operation points on the map display area, determine the first map from the at least two maps according to the gesture pointing, and determine the search area according to the position on the first map to which the gesture points.
  • Step 404 Determine a second map from the maps other than the first map among the at least two maps;
  • Step 405 Determine a corresponding area of the search area in the second map; wherein the second map includes a plurality of second sampling points.
  • In some embodiments, the electronic device can automatically determine the second map and the corresponding area in it, or determine the corresponding area on the second map according to the gesture information. For example, the electronic device recognizes that the gesture information includes a first finger pointing and a second finger pointing, and determines the search area and the corresponding area according to the display areas to which the two pointings correspond.
  • Step 406 From the second sampling points in the corresponding area, determine a target point that matches the attribute information of the first sampling point in the search area to obtain a sampling point matching pair, and the sampling point matching pair includes The target point and a first sampling point matching the target point;
  • In step 407, the first map and the second map are fused according to the obtained sampling point matching pairs to obtain a target fusion map.
  • In the embodiment of the present application, another method of obtaining search guide information is provided: gesture information is obtained from a gesture operation input by the user, and the first map and its search area are determined from the gesture information. Compared with other acquisition methods, the electronic device can be guided to perform map fusion without touch operations or voice instructions. On the one hand, the map fusion method can therefore be applied to devices that have neither a touch screen nor a voice recognition device; on the other hand, it can also be applied to electronic devices whose touch screen and/or voice recognition device is faulty, so that such a device can continue map construction and fusion even after those components fail, improving the reliability of map fusion.
  • the search guidance information may be the eye feature information obtained by the electronic device when the user is gazing at the map display area.
  • an embodiment of the present application provides a map fusion method. The method may include the following steps 501 to step 509:
  • Step 501 Display the at least two maps in a map display area
  • Step 502 Obtain the user's eye feature information.
  • the eye feature information may include the viewing direction and the length of time the user's eyeball stops rotating in the viewing direction.
  • Step 503 Determine the gaze area of the user's eyeball on the map display area according to the eye feature information.
  • For example, the electronic device may use a gaze tracker to detect the rotation and viewing direction of the user's eyeballs, and determine whether the length of time the user's eyeballs stop rotating in a viewing direction exceeds a specific threshold; if so, the sub-area of the map display area corresponding to that viewing direction is determined as the gaze area (a schematic sketch follows).
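A schematic of this dwell test; the `tracker.read()` call, the dwell threshold, the sampling rate, and the direction-to-subarea mapping are all assumptions for illustration:

```python
import time

DWELL_THRESHOLD_S = 0.8   # assumed dwell time before a region counts as gazed at

def detect_gaze_area(tracker, display_subareas):
    """tracker.read() is a hypothetical call returning the current viewing
    direction; display_subareas maps a direction to a display sub-area.
    Returns the sub-area the user dwells on, i.e. the gaze area."""
    last_dir, dwell_start = None, None
    while True:
        direction = tracker.read()
        if direction == last_dir:
            if dwell_start and time.time() - dwell_start >= DWELL_THRESHOLD_S:
                return display_subareas[direction]
        else:
            # Viewing direction changed: restart the dwell timer.
            last_dir, dwell_start = direction, time.time()
        time.sleep(0.02)   # ~50 Hz sampling, an assumed rate
```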
  • Step 504 Determine the first map from the at least two maps according to the gaze area
  • Step 505 Determine the search area according to the position of the gaze area on the first map
  • Step 506 Determine a second map from the maps other than the first map among the at least two maps;
  • Step 507 Determine a corresponding area of the search area in the second map; wherein, the second map includes a plurality of second sampling points.
  • In some embodiments, the electronic device can automatically determine the second map and the corresponding area on it, or determine them according to another viewing direction contained in the eye feature information and the length of time the user's eyeballs stop rotating in that direction.
  • Step 508 From the second sampling points in the corresponding area, determine a target point that matches the attribute information of the first sampling point in the search area to obtain a sampling point matching pair, and the sampling point matching pair includes The target point and a first sampling point matching the target point;
  • In step 509, the first map and the second map are fused according to the obtained sampling point matching pairs to obtain a target fusion map.
  • In the embodiment of the present application, yet another method of obtaining search guide information is provided: the first map and its search area are determined based on the user's eye feature information. Compared with other acquisition methods, the user need not perform touch operations or input voice commands or gestures to guide the electronic device; in this way, the map fusion method can be applied to electronic devices equipped with gaze trackers, further expanding the application scenarios of map fusion.
  • the electronic device may perform map fusion according to any of the above-mentioned methods of acquiring search guidance information.
  • the user can choose which input method to use to guide the electronic device for map fusion. For example, the user chooses to guide the electronic device to determine the search area on the first map and the corresponding area on the second map through touch operations, voice commands, gesture operations, or eye operations.
  • the embodiment of the present application further provides a map fusion method, and the method may include the following steps 601 to 606:
  • Step 601 Determine a search area of a first map from at least two currently displayed maps according to the acquired search guide information; wherein, the first map includes a plurality of first sampling points;
  • Step 602 Determine a second map from the maps other than the first map among the at least two maps;
  • Step 603 Determine a corresponding area of the search area in the second map; wherein, the second map includes a plurality of second sampling points;
  • Step 604 From the second sampling points in the corresponding area, determine a target point that matches the attribute information of the first sampling point in the search area to obtain a sampling point matching pair, and the sampling point matching pair includes The target point and a first sampling point matching the target point;
  • Step 605 Perform preliminary fusion of the first map and the second map to obtain an initial fusion map.
  • The so-called preliminary fusion (for example, steps 705 to 707 in the following embodiment) generally unifies the coordinates of the sampling points in the first map and those in the second map into one coordinate system and then performs a simple merge to obtain their union, that is, the initial fusion map.
  • In step 606, the attribute information of each sampling point matching pair in the initial fusion map is fused into the attribute information of one sampling point, so as to obtain the target fusion map.
  • the coordinates in the attribute information of the target point in the initial fusion map are converted coordinates, and the coordinates in the attribute information of the first sampling point are also converted coordinates.
  • the embodiment of the present application further provides a map fusion method, and the method may include the following steps 701 to 709:
  • Step 701 Determine a search area of a first map from at least two currently displayed maps according to the acquired search guide information; wherein, the first map includes a plurality of first sampling points;
  • Step 702 Determine a second map from the maps other than the first map among the at least two maps;
  • Step 703 Determine a corresponding area of the search area in the second map; wherein, the second map includes a plurality of second sampling points;
  • Step 704 From the second sampling points in the corresponding area, determine a target point that matches the attribute information of the first sampling point in the search area to obtain a sampling point matching pair, and the sampling point matching pair includes The target point and a first sampling point matching the target point;
  • Step 705 Convert the local coordinates of the first sampling point in the first map to a global coordinate system to obtain the initial global coordinates of the first sampling point;
  • Step 706 Convert the local coordinates of the second sampling point in the second map to the global coordinate system to obtain the initial global coordinates of the second sampling point;
  • Step 707 Combine the initial global coordinates of each of the first sampling points and the initial global coordinates of each of the second sampling points to obtain the initial fusion map.
  • In other embodiments, the electronic device may also obtain the initial fusion map in the following manner: determine the coordinate conversion relationship between the reference coordinate system of the first map and the reference coordinate system of the second map; take the reference coordinate system of the second map as the global coordinate system, and convert the coordinates of each first sampling point in the first map into initial global coordinates according to the coordinate conversion relationship; then add the initial global coordinates of each first sampling point to the second map to obtain the initial fusion map (see the sketch below).
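A minimal numeric sketch of this preliminary fusion, assuming the coordinate conversion relationship is a known rigid transform (rotation R and translation t) taking the first map's reference frame into the second map's frame:

```python
import numpy as np

def preliminary_fusion(points_map1: np.ndarray,
                       points_map2: np.ndarray,
                       R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """points_map1: (n, 3) local coordinates of the first map's sampling points.
    points_map2: (m, 3) coordinates of the second map's sampling points.
    R (3x3) and t (3,) express map 1's reference frame in map 2's frame,
    which serves as the global coordinate system."""
    map1_global = points_map1 @ R.T + t   # first map into the global frame
    # Second-map points already live in the global frame; take the union.
    return np.vstack([map1_global, points_map2])   # initial fusion map
```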
  • Step 708 Optimize the initial global coordinates of the first sampling point in each sampling point matching pair to obtain the target global coordinates.
  • For example, the reprojection error of each first sampling point is determined according to its initial global coordinates and the initial global coordinates of the matched target point; the initial global coordinates of each first sampling point in each sampling point matching pair are adjusted iteratively until the reprojection error of each first sampling point is less than or equal to a specific threshold, and the global coordinates of the first sampling point input in the last iteration are then determined as the target global coordinates. One standard way to write the objective is given below.
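As general background (the patent does not spell out the formula), the bundle-adjustment-style objective driven below the threshold can be written as

$$E = \sum_{i} \left\| \pi\!\left(T\,X_i\right) - x_i \right\|^{2},$$

where \(X_i\) is the adjusted global coordinate of the \(i\)-th first sampling point, \(x_i\) the observation tied to its matched target point, \(T\) the relevant camera pose, and \(\pi(\cdot)\) the projection function; iteration stops once each residual falls to or below the specific threshold.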
  • In implementation, the electronic device can iterate with the Levenberg-Marquardt (LM) algorithm to determine the optimal target global coordinates.
  • In step 709, the global coordinates of each target point in the initial fusion map and the target global coordinates of the matched first sampling point are respectively fused into the global coordinates of one sampling point, thereby obtaining the target fusion map.
  • For example, sampling point 1 and sampling point 2 form a sampling point matching pair, and the electronic device may fuse the global coordinates of these two points into the global coordinates of one sampling point (called sampling point 12); sampling point 3 and sampling point 4 form a matching pair fused into sampling point 34; and sampling point 5 and sampling point 6 form a matching pair fused into sampling point 56. The global coordinates of sampling points 12, 34, and 56 are then added to the initial fusion map, and sampling points 1 to 6 are removed from it, yielding the target fusion map (see the sketch below).
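Sketching this final merge step (the pair list, mean fusion, and array layout are assumptions):

```python
import numpy as np

def fuse_matched_pairs(points: np.ndarray, pairs) -> np.ndarray:
    """points: (n, 3) global coordinates of the initial fusion map.
    pairs: list of (i, j) index pairs, each a sampling point matching pair.
    Each pair is fused into one point (here: the coordinate mean) and the
    originals are removed, as with sampling points 1-6 above."""
    fused = [points[[i, j]].mean(axis=0) for i, j in pairs]
    matched = {k for pair in pairs for k in pair}
    keep = [p for k, p in enumerate(points) if k not in matched]
    return np.vstack(keep + fused)   # target fusion map
```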
  • In the embodiment of the present application, the initial global coordinates of the first sampling point matched to each target point are optimized to obtain target global coordinates, and the global coordinates of the target point and the target global coordinates of the matched first sampling point are then fused into the global coordinates of one sampling point. In this way, the reprojection error of the first sampling points can be reduced, thereby reducing the staggered offset that appears at the seam after map fusion.
  • In related art, a device collects observation data of the surrounding environment (such as captured images or point cloud data) through sensors, tracks and matches the collected observations, triangulates according to the relationships between them, generates a spatial three-dimensional point cloud, and builds the map incrementally.
  • One way of map fusion is to extract the key frames of multiple maps, match the key frames, associate two maps whose number of matched frames exceeds a threshold, and merge them into one map.
  • the embodiment of the present application provides a solution for multi-map fusion, which can simultaneously create maps on multiple mobile terminals, and can perform map fusion on any mobile terminal that has successfully created maps. Moreover, the error of map fusion is reduced through user guidance and map optimization.
  • the specific implementation scheme is shown in Fig. 5, which may include the following steps S1 to S5:
  • Step S1 Use simultaneous localization and mapping (SLAM) technology to create maps.
  • the SLAM module extracts ORB (Oriented FAST and Rotated Brief) features from the video frame sequence obtained by the camera, and then performs feature matching, tracking, and triangulation to generate a three-dimensional point cloud.
  • the mobile terminal selects some frames from the video frame sequence as key frames, and these key frames are snapshots of real scenes in different poses.
  • The key frames contain the pose information and the observation relationships with the map point cloud. These key frames constitute the vertices of the pose graph, the connections between them constitute its edges, and the number of map points co-visible to two key frames is the weight of the corresponding edge.
  • the ORB feature combines the detection method of FAST feature points with the BRIEF feature descriptor, and makes improvements and optimizations on their original basis.
  • The ORB algorithm uses the moment method to determine the direction of FAST feature points: the centroid of the patch within radius r of the feature point is computed from its moments, and the vector from the feature point's coordinates to the centroid is taken as the feature point's direction, as written out below.
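Written out (the standard ORB formulation, consistent with the description above), the patch moments, centroid, and orientation are

$$m_{pq} = \sum_{x,y} x^{p} y^{q}\, I(x,y), \qquad C = \left(\frac{m_{10}}{m_{00}},\, \frac{m_{01}}{m_{00}}\right), \qquad \theta = \operatorname{atan2}(m_{01},\, m_{10}),$$

where \(I(x,y)\) is the pixel intensity inside the radius-\(r\) patch around the FAST corner, \(C\) the centroid, and \(\theta\) the direction assigned to the feature point.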
  • because image deformation is not modeled, the matching process is sensitive to motion blur and to camera rotation, so requirements on the user's motion state are stricter during initialization.
  • the pose of the previous image guides the data association process. It helps extract the visible sub-map from the current map, reducing the computational overhead of blindly projecting the entire map; it also provides a prior for the current image pose, so feature matching searches only a small area rather than the whole image. The matching relationship between local map points and the current feature points is then established.
  • Triangulation: the purpose of triangulation is to solve for the three-dimensional coordinates of the spatial point corresponding to the image frames. Triangulation was first proposed by Gauss and used in surveying. Put simply: observe the same 3D point P(x, y, z) from different positions; given its known 2D projections X1(x1, y1) and X2(x2, y2) at those positions, the triangle relationship recovers the depth information z of the 3D point.
  • Step S2: Map upload and download.
  • Each device has a device ID, and multiple mobile devices can be connected through their device IDs.
  • the user can trigger the download of a specific device's map, or upload the local map to any device.
  • Step S3: The user guides the setting of the map fusion location.
  • user A's mobile terminal generates map-A through mapping and, after step S2, downloads the map map-B generated by user B's mobile terminal. map-A and map-B can be visualized on the mobile terminal. By tapping the junction of the maps, user A anchors map-A and map-B together to form a "rough" fusion model.
  • Step S4: Map fusion.
  • after applying the coordinate transformation parameters of the two maps, the two maps can be unified under the same coordinate system. The process of fusing the two maps is then as follows:
  • the key to map fusion is to associate the key frames of the new (old) map with the map points of the old (new) map.
  • the matching points between the two maps have already been obtained by search and matching, and the old map's points replace the new map's points.
  • accordingly, the key frames that were associated with the new-map points of these matching points are re-associated with the old-map points of the matching points, thereby achieving map fusion.
  • subsequent tracking and mapping then use the key frames and map point information of both maps for optimization.
  • Step S5: Bundle adjustment (BA) optimization.
  • BA is essentially an optimization model whose objective is to minimize reprojection error; optimizing with BA reduces the staggered offset at the seam after map fusion.
  • BA optimization mainly uses the LM algorithm and, on that basis, exploits the sparse structure of the BA model in its computations.
  • the LM algorithm combines the steepest descent method (gradient descent) with the Gauss-Newton method.
  • gradient descent iterates along the negative gradient direction to find the variable values that minimize the function.
  • the embodiments of the present application provide a multi-map fusion solution that can create maps on multiple mobile terminals simultaneously and perform map fusion on any mobile terminal that has successfully created a map.
  • this approach improves the efficiency and real-time performance of map creation; in addition, a local BA method at the fusion seam improves the accuracy of map fusion, so no staggered offset appears there.
  • user guidance information is introduced to support manually setting the initial fusion position, which greatly improves the success rate of model fusion.
  • the embodiments of the present application thus address mapping efficiency, real-time performance, and map fusion accuracy at the same time.
  • multiple mobile devices build maps simultaneously, improving mapping efficiency and real-time performance; the local BA method at the fusion seam improves the accuracy of map fusion; user guidance information lets users manually set the fusion location, improving the success rate of map fusion.
  • mapping and map fusion can also be performed on a cloud or edge computing server; this involves (1) building the cloud or edge computing server, (2) real-time mapping and fusion, and (3) interaction between mobile data and cloud data.
  • the embodiment of the present application provides a map fusion device whose modules can be implemented by a processor in an electronic device, or of course by specific logic circuits;
  • the processor can be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA).
  • Fig. 6 is a schematic structural diagram of a map fusion device according to an embodiment of this application.
  • the map fusion device 600 includes a determination module 601, a matching module 602, and a fusion module 603, wherein:
  • the determination module 601 is configured to determine a search area of a first map from at least two currently displayed maps according to acquired search guidance information, where the first map includes a plurality of first sampling points; and to determine a second map from the maps other than the first map among the at least two maps and determine the area in the second map corresponding to the search area, where the second map includes a plurality of second sampling points;
  • the matching module 602 is configured to determine, from the second sampling points of the corresponding area, a target point matching the attribute information of a first sampling point in the search area to obtain a sampling point matching pair,
  • the sampling point matching pair including the target point and the first sampling point matching the target point;
  • the fusion module 603 is configured to fuse the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map.
  • the determination module 601 is configured to determine the corresponding area in the second map from the maps other than the first map among the at least two maps according to the search guidance information.
  • the search guidance information includes a first touch area corresponding to a touch operation,
  • and the determination module 601 is configured to: display the at least two maps in a map display area; receive a touch operation in the map display area; determine the first map from the at least two maps according to the first touch area corresponding to the touch operation; and determine the search area according to the position of the first touch area on the first map.
  • the search guidance information includes a second touch area corresponding to a touch operation,
  • and the determination module 601 is configured to: determine the second map from the maps other than the first map among the at least two maps according to the second touch area corresponding to the touch operation; and determine the corresponding area according to the position of the second touch area on the second map.
  • the search guidance information is a voice instruction,
  • and the determination module 601 is configured to: display the at least two maps and annotation information in a map display area, the annotation information being used to mark different display sub-areas; receive the voice instruction; determine the first map from the at least two maps according to the first display sub-area marked by the annotation information in the voice instruction; and determine the search area according to the position of the first display sub-area on the first map.
  • the search guidance information is gesture information,
  • and the determination module 601 is configured to: display the at least two maps in the map display area; recognize the gesture information contained in a gesture operation; and determine the search area of the first map from the at least two maps according to the gesture information and the map display area.
  • the search guidance information is eye feature information,
  • and the determination module 601 is configured to: display the at least two maps in the map display area; acquire the user's eye feature information; determine, according to the eye feature information, the gaze area of the user's eyes on the map display area;
  • determine the first map from the at least two maps according to the gaze area; and determine the search area according to the position of the gaze area on the first map.
  • the fusion module 603 is configured to: preliminarily fuse the first map and the second map to obtain an initial fusion map; and fuse the attribute information of each sampling point matching pair in the initial fusion map into the attribute information of one sampling point to obtain the target fusion map.
  • the fusion module 603 is configured to: convert the local coordinates of the first sampling points in the first map into a global coordinate system to obtain the initial global coordinates of the first sampling points;
  • convert the local coordinates of the second sampling points in the second map into the global coordinate system to obtain the initial global coordinates of the second sampling points;
  • and merge the initial global coordinates of each first sampling point and each second sampling point to obtain the initial fusion map.
  • the fusion module 603 is configured to: determine the coordinate conversion relationship of the reference coordinate system of the first map relative to the reference coordinate system of the second map; take the reference coordinate system of the second map as the global coordinate system and, according to the coordinate conversion relationship, convert the coordinates of each first sampling point in the first map into initial global coordinates; and fuse the initial global coordinates of each first sampling point into the second map to obtain the initial fusion map.
  • the fusion module 603 is configured to: optimize the initial global coordinates of the first sampling points in the sampling point matching pairs to obtain target global coordinates;
  • and fuse the global coordinates of each target point and the target global coordinates of the matching first sampling point into the global coordinates of one sampling point, thereby obtaining the target fusion map.
  • the fusion module 603 is configured to: determine the reprojection error of each first sampling point according to its initial global coordinates and the initial global coordinates of the matching target point; and iteratively adjust the initial global coordinates of each first sampling point in each sampling point matching pair until the reprojection error of each first sampling point is less than or equal to a specific threshold, determining the global coordinates of the first sampling point input at the last iteration as the target global coordinates.
  • the technical solutions of the embodiments of the present application, in essence or in the part contributing to the related art, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions to enable an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a robot, a drone, etc.) to execute all or part of the methods described in the embodiments of the present application.
  • the aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk. Thus, the embodiments of the present application are not limited to any specific combination of hardware and software.
  • FIG. 7 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of this application.
  • the hardware entity of the electronic device 700 includes a memory 701 and a processor 702.
  • the memory 701 stores a computer program that can run on the processor 702, and the processor 702 implements the steps of the map fusion method provided in the foregoing embodiments when executing the program.
  • the memory 701 is used to store instructions and applications executable by the processor 702, and can also cache data to be processed or already processed by the processor 702 and by each module in the electronic device 700 (for example, image data, audio data, voice communication data, and video communication data); it can be implemented by flash memory (FLASH) or random access memory (RAM).
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the map fusion method provided in the foregoing embodiments.
  • an embodiment of the present application provides a chip including a processor configured to call and run a computer program from a memory, so that a device installed with the chip executes the steps of any map fusion method described in the embodiments of the present application.
  • the disclosed device and method may be implemented in other ways.
  • the embodiments of the touch screen system described above are only illustrative.
  • the division into modules is only a logical functional division; there may be other divisions in actual implementation, for example, multiple modules or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or modules, and may be electrical, mechanical, or in other forms.
  • the modules described above as separate components may or may not be physically separate, and components displayed as modules may or may not be physical modules; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the embodiments.
  • the functional modules in the embodiments of the present application can all be integrated into one processing unit, each module can serve as a unit individually, or two or more modules can be integrated into one unit; the integrated module can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the foregoing program can be stored in a computer-readable storage medium.
  • when executed, the program performs the steps of the foregoing method embodiments; the aforementioned storage media include media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
  • if the aforementioned integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • the computer software product is stored in a storage medium and includes several instructions to enable an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a robot, a drone, etc.) to execute all or part of the methods described in the embodiments of the present application.
  • the aforementioned storage media include media that can store program code, such as removable storage devices, ROMs, magnetic disks, or optical disks.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Navigation (AREA)
  • Instructional Devices (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A map fusion method and apparatus, a device, and a storage medium. The method includes: determining a search area of a first map from at least two currently displayed maps according to acquired search guidance information, where the first map includes a plurality of first sampling points (101); determining a second map from the maps other than the first map among the at least two maps (102); determining the area in the second map corresponding to the search area, where the second map includes a plurality of second sampling points (103); determining, from the second sampling points of the corresponding area, target points matching the attribute information of the first sampling points in the search area to obtain sampling point matching pairs, each pair including a target point and the first sampling point matching the target point (104); and fusing the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map (105).

Description

Map fusion method and apparatus, device, and storage medium
Cross-reference to related applications
This application is based on, and claims priority to, Chinese patent application No. 201911185884.1 filed on November 27, 2019, the entire contents of which are incorporated herein by reference.
Technical field
The embodiments of this application relate to electronic technology, and relate to, but are not limited to, a map fusion method and apparatus, a device, and a storage medium.
Background
At present, an environment map can be built from visual information, and the need for map fusion inevitably arises in the process of building one. For example, in application scenarios such as map construction completed collaboratively by multiple people or multiple machines, local maps generated at several places need to be fused into one map. However, the amount of computation required for map fusion is very large, so the real-time performance of map fusion is poor.
Summary
In view of this, the embodiments of this application provide a map fusion method and apparatus, a device, and a storage medium, which can improve the real-time performance of map fusion. The technical solutions of the embodiments of this application are implemented as follows:
In a first aspect, an embodiment of this application provides a map fusion method, the method including: determining a search area of a first map from at least two currently displayed maps according to acquired search guidance information, where the first map includes a plurality of first sampling points; determining a second map from the maps other than the first map among the at least two maps, and determining the area in the second map corresponding to the search area, where the second map includes a plurality of second sampling points; determining, from the second sampling points of the corresponding area, a target point matching the attribute information of a first sampling point in the search area to obtain a sampling point matching pair, the sampling point matching pair including the target point and the first sampling point matching the target point; and fusing the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map.
In some embodiments, determining a second map from the maps other than the first map among the at least two maps and determining the area in the second map corresponding to the search area includes: determining the corresponding area in the second map from the maps other than the first map among the at least two maps according to the search guidance information.
In some embodiments, the search guidance information includes a first touch area corresponding to a touch operation, and determining the search area of the first map from the at least two currently displayed maps according to the acquired search guidance information includes: displaying the at least two maps in a map display area; receiving a touch operation in the map display area; determining the first map from the at least two maps according to the first touch area corresponding to the touch operation; and determining the search area according to the position of the first touch area on the first map.
In some embodiments, determining the search area according to the position of the first touch area on the first map includes: determining the area of the first map corresponding to the first touch area as the search area.
In some embodiments, the search guidance information includes a second touch area corresponding to a touch operation, and determining a second map from the maps other than the first map among the at least two maps and determining the area in the second map corresponding to the search area includes: determining the second map from the maps other than the first map among the at least two maps according to the second touch area corresponding to the touch operation; and determining the corresponding area according to the position of the second touch area on the second map.
In some embodiments, the search guidance information is a voice instruction, and determining the search area of the first map from the at least two currently displayed maps according to the acquired search guidance information includes: displaying the at least two maps and annotation information in a map display area, the annotation information being used to mark different display sub-areas; receiving the voice instruction; determining the first map from the at least two maps according to the first display sub-area marked by the annotation information in the voice instruction; and determining the search area according to the position of the first display sub-area on the first map.
In some embodiments, the search guidance information is gesture information, and determining the search area of the first map from the at least two currently displayed maps according to the acquired search guidance information includes: displaying the at least two maps in a map display area; recognizing the gesture information contained in a gesture operation; and determining the search area of the first map from the at least two maps according to the gesture information and the map display area.
In some embodiments, the search guidance information is eye feature information, and determining the search area of the first map from the at least two currently displayed maps according to the acquired search guidance information includes: displaying the at least two maps in a map display area; acquiring the user's eye feature information; determining, according to the eye feature information, the gaze area of the user's eyes on the map display area; determining the first map from the at least two maps according to the gaze area; and determining the search area according to the position of the gaze area on the first map.
In some embodiments, fusing the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map includes: preliminarily fusing the first map and the second map to obtain an initial fusion map; and fusing the attribute information of each sampling point matching pair in the initial fusion map into the attribute information of one sampling point, thereby obtaining the target fusion map.
In some embodiments, preliminarily fusing the first map and the second map to obtain an initial fusion map includes: converting the local coordinates of the first sampling points in the first map into a global coordinate system to obtain the initial global coordinates of the first sampling points; converting the local coordinates of the second sampling points in the second map into the global coordinate system to obtain the initial global coordinates of the second sampling points; and merging the initial global coordinates of each first sampling point and each second sampling point to obtain the initial fusion map.
In some embodiments, preliminarily fusing the first map and the second map to obtain an initial fusion map includes: determining the coordinate conversion relationship of the reference coordinate system of the first map relative to the reference coordinate system of the second map; taking the reference coordinate system of the second map as the global coordinate system and converting, according to the coordinate conversion relationship, the coordinates of each first sampling point in the first map into initial global coordinates; and fusing the initial global coordinates of each first sampling point into the second map to obtain the initial fusion map.
In some embodiments, fusing the attribute information of the sampling point matching pairs in the initial fusion map into the attribute information of one sampling point, thereby obtaining the target fusion map, includes: optimizing the initial global coordinates of the first sampling points in the sampling point matching pairs to obtain target global coordinates; and fusing the global coordinates of each target point in the initial fusion map and the target global coordinates of the matching first sampling point into the global coordinates of one sampling point, thereby obtaining the target fusion map.
In some embodiments, optimizing the initial global coordinates of the first sampling points in the sampling point matching pairs to obtain target global coordinates includes: determining the reprojection error of each first sampling point according to its initial global coordinates and the initial global coordinates of the matching target point; and iteratively adjusting the initial global coordinates of each first sampling point in each sampling point matching pair until the reprojection error of each first sampling point is less than or equal to a specific threshold, determining the global coordinates of the first sampling point input at the last iteration as the target global coordinates.
In a second aspect, an embodiment of this application provides a map fusion apparatus, including: a determination module configured to determine a search area of a first map from at least two currently displayed maps according to acquired search guidance information, where the first map includes a plurality of first sampling points, and to determine a second map from the maps other than the first map among the at least two maps and determine the area in the second map corresponding to the search area, where the second map includes a plurality of second sampling points; a matching module configured to determine, from the second sampling points of the corresponding area, a target point matching the attribute information of a first sampling point in the search area to obtain a sampling point matching pair, the sampling point matching pair including the target point and the first sampling point matching the target point; and a fusion module configured to fuse the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map.
In a third aspect, an embodiment of this application provides an electronic device including a memory and a processor, the memory storing a computer program runnable on the processor, the processor implementing the steps of any map fusion method of the embodiments of this application when executing the program.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of any map fusion method of the embodiments of this application.
In a fifth aspect, an embodiment of this application provides a chip including a processor configured to call and run a computer program from a memory, so that a device installed with the chip executes the steps of any map fusion method of the embodiments of this application.
In the embodiments of this application, the search area of a first map is determined from at least two currently displayed maps according to acquired search guidance information, where the first map includes a plurality of first sampling points; a second map is determined from the maps other than the first map among the at least two maps, and the area in the second map corresponding to the search area is determined, where the second map includes a plurality of second sampling points; on this basis, target points matching the attribute information of the first sampling points in the search area are searched for among the second sampling points of only part of the second map (namely the corresponding area). This greatly narrows the search range, reduces the amount of computation needed to find matching target points, and thereby improves the real-time performance of map fusion.
Brief description of the drawings
The drawings here are incorporated into and constitute part of the specification; they illustrate embodiments consistent with this application and, together with the specification, explain the technical solutions of this application.
Fig. 1 is a schematic flowchart of a map fusion method according to an embodiment of this application;
Fig. 2 is a schematic diagram of a map display area according to an embodiment of this application;
Fig. 3 is a schematic diagram of another map display area according to an embodiment of this application;
Fig. 4 is a schematic diagram of yet another map display area according to an embodiment of this application;
Fig. 5 is a schematic flowchart of another map fusion method according to an embodiment of this application;
Fig. 6 is a schematic structural diagram of a map fusion apparatus according to an embodiment of this application;
Fig. 7 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of this application.
Detailed description
To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the specific technical solutions of this application are described in further detail below with reference to the drawings. The following embodiments illustrate this application but are not intended to limit its scope.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of this application. The terms used herein are only for describing the embodiments of this application and are not intended to limit it.
In the following description, "some embodiments" describes a subset of all possible embodiments; it may be the same subset or different subsets of all possible embodiments, and they can be combined with each other without conflict.
It should be noted that the terms "first/second/third" in the embodiments of this application only distinguish similar or different objects and do not imply a particular ordering of the objects; where permitted, "first/second/third" may be interchanged in a specific order or sequence so that the embodiments described here can be implemented in orders other than those illustrated or described.
An embodiment of this application provides a map fusion method applicable to an electronic device, which may be a device with information processing capability such as a mobile phone, tablet computer, notebook computer, desktop computer, robot, or drone. The functions of the map fusion method can be realized by a processor in the electronic device calling program code; of course the program code can be stored in a computer storage medium, so the electronic device includes at least a processor and a storage medium.
Fig. 1 is a schematic flowchart of a map fusion method according to an embodiment of this application. As shown in Fig. 1, the method includes at least the following steps 101 to 105:
Step 101: determine a search area of a first map from at least two currently displayed maps according to acquired search guidance information, where the first map includes a plurality of first sampling points.
In some embodiments, the search guidance information is used at least to guide the electronic device to determine, from the currently displayed maps, the first map to be fused and the search area of the first map. In some embodiments, the search guidance information is also used to guide the electronic device to determine, from the remaining maps, the second map to be fused and the area in the second map corresponding to the search area.
In some embodiments, the search guidance information is determined by the electronic device from an instruction input by the user. For example, the user inputs a touch operation, a voice instruction, a gesture operation, or an eye operation, and the electronic device obtains the corresponding search guidance information from the received input. That is, the search guidance information can take many forms, for example one of: a first touch area and/or a second touch area corresponding to a touch operation, voice information, gesture information, or eye feature information.
It should be noted that how the electronic device obtains the at least two maps is not limited. The electronic device may collect image data in different physical spaces to generate maps of those spaces, or it may download already generated maps from other electronic devices. For example, suppose the at least two maps are map 1, map 2, and map 3. As shown in Table 1, map 1 is 3D point cloud data generated by the electronic device from image data collected in physical space 1; map 2 is 3D point cloud data generated by electronic device 2 from image data collected in physical space 2; and map 3 is 3D point cloud data generated by electronic device 3 from image data collected in physical space 3, where at least two of the physical spaces overlap.
Table 1
Map | Device used to generate the map | Physical space corresponding to the map
Map 1 | the electronic device | physical space 1
Map 2 | electronic device 2 | physical space 2
Map 3 | electronic device 3 | physical space 3
On this basis, the electronic device can acquire map 2 and map 3 from electronic devices 2 and 3 respectively and, according to the acquired search guidance information, determine from maps 1 to 3 the first map to be fused and the search area of the first map.
Step 102: determine a second map from the maps other than the first map among the at least two maps.
Understandably, when there are exactly two maps, after determining the first map the electronic device can by default take the remaining map as the second map, and the electronic device fuses the two maps.
When there are three or more maps, the electronic device can either determine the second map automatically or determine it according to the search guidance information. One automatic way is, for example, to determine the sampling point matching pairs between the search area of the first map and each of the other maps, and take the other map with the most matching pairs as the second map.
However, this automatic approach must evaluate the similarity (for example the Euclidean distance) between the first sampling points of the search area and every sampling point of each other map, so the computation is heavy. For example, suppose the maps are map 1, map 2, and map 3, with map 1 as the first map whose search area contains 1000 first sampling points, map 2 containing 10^5 sampling points, and map 3 containing 10^7 sampling points. Then, to search map 2 for the point matching each first sampling point, the Euclidean distance between the i-th first sampling point and every sampling point of map 2 must be computed so that the matching point can be picked from those distances, which requires 1000 x 10^5 distance computations; likewise, searching map 3 requires 1000 x 10^7 distance computations.
By contrast, determining the second map from the search guidance information is simple. For example, when the search guidance information includes a first touch area corresponding to a touch operation, the electronic device determines the search area according to the position of the first touch area on the first map.
Step 103: determine the area in the second map corresponding to the search area, where the second map includes a plurality of second sampling points.
Understandably, the corresponding area is the area of the second map that is similar or identical to the search area of the first map. When the second map is determined automatically, the electronic device can determine the corresponding area from the set of sampling point matching pairs between the search area and the second map, for example taking the sampling points of the second map that match first sampling points of the search area as the sampling points of the corresponding area.
When the second map is determined from the search guidance information, the electronic device can determine the corresponding area from that information. For example, when the search guidance information includes a second touch area corresponding to a touch operation, the electronic device determines the corresponding area according to the position of the second touch area on the second map.
Step 104: determine, from the second sampling points of the corresponding area, a target point matching the attribute information of a first sampling point in the search area, so as to obtain a sampling point matching pair including the target point and the first sampling point matching the target point.
Understandably, a matching target point is a second sampling point whose attribute information is identical or close to that of a first sampling point. For example, the target point may be a second sampling point of the corresponding area whose Euclidean distance to the first sampling point's attribute information is below a specific threshold, or the point with the smallest such distance. The attribute information includes at least coordinate information; in some embodiments it may also include image feature information.
Step 105: fuse the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map.
Understandably, fusing two maps mainly means fusing each pair of matching sampling points between the two maps (that is, two sampling points with identical or similar attributes) into one point, while the remaining points are simply merged after being unified into the same coordinate system. For example, call a first sampling point and the target point matching it a sampling point matching pair. Suppose the first sampling points belonging to matching pairs are A1 to A100 and the remaining sampling points of the first map are A101 to A10001, while the second sampling points belonging to matching pairs are B1 to B100 and the remaining sampling points of the second map are B101 to B100001. Then the target fusion map obtained by map fusion includes: the attribute information of one point fused from each matching pair, plus the attribute information of points A101 to A10001 and points B101 to B100001 after being unified into the same coordinate system.
In the embodiments of this application there are many ways to fuse the attribute information of a target point with that of the matching first sampling point. For example, when the attribute information includes 3D coordinates, after converting the 3D coordinates of the two matching points into the same coordinate system, the mean or median of the coordinates can be taken as the coordinates of the fused point. For instance, if the matching points A and B have coordinates A(x_a, y_a, z_a) and B(x_b, y_b, z_b) after conversion into the same coordinate system, the fused point has coordinates ((x_a + x_b)/2, (y_a + y_b)/2, (z_a + z_b)/2).
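As a concrete illustration of this averaging step, the following minimal Python sketch (the function name and numeric values are hypothetical and not taken from the patent) fuses one matched pair that has already been expressed in a common coordinate system:

    import numpy as np

    def fuse_matched_pair(p_a: np.ndarray, p_b: np.ndarray) -> np.ndarray:
        # Fuse two matched sampling points (already in the same coordinate
        # system) into one point by taking the coordinate mean, as in the
        # A/B example above.
        return (p_a + p_b) / 2.0

    # Example: two matched points A and B after coordinate unification.
    a = np.array([1.0, 2.0, 3.0])
    b = np.array([1.2, 1.8, 3.1])
    print(fuse_matched_pair(a, b))  # -> [1.1  1.9  3.05]

Taking the median instead of the mean, as the text also allows, would simply replace the averaging expression with an element-wise median over the pair.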
In the embodiments of this application, when fusing the first map and the second map, the search area of the first map is first determined according to the acquired search guidance information; then matching sampling points are searched for between part of the first map (namely the search area) and part of the second map (namely the corresponding area). In this way:
First, because the range over which sampling point matching pairs are searched is greatly narrowed, the computation of the search is reduced and the processing speed of map fusion is increased, so that the map fusion method of the embodiments can run not only on devices with large processors such as servers or desktop computers, but also on mobile devices with small processors such as mobile phones or tablet computers.
Second, the application scenarios of map fusion are extended: the method suits AR scenarios where multiple people build maps, and it enables map construction and map fusion in areas where internet access is prohibited or in privacy-sensitive areas.
Third, the real-time performance of map fusion is ensured, so the fusion process can be performed not only during offline mapping but also during online localization.
Fourth, in the embodiments of this application, the fact that the electronic device can acquire search guidance information input by the user indicates that two fusable maps exist among the currently displayed maps, so the fusion process will not fail. If no search guidance information were introduced, that is, if the electronic device automatically determined two maps with an overlapping area from the at least two maps, it would have to go through the automatic search process whether or not two fusable maps exist among those displayed; when no fusable pair exists, this causes unnecessary computational overhead.
In some embodiments, for steps 102 and 103 above, that is, determining a second map from the maps other than the first map among the at least two maps and determining the area in the second map corresponding to the search area, the electronic device can determine the corresponding area in the second map from the maps other than the first map among the at least two maps according to the search guidance information.
Understandably, before map fusion one usually has to determine, from multiple maps, two maps with an overlapping area (namely the first map and the second map) and then fuse their overlapping areas. This processing is very complex, especially the step of determining which two maps can be fused, because the electronic device must first match the maps pairwise, and in the matching process compute the Euclidean distance between every sampling point of one map and every sampling point of another, which is an enormous computational cost.
For example, suppose the at least two maps are map A with 10^6 sampling points, map B with 10^7 sampling points, map C with 10^8 sampling points, and map D with 10^5 sampling points. For the electronic device to automatically determine two maps with an overlapping area, it must determine the sampling point matching pairs between map A and map B, map A and map C, map A and map D, map B and map C, map B and map D, and map C and map D, which requires computing (10^6 x 10^7 + 10^6 x 10^8 + 10^6 x 10^5 + 10^7 x 10^8 + 10^7 x 10^5 + 10^8 x 10^5) Euclidean distances; the computation is clearly enormous. Moreover, even if maps A to D contain no fusable pair, the electronic device still has to process this computation, causing unnecessary overhead.
In the embodiments of this application, by contrast, the electronic device determines the second map to be fused, and further the corresponding area on the second map, directly from the search guidance information, from the maps other than the first map among the at least two maps. This greatly reduces the computation of finding the second map to be fused and of determining sampling point matching pairs; moreover, the availability of user-input search guidance information means that two fusable maps exist among the currently displayed maps, so the fusion process will not fail, let alone incur unnecessary computational overhead.
For example, take maps A to D from the example above. According to the search guidance information, the first map and the second map to be fused can be directly determined to be map B and map D, together with the search area on map B and the corresponding area on map D. If the search area on map B contains 10^3 sampling points and the corresponding area on map D contains 1.5 x 10^3 sampling points, the electronic device only needs to compute 10^3 x 1.5 x 10^3 Euclidean distances; compared with automatically determining two overlapping maps, the computation is very small, which greatly reduces the computational cost of map fusion and effectively improves its real-time performance.
As described above, the search guidance information may be a first touch area acquired from a touch operation input by the user on the electronic device. On this basis, an embodiment of this application further provides a map fusion method that may include the following steps 201 to 208:
Step 201: display at least two maps in a map display area.
The at least two maps may be generated by the same electronic device or by different electronic devices, for example multiple electronic devices building maps of different physical spaces in parallel. Displaying at least two maps in the map display area lets the user point out which areas of which two maps overlap or are similar.
Step 202: receive a touch operation in the map display area.
In this embodiment the type of touch operation is not limited and can be varied: single-finger touch, multi-finger touch, single-finger slide, two-finger pinch, two fingers sliding apart, and so on. The user can perform such a touch operation on the display area of a map to tell the electronic device which area of that map is the search area, and can also pinch or multi-touch on the map display area to tell the electronic device which two maps to fuse.
Step 203: determine the first map from the at least two maps according to the first touch area corresponding to the touch operation.
Step 204: determine the search area according to the position of the first touch area on the first map.
For example, as shown in Fig. 2, maps 21, 22, and 23 are displayed in the map display area 20 of the electronic device. When the user touches areas 211 and 212 of map 21 with two fingers, the electronic device can determine from the touch areas that the map receiving the touch operation is map 21, hence map 21 is the first map to be fused, and the two touched areas are the search area of the first map.
Step 205: determine a second map from the maps other than the first map among the at least two maps.
Step 206: determine the area in the second map corresponding to the search area, where the second map includes a plurality of second sampling points.
As in the foregoing embodiments, the electronic device can automatically determine the second map and the corresponding area in the second map, or determine the second map from the search guidance information and then the corresponding area. The search guidance information may further include a second touch area corresponding to the touch operation; when implementing steps 205 and 206, the electronic device can determine the second map from the maps other than the first map among the at least two maps according to the second touch area corresponding to the touch operation, and determine the corresponding area according to the position of the second touch area on the second map.
For example, as shown in Fig. 3, maps 31, 32, and 33 are displayed in the map display area 30. When the user pinches area 311 of map 31 and area 331 of map 33 with two fingers, the electronic device can determine from the touch areas that the maps receiving the touch operation are maps 31 and 33, hence these two maps are the first and second maps to be fused; area 311 is determined as the search area from the position of the first touch area on map 31, and area 331 is determined as the corresponding area from the position of the second touch area on map 33.
Step 207: determine, from the second sampling points of the corresponding area, a target point matching the attribute information of a first sampling point in the search area, so as to obtain a sampling point matching pair including the target point and the first sampling point matching the target point.
In some embodiments, the electronic device determines the similarity between the attribute information of the i-th first sampling point of the search area and that of every second sampling point of the corresponding area, obtaining a similarity set, where i is an integer greater than 0; a second sampling point whose similarity in the set satisfies a specific condition is determined as the target point.
Finding sampling point matching pairs via step 207 has two benefits: on the one hand, only the similarity between the sampling points of the first map's search area and each sampling point of the corresponding area needs to be computed, without traversing every sampling point of the first map, which reduces the computation of finding matching pairs; on the other hand, only the similarity to each sampling point of the second map's corresponding area is needed, without traversing every sampling point of the second map, which reduces the computation further.
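The embodiments only give Euclidean distance as an example of the similarity measure, so the following Python sketch (function names, the threshold, and the point counts are all hypothetical) illustrates step 207 as a brute-force nearest-neighbour search restricted to the two regions, with attribute information reduced to 3D coordinates; a real system could extend the attribute vectors with image feature descriptors:

    import numpy as np

    def match_in_region(search_pts, region_pts, max_dist):
        # For each first sampling point in the search area, find the most
        # similar second sampling point inside the corresponding area
        # (smallest Euclidean distance); keep it only if that distance is
        # below max_dist. Returns index pairs (i, j).
        pairs = []
        for i, p in enumerate(search_pts):
            d = np.linalg.norm(region_pts - p, axis=1)  # distance to every candidate
            j = int(np.argmin(d))                       # most similar candidate
            if d[j] <= max_dist:
                pairs.append((i, j))
        return pairs

    search_pts = np.random.rand(1000, 3)   # ~10^3 points in the search area
    region_pts = np.random.rand(1500, 3)   # ~1.5 x 10^3 points in the corresponding area
    print(len(match_in_region(search_pts, region_pts, 0.05)))

This costs exactly the 10^3 x 1.5 x 10^3 distance computations discussed above, rather than a traversal of both full maps.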
Step 208: fuse the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map.
This embodiment provides one way of acquiring search guidance information: based on a touch operation input by the user on the electronic device, the corresponding first touch area is acquired, and from it the first map to be fused and the search area on the first map are determined. Compared with the other acquisition methods this is simple to implement: once the touch area corresponding to the touch operation is detected, the first map and the search area on the first map are determined, which further reduces the processing complexity of map fusion and improves its processing efficiency.
As described above, the search guidance information may be a voice instruction input by the user. On this basis, an embodiment of this application further provides a map fusion method that may include the following steps 301 to 308:
Step 301: display the at least two maps and annotation information in a map display area, the annotation information being used to mark different display sub-areas.
In some embodiments, the electronic device can divide the map display area into multiple grid cells in advance, each cell being one display sub-area. The annotation information may be the coordinate information of a key sampling point in the map area covered by the cell, each cell corresponding to at least one piece of annotation information.
In another embodiment, the annotation information may also be a custom cell identifier corresponding to the map area the cell covers, for example the cell's position information in the map display area.
Step 302: receive the voice instruction.
Step 303: determine the first map from the at least two maps according to the first display sub-area marked by the annotation information in the voice instruction.
In some embodiments, the voice instruction can carry one or more different pieces of annotation information.
Step 304: determine the search area according to the position of the first display sub-area on the first map.
Step 305: determine a second map from the maps other than the first map among the at least two maps.
Step 306: determine the area in the second map corresponding to the search area, where the second map includes a plurality of second sampling points.
Similarly to the foregoing embodiments, the electronic device can automatically determine the second map and the corresponding area of the search area in the second map, or determine the second map and the corresponding area on it from multiple pieces of annotation information carried in the voice instruction.
In some embodiments, the electronic device can determine the second map from the other maps according to the second display sub-area marked by the annotation information in the voice instruction, and determine the corresponding area according to the position of the second display sub-area on the second map.
Understandably, the user can issue voice instructions directing the electronic device to perform map fusion according to the correspondence between the displayed maps and the annotation information. For example, as shown in Fig. 4, maps 41, 42, and 43 are displayed in the map display area together with the horizontal and vertical axes of a grid, and the user can use the coordinate values on those axes to indicate which areas of the maps are similar. Suppose the voice instruction is "grid (6,2), grid (14,2), and grid (15,2) are overlapping areas". After recognizing the annotation information carried in the instruction, the electronic device determines, from the cell area marked by grid (6,2), that the corresponding map 41 is the first map to be fused and determines the search area from the position of grid (6,2) on map 41; it also determines, from the cell areas marked by grid (14,2) and grid (15,2), that the corresponding map 43 is the second map to be fused and determines the corresponding area from the positions of grid (14,2) and grid (15,2) on map 43.
Step 307: determine, from the second sampling points of the corresponding area, a target point matching the attribute information of a first sampling point in the search area, so as to obtain a sampling point matching pair including the target point and the first sampling point matching the target point.
Step 308: fuse the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map.
This embodiment provides another way of acquiring search guidance information: the first map and the search area of the first map are determined from the annotation information carried in a voice instruction input by the user. Compared with other acquisition methods, the electronic device can be guided to perform map fusion without any touch operation by the user; the map fusion method can therefore be applied to electronic devices without a touch screen or whose touch screen has failed, which extends the application scenarios of map fusion and lowers its implementation cost.
As described above, the search guidance information may be gesture information determined from the user's gesture operation. On this basis, an embodiment of this application further provides a map fusion method that may include the following steps 401 to 407:
Step 401: display the at least two maps in a map display area.
Step 402: recognize the gesture information contained in a gesture operation.
For example, the user can point two fingers at different areas of two maps, thereby indicating the search area of the first map and the corresponding area of the second map to be fused.
Step 403: determine the search area of the first map from the at least two maps according to the gesture information and the map display area.
In some embodiments, the electronic device can recognize the pointing direction of the gesture operation on the map display area, determine the first map from the at least two maps according to that pointing direction, and determine the search area according to the position on the first map that the pointing direction corresponds to.
Step 404: determine a second map from the maps other than the first map among the at least two maps.
Step 405: determine the area in the second map corresponding to the search area, where the second map includes a plurality of second sampling points.
Similarly to the foregoing embodiments, the electronic device can automatically determine the second map and the corresponding area in the second map, or determine the corresponding area on the second map from the gesture information. For example, if the electronic device recognizes that the gesture information includes a first finger pointing and a second finger pointing, it determines the search area and the corresponding area from the display areas those two pointings correspond to.
Step 406: determine, from the second sampling points of the corresponding area, a target point matching the attribute information of a first sampling point in the search area, so as to obtain a sampling point matching pair including the target point and the first sampling point matching the target point.
Step 407: fuse the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map.
This embodiment provides yet another way of acquiring search guidance information: gesture information is acquired from a gesture operation input by the user, and the first map and the search area of the first map are determined from it. Compared with other acquisition methods, the electronic device can be guided to perform map fusion without touch operations or voice instructions. Thus, on the one hand, the map fusion method can be applied to electronic devices that have neither a touch screen nor a speech recognition apparatus, extending the application scenarios of map fusion and lowering its implementation cost; on the other hand, it can also be applied to devices whose touch screen and/or speech recognition apparatus has failed, so that the electronic device can continue map construction and map fusion even in that case, improving the reliability of map fusion.
As described above, the search guidance information may be eye feature information obtained by the electronic device while the user gazes at the map display area. On this basis, an embodiment of this application further provides a map fusion method that may include the following steps 501 to 509:
Step 501: display the at least two maps in a map display area.
Step 502: acquire the user's eye feature information.
In some embodiments, the eye feature information may include the viewing direction and the length of time the user's eyeballs stop rotating in that viewing direction.
Step 503: determine, according to the eye feature information, the gaze area of the user's eyes on the map display area.
In some embodiments, the electronic device can use an eye tracker to detect the rotation and viewing direction of the user's eyeballs, determine whether the time the eyeballs stop rotating in the viewing direction exceeds a specific threshold, and if so, determine the sub-area of the map display area corresponding to that viewing direction as the gaze area.
Step 504: determine the first map from the at least two maps according to the gaze area.
Step 505: determine the search area according to the position of the gaze area on the first map.
Step 506: determine a second map from the maps other than the first map among the at least two maps.
Step 507: determine the area in the second map corresponding to the search area, where the second map includes a plurality of second sampling points.
Similarly to the foregoing embodiments, the electronic device can automatically determine the second map and the corresponding area on it, or determine the second map and the corresponding area on the second map from another viewing direction contained in the eye feature information and the length of time the user's eyeballs stop rotating in that direction.
Step 508: determine, from the second sampling points of the corresponding area, a target point matching the attribute information of a first sampling point in the search area, so as to obtain a sampling point matching pair including the target point and the first sampling point matching the target point.
Step 509: fuse the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map.
This embodiment provides a further way of acquiring search guidance information: the first map and the search area of the first map are determined from the user's eye feature information. Compared with other acquisition methods, the electronic device can be guided to perform map fusion without touch operations, voice instructions, or gesture operations, so the map fusion method can be applied to electronic devices equipped with an eye tracker, extending the application scenarios of map fusion.
In the embodiments of this application, the electronic device can perform map fusion with any of the above ways of acquiring search guidance information. The user can choose which input mode to use to guide the electronic device: for example, a touch operation, a voice instruction, a gesture operation, or an eye operation, to guide the electronic device in determining the search area of the first map and the corresponding area of the second map.
An embodiment of this application further provides a map fusion method that may include the following steps 601 to 606:
Step 601: determine a search area of a first map from at least two currently displayed maps according to acquired search guidance information, where the first map includes a plurality of first sampling points.
Step 602: determine a second map from the maps other than the first map among the at least two maps.
Step 603: determine the area in the second map corresponding to the search area, where the second map includes a plurality of second sampling points.
Step 604: determine, from the second sampling points of the corresponding area, a target point matching the attribute information of a first sampling point in the search area, so as to obtain a sampling point matching pair including the target point and the first sampling point matching the target point.
Step 605: preliminarily fuse the first map and the second map to obtain an initial fusion map.
Understandably, preliminary fusion (for example steps 705 to 707 of the embodiment below) generally means unifying the coordinates of the sampling points of the first map and those of the second map into one coordinate system and then simply merging them to obtain their union, namely the initial fusion map.
Step 606: fuse the attribute information of each sampling point matching pair in the initial fusion map into the attribute information of one sampling point, thereby obtaining the target fusion map.
It should be noted that in the initial fusion map the coordinates in the target points' attribute information are converted coordinates, and the coordinates in the first sampling points' attribute information are converted coordinates as well.
An embodiment of this application further provides a map fusion method that may include the following steps 701 to 709:
Step 701: determine a search area of a first map from at least two currently displayed maps according to acquired search guidance information, where the first map includes a plurality of first sampling points.
Step 702: determine a second map from the maps other than the first map among the at least two maps.
Step 703: determine the area in the second map corresponding to the search area, where the second map includes a plurality of second sampling points.
Step 704: determine, from the second sampling points of the corresponding area, a target point matching the attribute information of a first sampling point in the search area, so as to obtain a sampling point matching pair including the target point and the first sampling point matching the target point.
Step 705: convert the local coordinates of the first sampling points in the first map into a global coordinate system to obtain the initial global coordinates of the first sampling points.
Step 706: convert the local coordinates of the second sampling points in the second map into the global coordinate system to obtain the initial global coordinates of the second sampling points.
Step 707: merge the initial global coordinates of each first sampling point and each second sampling point to obtain the initial fusion map.
In some embodiments, the electronic device can also obtain the initial fusion map as follows: determine the coordinate conversion relationship of the first map's reference coordinate system relative to the second map's reference coordinate system; take the second map's reference coordinate system as the global coordinate system and convert the coordinates of each first sampling point of the first map into initial global coordinates according to the conversion relationship; and add the initial global coordinates of each first sampling point into the second map to obtain the initial fusion map.
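The patent does not pin down how the coordinate conversion relationship is parameterized; assuming a rigid rotation-plus-translation (a common choice in SLAM, which in practice may instead be a similarity transform with scale), a minimal Python sketch of steps 705 to 707 could look like this, with all numeric values invented for illustration:

    import numpy as np

    def to_global(points_local, R, t):
        # Convert local map coordinates to the global frame with a rigid
        # transform: p_global = R @ p_local + t (row-vector form below).
        return points_local @ R.T + t

    # Treat map B's reference frame as the global frame; R, t map A into it.
    theta = np.deg2rad(30.0)
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    t = np.array([2.0, -1.0, 0.5])

    map_a_local = np.random.rand(100, 3)
    map_b_global = np.random.rand(200, 3)

    # Simple union of the converted points: the initial fusion map.
    initial_fused = np.vstack([to_global(map_a_local, R, t), map_b_global])
    print(initial_fused.shape)  # (300, 3)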
Step 708: optimize the initial global coordinates of the first sampling points in the sampling point matching pairs to obtain target global coordinates.
In some embodiments, the reprojection error of each first sampling point is determined according to its initial global coordinates and the initial global coordinates of the matching target point; the initial global coordinates of each first sampling point in each sampling point matching pair are iteratively adjusted until the reprojection error of each first sampling point is less than or equal to a specific threshold, and the global coordinates of the first sampling point input at the last iteration are determined as the target global coordinates.
In implementation, the electronic device can iterate with the Levenberg-Marquardt (LM) algorithm to determine the optimal target global coordinates.
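The patent names the LM solver but does not spell out the residual function; purely as an illustration, the sketch below minimizes a pinhole-camera reprojection error for one fused point, substituting plain numerical gradient descent for LM to keep the code short (all poses, intrinsics, and thresholds are hypothetical):

    import numpy as np

    def reproject(p_world, R, t, K):
        # Project a 3D point into pixel coordinates with a pinhole model.
        p_cam = R @ p_world + t
        uv = K @ (p_cam / p_cam[2])
        return uv[:2]

    def refine_point(p0, observations, thresh=0.5, lr=1e-4, iters=500):
        # Nudge a point's initial global coordinates until its mean
        # reprojection error over the observing frames drops below thresh.
        def err(q):
            return np.mean([np.linalg.norm(reproject(q, R, t, K) - uv)
                            for R, t, K, uv in observations])
        p = p0.astype(float).copy()
        for _ in range(iters):
            if err(p) <= thresh:
                break
            g = np.zeros(3)                    # central-difference gradient
            for k in range(3):
                d = np.zeros(3); d[k] = 1e-6
                g[k] = (err(p + d) - err(p - d)) / 2e-6
            p -= lr * g
        return p

    # Synthetic check: one camera at the origin observing a known point.
    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    uv = reproject(np.array([0.2, -0.1, 4.0]), np.eye(3), np.zeros(3), K)
    p_refined = refine_point(np.array([0.25, -0.05, 4.05]),
                             [(np.eye(3), np.zeros(3), K, uv)])
    print(p_refined)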
Step 709: fuse the global coordinates of each target point in the initial fusion map and the target global coordinates of the matching first sampling point into the global coordinates of one sampling point, thereby obtaining the target fusion map.
For example, in the initial fusion map, sampling points 1 and 2 are a sampling point matching pair, and the electronic device can fuse their global coordinates into the global coordinates of one sampling point (called sampling point 12); sampling points 3 and 4 are a matching pair, fused into sampling point 34; sampling points 5 and 6 are a matching pair, fused into sampling point 56. The global coordinates of sampling point 12, sampling point 34, and sampling point 56 are then added to the initial fusion map, and sampling points 1 to 6 are removed from it, yielding the target fusion map.
In the embodiments of this application, after the initial fusion map is obtained, the initial global coordinates of the first sampling points matching the target points are optimized to obtain target global coordinates, and the global coordinates of each target point and the target global coordinates of the matching first sampling point are respectively fused into the global coordinates of one sampling point. This reduces the reprojection error of the first sampling points and thereby the staggered offset that appears at the seam after map fusion.
Sensors collect observation data of the surrounding environment (such as captured images and point cloud data) so that the collected observations can be tracked and matched; triangulation is performed according to the relationships between observations to generate a spatial 3D point cloud and build an incremental map. Generally, map fusion extracts the key frames of multiple maps, matches the key frames, associates two maps whose number of matched frames exceeds a threshold, and merges them into one map. However, this requires traversing all maps to find the two with the largest overlapping area, and also traversing every sampling point of the key frames, so the computation is considerable and the real-time performance of map fusion suffers.
As for fusion accuracy, the usual lack of optimization causes offsets in the fused map. Moreover, the huge computation means map fusion can usually run only on a server; if the terminal device has no internet access or is in a privacy-sensitive area, mapping and map model fusion are impossible.
On this basis, an exemplary application of the embodiments of this application in a practical scenario is described below.
The embodiments of this application provide a multi-map fusion solution that can build maps on multiple mobile terminals simultaneously and perform map fusion on any mobile terminal that has successfully built a map, with user guidance and map optimization reducing the error of map fusion. The specific implementation, shown in Fig. 5, may include the following steps S1 to S5:
Step S1: build maps using simultaneous localization and mapping (SLAM) technology. A SLAM module runs on the mobile terminal; the SLAM module extracts ORB (Oriented FAST and Rotated BRIEF) features from the video frame sequence captured by the camera and then performs feature matching, tracking, and triangulation to generate a 3D point cloud.
In some embodiments, the mobile terminal selects some frames of the video frame sequence as key frames, which are snapshots of the real scene at different poses. Key frames contain the observation relationship between pose information and the map point cloud; the key frames form the vertices of the pose graph, the connections between them form the edges of the pose graph, and the number of map points co-visible between two key frames is the weight of the corresponding edge. The ORB feature combines the FAST keypoint detector with the BRIEF feature descriptor and improves and optimizes both. The ORB algorithm proposes using the moment method to determine the orientation of a FAST keypoint: the intensity centroid within a radius r of the keypoint is computed from image moments, and the vector from the keypoint's coordinates to the centroid is taken as the keypoint's direction.
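The intensity-centroid computation is compact enough to show directly; the following Python sketch (patch size and data are made up for illustration) computes the orientation angle from the image moments m00, m10, and m01 as just described:

    import numpy as np

    def orientation_by_centroid(patch):
        # Intensity-centroid orientation used by ORB: compute the image
        # moments over a disc of radius r centred on the keypoint; the
        # angle of the vector from the patch centre to the centroid is
        # the feature direction.
        r = patch.shape[0] // 2
        ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
        mask = xs**2 + ys**2 <= r**2          # keep only the disc of radius r
        m00 = np.sum(patch[mask])             # zeroth moment (total intensity)
        m10 = np.sum(xs[mask] * patch[mask])  # first moment in x
        m01 = np.sum(ys[mask] * patch[mask])  # first moment in y
        return np.arctan2(m01 / m00, m10 / m00)

    patch = np.random.rand(31, 31)            # 31x31 patch around a FAST corner
    print(np.degrees(orientation_by_centroid(patch)))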
A. Feature matching:
Because image deformation is not considered, the matching process is sensitive to motion blur and to camera rotation, so requirements on the user's motion state are stricter during initialization. The pose of the previous image guides the data association process: it helps extract the visible sub-map from the current map, reducing the computational overhead of blindly projecting the entire map, and it also provides a prior for the current image pose so that feature matching searches only a small area instead of the whole image. The matching relationship between local map points and the current feature points is then established.
B. Triangulation:
The purpose of triangulation is to solve for the 3D coordinates of the spatial points corresponding to the image frames. Triangulation was first proposed by Gauss and applied in surveying. Put simply: observe the same 3D point P(x, y, z) from different positions; given the known 2D projections X1(x1, y1) and X2(x2, y2) of the point observed at those positions, use the triangle relationship to recover the depth information z of the 3D point.
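One standard way to realize this recovery, not mandated by the patent (which only states the principle), is linear DLT triangulation from two camera projection matrices; a self-contained Python sketch with synthetic cameras:

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        # Linear (DLT) triangulation: recover the 3D point whose
        # projections through camera matrices P1, P2 are x1, x2.
        # Each image point contributes two rows of the system A X = 0.
        A = np.stack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]                   # dehomogenise -> (x, y, z)

    # Two synthetic views, the second camera shifted one unit along x.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    P = np.array([0.3, 0.2, 5.0, 1.0])
    x1 = (P1 @ P)[:2] / (P1 @ P)[2]
    x2 = (P2 @ P)[:2] / (P2 @ P)[2]
    print(triangulate(P1, P2, x1, x2))        # ~ [0.3 0.2 5.0], depth z recovered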
C. Generate the 3D point cloud (the map before fusion):
Save the spatial point coordinates, pose information, and other parameters obtained after triangulation in a point cloud format, and update the point cloud file in real time.
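The patent does not name the point cloud format; assuming the common ASCII PLY format purely for illustration, a minimal writer that the mapping loop could call to keep the file current:

    import numpy as np

    def write_ply(path, points):
        # Persist the triangulated points as an ASCII PLY point cloud;
        # the SLAM loop can rewrite this file as new points arrive.
        header = ("ply\nformat ascii 1.0\n"
                  f"element vertex {len(points)}\n"
                  "property float x\nproperty float y\nproperty float z\n"
                  "end_header\n")
        with open(path, "w") as f:
            f.write(header)
            np.savetxt(f, points, fmt="%.6f")

    write_ply("map_before_fusion.ply", np.random.rand(100, 3))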
Step S2: Map upload and download.
Multiple mobile devices are connected over a wireless LAN (wifi); each device has a device ID, and multiple mobile devices can be connected through their device IDs. The user can trigger the download of a specific device's map, or upload the local map to any device.
Step S3: The user guides the setting of the map fusion location.
The mobile terminal of the current user A generates the map map-A through mapping and, after step S2, downloads the map map-B generated by user B's mobile terminal. map-A and map-B can be visualized on the mobile terminal; by tapping the junction of the maps, user A anchors map-A and map-B together to form a "rough" fusion model.
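How the tapped junction becomes an initial alignment is left open by the patent; one plausible minimal reading, shown below purely as an assumption, is to derive a rough translation that brings the tapped point of map-A onto the tapped point of map-B, leaving rotation and scale to the later matching and optimization stages:

    import numpy as np

    def rough_anchor(click_on_a, click_on_b):
        # Rough alignment from the user's taps at the seam: shift all of
        # map-A so its tapped point lands on the tapped point of map-B.
        return click_on_b - click_on_a

    t0 = rough_anchor(np.array([4.0, 1.0, 0.0]), np.array([9.5, 1.2, 0.0]))
    map_a_points = np.random.rand(50, 3)      # stand-in for map-A's point cloud
    map_a_anchored = map_a_points + t0        # the "rough" fusion model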
Step S4: Map fusion.
After the coordinate transformation parameters of the two maps are applied, the two maps can be unified under the same coordinate system. The process of fusing the two maps is then as follows:
Within a single map there are links between key frames and map points: which map points can be observed in a given key frame, and in which key frames a given map point appears. Merely unifying the coordinate systems of the two maps does not truly fuse them, because their key frames and map points are not linked to each other. The key to map fusion is therefore to associate the key frames of the new (old) map with the map points of the old (new) map. The matching points between the two maps have already been obtained by search and matching; the old map's points replace the newly built map's points, and correspondingly the key frames that were originally associated with the new-map points of these matching points are re-associated with the old-map points of the matching points, achieving the purpose of map fusion. After fusion, subsequent tracking and mapping use the key frames and map point information of both maps for optimization.
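At the data-structure level this re-association can be pictured as rewriting point identifiers in each keyframe's observation list; a toy Python sketch (all identifiers invented for illustration):

    def fuse_associations(kf_to_points, matches):
        # Replace new-map point ids with their matched old-map point ids in
        # every keyframe's observation list, so the new map's keyframes
        # become associated with old-map points (the key step of fusion).
        replace = dict(matches)               # new-point id -> old-point id
        return {kf: [replace.get(p, p) for p in pts]
                for kf, pts in kf_to_points.items()}

    kf_obs = {"kf_7": ["nA", "nB", "nC"], "kf_8": ["nB", "nD"]}
    print(fuse_associations(kf_obs, [("nB", "old_42"), ("nD", "old_17")]))
    # {'kf_7': ['nA', 'old_42', 'nC'], 'kf_8': ['old_42', 'old_17']}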
Step S5: Bundle adjustment (BA) optimization.
BA is essentially an optimization model whose objective is to minimize reprojection error; the purpose of optimizing with BA is to reduce the staggered offset at the seam after map fusion.
BA optimization mainly uses the LM algorithm and, on that basis, exploits the sparse structure of the BA model in its computations. The LM algorithm combines the steepest descent method (gradient descent) with Gauss-Newton. Gradient descent iterates along the negative gradient direction to find the variable values that minimize the function.
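The blend of the two methods is visible in the damped normal equations at the heart of LM; the sketch below shows a single update step (a generic illustration, not the patent's sparse BA implementation):

    import numpy as np

    def lm_step(J, r, lam):
        # One Levenberg-Marquardt update: solve the damped normal equations
        # (J^T J + lam * I) dx = -J^T r. A large lam behaves like gradient
        # descent, a small lam like Gauss-Newton.
        JtJ = J.T @ J
        return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), -J.T @ r)

    # Toy usage: one step towards minimising ||A x - b||^2 from x = 0,
    # where the Jacobian of the residual r = A x - b is simply A.
    A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    b = np.array([1.0, 2.0, 3.0])
    x = np.zeros(2)
    x += lm_step(A, A @ x - b, lam=1e-2)
    print(x)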
The embodiments of this application provide a multi-map fusion solution that can build maps on multiple mobile terminals simultaneously and perform map fusion on any mobile terminal that has successfully built a map. This improves the efficiency and real-time performance of mapping; in addition, the local BA method at the fusion seam is introduced, improving the accuracy of map fusion so that no staggered offset appears at the seam. User guidance information is also introduced to support the user's manual setting of the initial fusion position, which greatly improves the success rate of model fusion.
The embodiments of this application provide a multi-map fusion solution that simultaneously addresses mapping efficiency, real-time performance, and map fusion accuracy. Multiple mobile devices build maps at the same time, improving mapping efficiency and real-time performance; the local BA method at the fusion seam improves the accuracy of map fusion; and user guidance information lets users manually set the fusion location, improving the success rate of map fusion.
In some embodiments, mapping and map fusion can also be performed on a cloud or edge computing server; this involves (1) building the cloud or edge computing server, (2) real-time mapping and fusion, and (3) the interaction between mobile data and cloud data.
Based on the foregoing embodiments, an embodiment of this application provides a map fusion apparatus. The modules of the apparatus can be implemented by a processor in an electronic device, or of course by specific logic circuits; in implementation, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or the like.
Fig. 6 is a schematic structural diagram of a map fusion apparatus according to an embodiment of this application. As shown in Fig. 6, the map fusion apparatus 600 includes a determination module 601, a matching module 602, and a fusion module 603, wherein:
the determination module 601 is configured to: determine a search area of a first map from at least two currently displayed maps according to acquired search guidance information, where the first map includes a plurality of first sampling points; and determine a second map from the maps other than the first map among the at least two maps and determine the area in the second map corresponding to the search area, where the second map includes a plurality of second sampling points;
the matching module 602 is configured to determine, from the second sampling points of the corresponding area, a target point matching the attribute information of a first sampling point in the search area to obtain a sampling point matching pair, the sampling point matching pair including the target point and the first sampling point matching the target point;
the fusion module 603 is configured to fuse the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map.
In some embodiments, the determination module 601 is configured to determine the corresponding area in the second map from the maps other than the first map among the at least two maps according to the search guidance information.
In some embodiments, the search guidance information includes a first touch area corresponding to a touch operation, and the determination module 601 is configured to: display the at least two maps in a map display area; receive a touch operation in the map display area; determine the first map from the at least two maps according to the first touch area corresponding to the touch operation; and determine the search area according to the position of the first touch area on the first map.
In some embodiments, the search guidance information includes a second touch area corresponding to a touch operation, and the determination module 601 is configured to: determine the second map from the maps other than the first map among the at least two maps according to the second touch area corresponding to the touch operation; and determine the corresponding area according to the position of the second touch area on the second map.
In some embodiments, the search guidance information is a voice instruction, and the determination module 601 is configured to: display the at least two maps and annotation information in a map display area, the annotation information being used to mark different display sub-areas; receive the voice instruction; determine the first map from the at least two maps according to the first display sub-area marked by the annotation information in the voice instruction; and determine the search area according to the position of the first display sub-area on the first map.
In some embodiments, the search guidance information is gesture information, and the determination module 601 is configured to: display the at least two maps in the map display area; recognize the gesture information contained in a gesture operation; and determine the search area of the first map from the at least two maps according to the gesture information and the map display area.
In some embodiments, the search guidance information is eye feature information, and the determination module 601 is configured to: display the at least two maps in the map display area; acquire the user's eye feature information; determine, according to the eye feature information, the gaze area of the user's eyes on the map display area;
determine the first map from the at least two maps according to the gaze area; and determine the search area according to the position of the gaze area on the first map.
In some embodiments, the fusion module 603 is configured to: preliminarily fuse the first map and the second map to obtain an initial fusion map; and fuse the attribute information of each sampling point matching pair in the initial fusion map into the attribute information of one sampling point, thereby obtaining the target fusion map.
In some embodiments, the fusion module 603 is configured to: convert the local coordinates of the first sampling points in the first map into a global coordinate system to obtain the initial global coordinates of the first sampling points; convert the local coordinates of the second sampling points in the second map into the global coordinate system to obtain the initial global coordinates of the second sampling points; and merge the initial global coordinates of each first sampling point and each second sampling point to obtain the initial fusion map.
In some embodiments, the fusion module 603 is configured to: determine the coordinate conversion relationship of the first map's reference coordinate system relative to the second map's reference coordinate system; take the second map's reference coordinate system as the global coordinate system and, according to the coordinate conversion relationship, convert the coordinates of each first sampling point of the first map into initial global coordinates; and fuse the initial global coordinates of each first sampling point into the second map to obtain the initial fusion map.
In some embodiments, the fusion module 603 is configured to: optimize the initial global coordinates of the first sampling points in the sampling point matching pairs to obtain target global coordinates; and fuse the global coordinates of each target point in the initial fusion map and the target global coordinates of the matching first sampling point into the global coordinates of one sampling point, thereby obtaining the target fusion map.
In some embodiments, the fusion module 603 is configured to: determine the reprojection error of each first sampling point according to its initial global coordinates and the initial global coordinates of the matching target point; and iteratively adjust the initial global coordinates of each first sampling point in each sampling point matching pair until the reprojection error of each first sampling point is less than or equal to a specific threshold, determining the global coordinates of the first sampling point input at the last iteration as the target global coordinates.
The above description of the apparatus embodiment is similar to that of the method embodiments and has similar beneficial effects. For technical details not disclosed in the apparatus embodiment of this application, refer to the description of the method embodiments of this application.
It should be noted that, in the embodiments of this application, if the above map fusion method is implemented in the form of software functional modules and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application, in essence or in the part contributing to the related art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to enable an electronic device (which may be a mobile phone, tablet computer, notebook computer, desktop computer, robot, drone, etc.) to execute all or part of the methods described in the embodiments of this application. The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk. Thus, the embodiments of this application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of this application provides an electronic device. Fig. 7 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of this application. As shown in Fig. 7, the hardware entity of the electronic device 700 includes a memory 701 and a processor 702; the memory 701 stores a computer program that can run on the processor 702, and the processor 702 implements the steps of the map fusion method provided in the foregoing embodiments when executing the program.
The memory 701 is used to store instructions and applications executable by the processor 702, and can also cache data to be processed or already processed by the processor 702 and by each module in the electronic device 700 (for example, image data, audio data, voice communication data, and video communication data); it can be implemented by flash memory (FLASH) or random access memory (RAM).
Correspondingly, an embodiment of this application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the map fusion method provided in the foregoing embodiments.
An embodiment of this application provides a chip including a processor configured to call and run a computer program from a memory, so that a device installed with the chip executes the steps of any map fusion method described in the embodiments of this application.
It should be pointed out here that the above descriptions of the storage medium, chip, and terminal device embodiments are similar to the description of the method embodiments and have similar beneficial effects. For technical details not disclosed in the storage medium, chip, and terminal device embodiments of this application, refer to the description of the method embodiments of this application.
It should be understood that references throughout the specification to "one embodiment", "an embodiment", or "some embodiments" mean that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of this application. Thus, occurrences of "in one embodiment", "in an embodiment", or "in some embodiments" throughout the specification do not necessarily refer to the same embodiment. Furthermore, these particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of this application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic and should not limit the implementation of the embodiments of this application in any way. The above embodiment numbers of this application are for description only and do not represent the superiority or inferiority of the embodiments.
The above descriptions of the embodiments tend to emphasize the differences between them; for their same or similar parts, refer to one another. For brevity, they are not repeated here.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, object A and/or object B can mean: object A alone, both object A and object B, or object B alone.
It should be noted that, herein, the terms "comprise", "include", or any of their variants are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes it.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The embodiments of the touch screen system described above are only illustrative; for example, the division of the modules is only a logical functional division, and there may be other divisions in actual implementation, for example multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or modules, and may be electrical, mechanical, or in other forms.
The modules described above as separate components may or may not be physically separate, and components displayed as modules may or may not be physical modules; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the embodiments.
In addition, the functional modules in the embodiments of this application may all be integrated into one processing unit, each module may serve as a unit individually, or two or more modules may be integrated into one unit; the integrated module can be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions; the foregoing program can be stored in a computer-readable storage medium, and when executed, performs the steps of the above method embodiments. The aforementioned storage media include media that can store program code, such as removable storage devices, read-only memory (ROM), magnetic disks, or optical disks.
Alternatively, if the above integrated unit of this application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application, in essence or in the part contributing to the related art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to enable an electronic device (which may be a mobile phone, tablet computer, notebook computer, desktop computer, robot, drone, etc.) to execute all or part of the methods described in the embodiments of this application. The aforementioned storage media include media that can store program code, such as removable storage devices, ROMs, magnetic disks, or optical disks.
The methods disclosed in the several method embodiments provided in this application can be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several product embodiments provided in this application can be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the several method or device embodiments provided in this application can be combined arbitrarily without conflict to obtain new method or device embodiments.
The above are only implementations of this application, but the protection scope of this application is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by this application, which should all be covered within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (20)

  1. A map fusion method, the method comprising:
    determining a search area of a first map from at least two currently displayed maps according to acquired search guidance information, wherein the first map comprises a plurality of first sampling points;
    determining a second map from the maps other than the first map among the at least two maps, and determining the area in the second map corresponding to the search area, wherein the second map comprises a plurality of second sampling points;
    determining, from the second sampling points of the corresponding area, a target point matching attribute information of a first sampling point in the search area to obtain a sampling point matching pair, the sampling point matching pair comprising the target point and the first sampling point matching the target point;
    fusing the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map.
  2. The method according to claim 1, wherein determining a second map from the maps other than the first map among the at least two maps and determining the area in the second map corresponding to the search area comprises:
    determining the corresponding area in the second map from the maps other than the first map among the at least two maps according to the search guidance information.
  3. The method according to claim 1, wherein the search guidance information comprises a first touch area corresponding to a touch operation, and determining the search area of the first map from the at least two currently displayed maps according to the acquired search guidance information comprises:
    displaying the at least two maps in a map display area;
    receiving a touch operation in the map display area;
    determining the first map from the at least two maps according to the first touch area corresponding to the touch operation;
    determining the search area according to the position of the first touch area on the first map.
  4. The method according to claim 3, wherein determining the search area according to the position of the first touch area on the first map comprises:
    determining the area of the first map corresponding to the first touch area as the search area.
  5. The method according to claim 3, wherein the search guidance information comprises a second touch area corresponding to a touch operation, and determining a second map from the maps other than the first map among the at least two maps and determining the area in the second map corresponding to the search area comprises:
    determining the second map from the maps other than the first map among the at least two maps according to the second touch area corresponding to the touch operation;
    determining the corresponding area according to the position of the second touch area on the second map.
  6. The method according to claim 1, wherein the search guidance information is a voice instruction, and determining the search area of the first map from the at least two currently displayed maps according to the acquired search guidance information comprises:
    displaying the at least two maps and annotation information in a map display area, the annotation information being used to mark different display sub-areas;
    receiving the voice instruction;
    determining the first map from the at least two maps according to the first display sub-area marked by the annotation information in the voice instruction;
    determining the search area according to the position of the first display sub-area on the first map.
  7. The method according to claim 1, wherein the search guidance information is gesture information, and determining the search area of the first map from the at least two currently displayed maps according to the acquired search guidance information comprises:
    displaying the at least two maps in a map display area;
    recognizing the gesture information contained in a gesture operation;
    determining the search area of the first map from the at least two maps according to the gesture information and the map display area.
  8. The method according to claim 1, wherein the search guidance information is eye feature information, and determining the search area of the first map from the at least two currently displayed maps according to the acquired search guidance information comprises:
    displaying the at least two maps in a map display area;
    acquiring eye feature information of a user;
    determining, according to the eye feature information, a gaze area of the user's eyes on the map display area;
    determining the first map from the at least two maps according to the gaze area;
    determining the search area according to the position of the gaze area on the first map.
  9. The method according to any one of claims 1 to 8, wherein fusing the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map comprises:
    preliminarily fusing the first map and the second map to obtain an initial fusion map;
    fusing the attribute information of each sampling point matching pair in the initial fusion map into the attribute information of one sampling point, thereby obtaining the target fusion map.
  10. The method according to claim 9, wherein preliminarily fusing the first map and the second map to obtain an initial fusion map comprises:
    converting the local coordinates of the first sampling points in the first map into a global coordinate system to obtain initial global coordinates of the first sampling points;
    converting the local coordinates of the second sampling points in the second map into the global coordinate system to obtain initial global coordinates of the second sampling points;
    merging the initial global coordinates of each first sampling point and each second sampling point to obtain the initial fusion map.
  11. The method according to claim 9, wherein preliminarily fusing the first map and the second map to obtain an initial fusion map comprises:
    determining a coordinate conversion relationship of the reference coordinate system of the first map relative to the reference coordinate system of the second map;
    taking the reference coordinate system of the second map as the global coordinate system, converting the coordinates of each first sampling point in the first map into initial global coordinates according to the coordinate conversion relationship;
    fusing the initial global coordinates of each first sampling point into the second map to obtain the initial fusion map.
  12. The method according to claim 10 or 11, wherein fusing the attribute information of the sampling point matching pairs in the initial fusion map into the attribute information of one sampling point, thereby obtaining the target fusion map, comprises:
    optimizing the initial global coordinates of the first sampling points in the sampling point matching pairs to obtain target global coordinates;
    fusing the global coordinates of each target point in the initial fusion map and the target global coordinates of the matching first sampling point into the global coordinates of one sampling point, thereby obtaining the target fusion map.
  13. The method according to claim 12, wherein optimizing the initial global coordinates of the first sampling points in the sampling point matching pairs to obtain target global coordinates comprises:
    determining the reprojection error of each first sampling point according to its initial global coordinates and the initial global coordinates of the matching target point;
    iteratively adjusting the initial global coordinates of each first sampling point in each sampling point matching pair until the reprojection error of each first sampling point is less than or equal to a specific threshold, and determining the global coordinates of the first sampling point input at the last iteration as the target global coordinates.
  14. A map fusion apparatus, comprising:
    a determination module configured to: determine a search area of a first map from at least two currently displayed maps according to acquired search guidance information, wherein the first map comprises a plurality of first sampling points; determine a second map from the maps other than the first map among the at least two maps, and determine the area in the second map corresponding to the search area, wherein the second map comprises a plurality of second sampling points;
    a matching module configured to determine, from the second sampling points of the corresponding area, a target point matching attribute information of a first sampling point in the search area to obtain a sampling point matching pair, the sampling point matching pair comprising the target point and the first sampling point matching the target point;
    a fusion module configured to fuse the first map and the second map according to the obtained sampling point matching pairs to obtain a target fusion map.
  15. The apparatus according to claim 14, wherein the determination module is configured to:
    determine the corresponding area in the second map from the maps other than the first map among the at least two maps according to the search guidance information.
  16. The apparatus according to claim 14, wherein the search guidance information comprises a first touch area corresponding to a touch operation, and the determination module is configured to:
    display the at least two maps in a map display area;
    receive a touch operation in the map display area;
    determine the first map from the at least two maps according to the first touch area corresponding to the touch operation;
    determine the search area according to the position of the first touch area on the first map.
  17. The apparatus according to any one of claims 14 to 16, wherein the fusion module is configured to:
    convert the local coordinates of the first sampling points in the first map into a global coordinate system to obtain initial global coordinates of the first sampling points;
    convert the local coordinates of the second sampling points in the second map into the global coordinate system to obtain initial global coordinates of the second sampling points;
    merge the initial global coordinates of each first sampling point and each second sampling point to obtain the initial fusion map;
    fuse the attribute information of each sampling point matching pair in the initial fusion map into the attribute information of one sampling point, thereby obtaining the target fusion map.
  18. An electronic device comprising a memory and a processor, the memory storing a computer program runnable on the processor, wherein the processor implements the steps of the map fusion method according to any one of claims 1 to 13 when executing the program.
  19. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the map fusion method according to any one of claims 1 to 13.
  20. A chip comprising a processor configured to call and run a computer program from a memory, so that a device installed with the chip executes the steps of the map fusion method according to any one of claims 1 to 13.
PCT/CN2020/125837 2019-11-27 2020-11-02 Map fusion method and apparatus, device, and storage medium WO2021103945A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20894727.5A patent/EP4056952A4/en MAP FUSION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
US17/824,371 US20220282993A1 (en) 2019-11-27 2022-05-25 Map fusion method, device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911185884.1A patent/CN110986969B/zh Map fusion method and apparatus, device, and storage medium
CN201911185884.1 2019-11-27

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/824,371 Continuation US20220282993A1 (en) 2019-11-27 2022-05-25 Map fusion method, device and storage medium

Publications (1)

Publication Number Publication Date
WO2021103945A1 true WO2021103945A1 (zh) 2021-06-03

Family

ID=70087556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/125837 WO2021103945A1 (zh) 2019-11-27 2020-11-02 地图融合方法及装置、设备、存储介质

Country Status (4)

Country Link
US (1) US20220282993A1 (zh)
EP (1) EP4056952A4 (zh)
CN (1) CN110986969B (zh)
WO (1) WO2021103945A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114061564A (zh) * 2021-11-01 2022-02-18 Guangzhou Xiaopeng Autopilot Technology Co., Ltd. Map data processing method and device

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110986969B (zh) * 2019-11-27 2021-12-28 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Map fusion method and apparatus, device, and storage medium
CN111831771B (zh) * 2020-07-09 2024-03-12 Guangzhou Xiaopeng Autopilot Technology Co., Ltd. Map fusion method and vehicle
CN111831776B (zh) * 2020-07-16 2022-03-11 Guangzhou Xiaopeng Autopilot Technology Co., Ltd. Map fusion method, vehicle, electronic device, and storage medium
CN112183285B (zh) * 2020-09-22 2022-07-12 Hefei Keda Intelligent Robot Technology Co., Ltd. 3D point cloud map fusion method and system for a substation inspection robot
CN113506368B (zh) * 2021-07-13 2023-03-24 Apollo Intelligent Technology (Beijing) Co., Ltd. Map data fusion method and apparatus, electronic device, medium, and program product
CN116775796B (zh) * 2023-08-16 2023-10-31 Water Transport Research Institute of the Ministry of Transport Multi-layer-overlay port area information display method and system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105260988A (zh) * 2015-09-09 2016-01-20 Baidu Online Network Technology (Beijing) Co., Ltd. High-precision map data processing method and device
CN105488459A (zh) * 2015-11-23 2016-04-13 SAIC Motor Corporation Limited Vehicle-mounted 3D road real-time reconstruction method and device
CN106272423A (zh) * 2016-08-31 2017-01-04 Harbin Institute of Technology Shenzhen Graduate School Multi-robot collaborative mapping and localization method for large-scale environments
US20180066957A1 (en) 2016-09-08 2018-03-08 Here Global B.V. Method and apparatus for providing trajectory bundles for map data analysis
CN108286976A (zh) * 2017-01-09 2018-07-17 NavInfo Co., Ltd. Point cloud data fusion method and device, and hybrid navigation system
CN108827249A (zh) * 2018-06-06 2018-11-16 Goertek Inc. Map construction method and device
CN109100730A (zh) * 2018-05-18 2018-12-28 Beijing Normal University - Hong Kong Baptist University United International College Multi-vehicle collaborative rapid mapping method
CN109141431A (zh) * 2018-09-07 2019-01-04 Beijing Greenvalley Technology Co., Ltd. Flight strip matching method and apparatus, electronic device, and readable storage medium
CN110986969A (zh) 2019-11-27 2020-04-10 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Map fusion method and apparatus, device, and storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2723225A1 (en) 2008-05-02 2009-11-05 Eyeic, Inc. System for using image alignment to map objects across disparate images
US9189852B2 (en) 2012-02-02 2015-11-17 Google Inc. Method for manually aligning two digital images on mobile devices
CN103900583B (zh) * 2012-12-25 2018-02-27 Lenovo (Beijing) Co., Ltd. Device and method for simultaneous localization and mapping
US9467660B1 (en) 2014-03-31 2016-10-11 Amazon Technologies, Inc. Map generation using map features from user captured images
US20160379366A1 (en) 2015-06-25 2016-12-29 Microsoft Technology Licensing, Llc Aligning 3d point clouds using loop closures
JP6910820B2 (ja) * 2017-03-02 2021-07-28 Topcon Corporation Point cloud data processing device, point cloud data processing method, and point cloud data processing program
US10937214B2 (en) 2017-03-22 2021-03-02 Google Llc System and method for merging maps
CN109086277B (zh) * 2017-06-13 2024-02-02 Zongmu Technology (Shanghai) Co., Ltd. Overlapping-area map construction method and system, mobile terminal, and storage medium
CN112204343B (zh) * 2018-03-02 2024-05-17 Nvidia Corporation Visualization of high-definition map data
CN109341706B (zh) * 2018-10-17 2020-07-03 Zhang Liang Method for producing a multi-feature fusion map for driverless vehicles
CN109725327B (zh) * 2019-03-07 2020-08-04 Shandong University Multi-machine map construction method and system
CN110415174B (zh) * 2019-07-31 2023-07-07 CloudMinds (Beijing) Technologies Co., Ltd. Map fusion method, electronic device, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114061564A (zh) * 2021-11-01 2022-02-18 Guangzhou Xiaopeng Autopilot Technology Co., Ltd. Map data processing method and device
CN114061564B (zh) * 2021-11-01 2022-12-13 Guangzhou Xiaopeng Autopilot Technology Co., Ltd. Map data processing method and device

Also Published As

Publication number Publication date
US20220282993A1 (en) 2022-09-08
CN110986969A (zh) 2020-04-10
EP4056952A4 (en) 2023-01-18
CN110986969B (zh) 2021-12-28
EP4056952A1 (en) 2022-09-14

Similar Documents

Publication Publication Date Title
WO2021103945A1 (zh) Map fusion method and apparatus, device, and storage medium
US11798237B2 (en) Method for establishing a common reference frame amongst devices for an augmented reality session
WO2021057742A1 (zh) Positioning method and apparatus, device, and storage medium
US20200097091A1 (en) Method and Apparatus of Interactive Display Based on Gesture Recognition
CN105190644B (zh) 用于使用触摸控制的基于图像的搜索的技术
CN110322500A (zh) 即时定位与地图构建的优化方法及装置、介质和电子设备
EP3090382B1 (en) Real-time 3d gesture recognition and tracking system for mobile devices
US11893702B2 (en) Virtual object processing method and apparatus, and storage medium and electronic device
US20120054177A1 (en) Sketch-based image search
TW201346640A Image processing apparatus and computer program product
BRPI1100726A2 Communication control device and method, and program
CN112150551A (zh) 物体位姿的获取方法、装置和电子设备
US9069415B2 (en) Systems and methods for finger pose estimation on touchscreen devices
TWI745818B Visual positioning method, electronic device, and computer-readable storage medium
KR102125212B1 (ko) 전자 필기 운용 방법 및 이를 지원하는 전자 장치
CN110349212A (zh) 即时定位与地图构建的优化方法及装置、介质和电子设备
US11574414B2 (en) Edge-based three-dimensional tracking and registration method and apparatus for augmented reality, and storage medium
JP2013164697A Image processing device, image processing method, program, and image processing system
US9323346B2 (en) Accurate 3D finger tracking with a single camera
CN115471416A (zh) 目标识别方法、存储介质及设备
US11315265B2 (en) Fingertip detection method, fingertip detection device, and medium
Yousefi et al. 3D hand gesture analysis through a real-time gesture search engine
US20230419733A1 (en) Devices and methods for single or multi-user gesture detection using computer vision
Pears et al. Display registration for device interaction
US11809520B1 (en) Localized visual similarity

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20894727

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020894727

Country of ref document: EP

Effective date: 20220610

NENP Non-entry into the national phase

Ref country code: DE