WO2021057745A1 - Map fusion method and apparatus, device, and storage medium

Publication number: WO2021057745A1
Authority: WIPO (PCT)
Prior art keywords: voxel, map, coordinate, local map, coordinates
Application number: PCT/CN2020/116930
Other languages: English (en), French (fr)
Inventors: 金珂, 杨宇尘, 马标, 李姬俊男
Original assignee: Oppo广东移动通信有限公司 (Guangdong OPPO Mobile Telecommunications Corp., Ltd.)
Application filed by Oppo广东移动通信有限公司

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases

Definitions

  • The embodiments of the present application relate to electronic technology, and in particular, but not exclusively, to map fusion methods, apparatuses, devices, and storage media.
  • Indoor environment maps can be established from visual information, and the need for map fusion inevitably arises when constructing such maps.
  • For example, when a map is built over multiple data-collection sessions, or when multiple people or machines collaborate to complete map construction, it is necessary to synthesize the local maps corresponding to the individual local areas into a global map.
  • In the related art, the synthesized global map has a large fusion error, and the consistency among the synthesized local maps is low.
  • The map fusion method, apparatus, device, and storage medium provided by the embodiments of the present application can improve map fusion accuracy and reduce fusion errors, thereby alleviating the problem of low consistency among the fused local maps.
  • the technical solutions of the embodiments of the present application are implemented as follows:
  • The map fusion method provided by the embodiments of the present application includes: obtaining a first local map set and a second local map, where the first local map set includes at least one first local map, and the coordinate system of each first local map differs from that of the second local map; and fusing each first local map with the second local map according to the first coordinates of the first voxels in each first local map and the second coordinates of the second voxels in the second local map, to obtain a target global map. The second coordinates of the second voxels are obtained by updating the initial coordinates of the second voxels according to a plurality of sample image pairs, where each sample image pair includes a two-dimensional sample image and a depth sample image.
  • The map fusion apparatus provided by the embodiments of the present application includes: a map acquisition module configured to acquire a first local map set and a second local map, where the first local map set includes at least one first local map and the coordinate system of each first local map differs from that of the second local map; and a map fusion module configured to fuse each first local map with the second local map according to the first coordinates of the first voxels in each first local map and the second coordinates of the second voxels in the second local map, to obtain a target global map. The second coordinates of the second voxels are obtained by updating the initial coordinates of the second voxels according to a plurality of sample image pairs, where each sample image pair includes a two-dimensional sample image and a depth sample image.
  • The electronic device provided by the embodiments of the present application includes a memory and a processor. The memory stores a computer program executable on the processor, and the processor, when executing the program, implements the steps of the map fusion method provided by the embodiments of the present application.
  • The computer-readable storage medium provided by the embodiments of the present application stores a computer program thereon, and the computer program, when executed by a processor, implements the steps of the map fusion method provided by the embodiments of the present application.
  • Because the second coordinates of the second voxels in the second local map are obtained by updating (that is, correcting) the initial coordinates of the second voxels according to a plurality of sample image pairs, the accuracy of the coordinates in the second local map can be improved, higher map fusion accuracy can be obtained during map fusion, and the coordinate values in the resulting target global map are smoother.
  • FIG. 1 is a schematic flowchart of a map fusion method according to an embodiment of this application;
  • FIG. 2 is a schematic diagram of fusing multiple first local maps into a second local map according to an embodiment of this application;
  • FIG. 3 is a schematic diagram of quantizing a specific physical space according to an embodiment of this application;
  • FIG. 4 is a schematic diagram of the composition structure of a map fusion apparatus according to an embodiment of this application;
  • FIG. 5 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of this application.
  • The terms "first/second/third" in the embodiments of this application are used to distinguish different objects and do not denote a specific order. Where permitted, the specific order or sequence may be interchanged, so that the embodiments of this application described herein can be implemented in orders other than those illustrated or described herein.
  • The embodiments of this application provide a map fusion method that can be applied to electronic devices with information processing capabilities, such as mobile phones, tablet computers, notebook computers, desktop computers, robots, drones, and servers.
  • The functions implemented by the map fusion method can be realized by a processor in the electronic device calling program code, and the program code can be stored in a computer storage medium. It follows that the electronic device includes at least a processor and a storage medium.
  • FIG. 1 is a schematic flowchart of the map fusion method according to an embodiment of this application. As shown in FIG. 1, the method may include the following steps S101 to S102:
  • Step S101: Obtain a first local map set and a second local map, where the first local map set includes at least one first local map, and the coordinate system of the first local map differs from that of the second local map.
  • The first local map set includes one or more different first local maps.
  • The plurality of different first local maps may be local maps constructed, by different electronic devices or by the same electronic device in different regions, from a plurality of sample image pairs collected by a built-in image collection module.
  • The process of constructing the first local map and the second local map can be implemented through step S801 to step S803 in the following embodiment.
  • Multiple different electronic devices may send their constructed first local maps, in a crowdsourced manner, to the electronic device that executes the map fusion method.
  • The type of the coordinate system of the second local map is not limited, and may be any coordinate system suitable for a given application scenario. For example, the coordinate system of the second local map may be a world coordinate system, or a custom coordinate system.
  • Step S102: Fuse each first local map with the second local map according to the first coordinates of the first voxels in each first local map and the second coordinates of the second voxels in the second local map, to obtain the target global map. The second coordinates of the second voxels are obtained by updating the initial coordinates of the second voxels according to a plurality of sample image pairs, where each sample image pair includes a two-dimensional sample image and a depth sample image.
  • "First voxel" does not refer to one specific voxel in the first local map; each voxel in the first local map is called a first voxel.
  • Likewise, "second voxel" does not refer to one specific voxel in the second local map; each voxel in the second local map may be called a second voxel.
  • The "first" in "first voxel" and the "second" in "second voxel" merely distinguish voxels belonging to different local maps and carry no other special meaning.
  • Similarly, "first coordinate" and "second coordinate" do not refer to specific coordinates; they distinguish the coordinates of voxels in different local maps.
  • The terms local map and global map are relative: the former refers to a part of a specific physical space, and the latter to the whole physical space.
  • Taking an office building as the specific physical space as an example, a local map corresponds to a certain floor, or to a room on a certain floor, while the global map corresponds to the entire office building.
  • The electronic device may implement step S102 through step S302 to step S304, or step S616 to step S619, in the following embodiments.
  • The second coordinates of the second voxels in the second local map are obtained by updating the initial coordinates of the second voxels multiple times according to multiple sample image pairs, rather than from a single pass over the sample image pairs.
  • Through this process of updating (i.e., correcting), the accuracy of the coordinates of the second voxels in the second local map can be greatly improved, so that when the first local maps are fused on the basis of the second local map, the fusion error can be reduced and higher map fusion accuracy obtained.
  • The embodiment of the present application further provides a map fusion method, which can be applied to an electronic device, and the method can include the following steps S201 to S202:
  • Step S201: Acquire a first local map set and a second local map, where the first local map set includes at least one first local map, and the coordinate system of the first local map differs from that of the second local map;
  • Step S202: According to the first coordinates of the first voxels in each first local map and the second coordinates of the second voxels in the second local map, sequentially fuse the first local maps in the first local map set that satisfy a first condition into the current second local map, to obtain the target global map.
  • For example, the first local map in the first local map set that has the largest overlapping area with the second local map is preferentially fused into the second local map.
  • In each round, the first local map that matches the current second local map is fused into the current second local map; in this way, not only can the fusion accuracy be improved, but fusion failures can also be avoided, improving the efficiency of map fusion.
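The sequential strategy of steps S201 to S202 can be sketched as a greedy loop. The following Python sketch is illustrative only and not part of the patent; the function names and the toy "overlap" score are hypothetical stand-ins for the first condition:

```python
def fuse_all(first_maps, second_map, match, fuse):
    """Greedy outer loop of steps S201-S202: repeatedly pick the first local
    map that best matches the current second map (the 'first condition', e.g.
    the largest overlap), fuse it in, and continue until the set is empty.
    `match(a, b)` returns a score (higher = more overlap); `fuse(a, b)`
    returns the fused map. Both are supplied by the caller."""
    remaining = list(first_maps)
    while remaining:
        best = max(remaining, key=lambda m: match(m, second_map))
        remaining.remove(best)
        second_map = fuse(best, second_map)  # later picks see the updated map
    return second_map

# Toy usage: maps are sets of cells; overlap = shared cells; fusion = union.
maps = [{1, 2}, {2, 3}, {3, 4}]
result = fuse_all(maps, {4, 5}, match=lambda a, b: len(a & b),
                  fuse=lambda a, b: a | b)
assert result == {1, 2, 3, 4, 5}
```

Note that each new "best" map is scored against the already-fused second map, matching the alternating determine-then-fuse order described below.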
  • The embodiment of the present application further provides a map fusion method, which can be applied to an electronic device, and the method can include the following steps S301 to S304:
  • Step S301: Obtain a first local map set and a second local map, where the first local map set includes at least one first local map, and the coordinate system of the first local map differs from that of the second local map;
  • Step S302: Determine, from the first local map set, a target local map that matches the second local map according to the first coordinates of the first voxels in each first local map and the second coordinates of the second voxels in the second local map.
  • the electronic device can implement step S302 through step S402 to step S404 in the following embodiment.
  • Step S303: Fuse the target local map into the second local map to obtain a fused second local map.
  • Step S304: From the remaining first local maps in the first local map set, determine a new target local map that matches the fused second local map, and fuse the new target local map into the fused second local map; repeat until every first local map in the first local map set has been fused into the second local map, to obtain the target global map.
  • As shown in FIG. 2, the first local maps included in the first local map set 20 are the map 21, the map 22, and the map 23.
  • Suppose the first local map in the set 20 that matches the second local map 24 is the map 22. The map 22 is then taken as the target local map and fused into the second local map 24, yielding the fused second local map 241.
  • Next, the new target local map that matches the map 241 is determined from the first local map set to be the map 21, and the map 21 is fused into the map 241 to obtain the fused second local map 242; finally, the map 23 is fused into the map 242 to obtain the global map 243.
  • Determining the first local map to be fused (i.e., the target local map) and fusing that map into the current second local map are performed alternately; that is, each new target local map is determined on the basis of the currently obtained second local map. In this way, the electronic device can find, from the first local map set, the target local map that has the largest overlapping area with the current second local map, thereby reducing the fusion error and improving map fusion accuracy.
  • The larger the overlapping area between the target local map and the second local map, the more accurately the first coordinate conversion relationship (i.e., the coordinate conversion relationship described in step S503 of the following embodiment) can be determined.
  • the embodiment of the present application further provides a map fusion method, which can be applied to an electronic device, and the method can include the following steps S401 to S406:
  • Step S401: Acquire a first local map set and a second local map, where the first local map set includes at least one first local map, and the coordinate system of the first local map differs from that of the second local map;
  • Step S402: According to an iteration strategy, match the first coordinates of each first voxel of the n-th first local map in the first local map set against the second coordinates of the multiple second voxels in the second local map, to obtain a matching result, where n is an integer greater than 0.
  • the electronic device can determine the second voxel that matches the first voxel by executing step S402.
  • The electronic device can obtain the matching result through step S502 to step S509 in the following embodiment. It should be noted that the electronic device can obtain the matching result between each first local map and the second local map through step S402; that is, the n-th first local map is any map in the first local map set.
  • Step S403: If the matching result indicates that the matching is successful, determine the n-th first local map as the target local map;
  • Step S404: If the matching result indicates that the matching fails, continue, according to the iteration strategy, to match the first coordinates of each first voxel in the next first local map against the second coordinates of the multiple second voxels, until the target local map is determined from the first local map set; then step S405 can be entered;
  • Step S405: Fuse the target local map into the second local map to obtain a fused second local map.
  • The electronic device may obtain the fused second local map through steps S512 to S513 in the following embodiments.
  • Step S406: From the remaining first local maps in the first local map set, determine a new target local map that matches the fused second local map, and fuse the new target local map into the fused second local map; repeat until every first local map in the first local map set has been fused into the second local map, to obtain the target global map.
  • Even though the first coordinates of the first voxels of each first local map must be matched against the second coordinates of many second voxels in the second local map, each first local map can still be well fused into the second local map, so that the electronic device can construct a target global map of a larger scene in a shorter period; for example, it can quickly obtain global maps of large airports, shopping malls, large underground parking lots, and even city-scale areas, which greatly expands the usable range of the map.
  • Moreover, the data volume of the map is greatly reduced, so that multiple electronic devices can construct the corresponding first local maps for different regions and then transmit them wirelessly.
  • The embodiment of the present application further provides a map fusion method, which can be applied to an electronic device, and the method can include the following steps S501 to S514:
  • Step S501: Obtain a first local map set and a second local map, where the first local map set includes at least one first local map, and the coordinate system of the first local map differs from that of the second local map;
  • Step S502: From the plurality of second voxels, select an initial target voxel that matches each first voxel in the n-th first local map, where n is an integer greater than 0;
  • the electronic device may select the initial target voxel through steps S602 to S604 in the following embodiments.
  • A second voxel that may match a first voxel in the first local map can be preliminarily selected through step S502.
  • However, the initial target voxel may not be the object that truly matches the first voxel; therefore, it is necessary to further determine, through the following steps S503 to S510, whether the initial target voxel truly matches the first voxel.
  • Step S503: Determine the first coordinate conversion relationship of the n-th first local map relative to the second local map according to the first coordinate of each first voxel in the n-th first local map and the second coordinate of the corresponding initial target voxel.
  • In implementation, an error function can be constructed from the first coordinate of each first voxel in the n-th first local map and the second coordinate of the corresponding initial target voxel; the currently optimal first coordinate conversion relationship is then obtained by minimizing this error function.
  • Denote the first coordinate of the s-th first voxel by p_s and the second coordinate of the corresponding initial target voxel by q_s. The following formula (1) can then be written:

    E(R, T) = (1/N) · Σ_{s=1}^{N} || q_s − (R·p_s + T) ||²   (1)

  • where E(R, T) is the error function, N is the number of matched voxel pairs, and R and T are, respectively, the rotation matrix and translation vector of the first coordinate conversion relationship to be solved. The optimal R and T in formula (1) can then be solved by the least squares method.
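For illustration, the least-squares minimization over matched point pairs described here admits a closed-form SVD solution (the Kabsch/Umeyama method). The following Python sketch is illustrative and not part of the patent; all identifiers are hypothetical:

```python
import numpy as np

def solve_rigid_transform(p, q):
    """Closed-form least-squares rigid transform: find R and T minimizing
    sum ||q_s - (R p_s + T)||^2 over matched pairs (Kabsch/Umeyama).

    p: (N, 3) first coordinates; q: (N, 3) matched target coordinates."""
    p_mean, q_mean = p.mean(axis=0), q.mean(axis=0)
    H = (p - p_mean).T @ (q - q_mean)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the SVD solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = q_mean - R @ p_mean
    return R, T

# Usage: recover a known transform from synthetic matched voxel coordinates.
rng = np.random.default_rng(0)
p = rng.normal(size=(100, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
T_true = np.array([1.0, -2.0, 0.5])
q = p @ R_true.T + T_true
R, T = solve_rigid_transform(p, q)
assert np.allclose(R, R_true) and np.allclose(T, T_true)
```

With exact correspondences the recovered R and T match the true transform to machine precision; with noisy correspondences the same formula gives the least-squares optimum.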
  • Step S504: Determine a matching error according to the first coordinate conversion relationship, the first coordinate of each first voxel in the n-th first local map, and the second coordinate of the corresponding initial target voxel.
  • For example, the matching error can be determined through step S606 and step S607 in the following embodiment.
  • Step S505: Count the number of times the matching error has been determined.
  • Each time a matching error is determined, the count is incremented; when matching moves on to a new first local map, the currently counted number may be cleared to zero.
  • Step S506: Determine whether the number of times is greater than a second threshold; if so, execute step S507; otherwise, execute step S508.
  • If the number of times exceeds the second threshold, steps S502 to S506 are executed to determine whether the (n+1)-th (i.e., the next) first local map matches the current second local map.
  • Step S507: Generate a matching result characterizing the matching failure, and continue to select, from the plurality of second voxels, initial target voxels that match each first voxel in the next first local map, until a matching result characterizing a successful matching is generated; then proceed to step S510.
  • Step S508: Determine whether the matching error is greater than a first threshold; if so, return to step S502 to reselect the initial target voxels and re-determine the matching error; otherwise, execute step S509.
  • Step S509: Generate a matching result characterizing the success of the matching.
  • Step S510: If the matching result indicates that the matching is successful, determine the n-th first local map as the target local map, and then proceed to step S512.
  • If the matching error is greater than the first threshold, the currently selected initial target voxels are not the objects that match the first voxels in the current first local map. In this case, it is necessary to return to step S502, reselect the initial target voxels, and re-execute steps S503 to S504 based on the reselected voxels to re-determine the matching error. When the re-determined matching error is less than the first threshold, the initial target voxels selected in the current iteration are determined to be the second voxels that truly match the first voxels in the current first local map, and step S512 is entered.
  • Step S511: If the matching result indicates that the matching fails, continue, according to the iteration strategy, to match the first coordinates of each first voxel in the next first local map against the second coordinates of the multiple second voxels, until the target local map is determined from the first local map set; then step S512 is entered.
  • Step S512: Perform coordinate conversion on the first coordinate of each first voxel in the target local map according to the first coordinate conversion relationship determined when the matching error is less than or equal to the first threshold, to obtain the fifth coordinate corresponding to each first voxel.
  • The fifth coordinate of a first voxel is its coordinate value in the coordinate system of the second local map.
  • Step S513: Fuse the target local map and the second local map according to the fifth coordinate of each first voxel and the second coordinate of each second voxel in the second local map, to obtain the fused second local map.
  • The electronic device may obtain the fused second local map through step S616 to step S618 in the following embodiment.
  • Step S514: From the remaining first local maps in the first local map set, determine a new target local map that matches the fused second local map, and fuse the new target local map into the fused second local map; repeat until every first local map in the first local map set has been fused into the current second local map, to obtain the target global map.
  • The embodiment of the present application further provides a map fusion method, which can be applied to an electronic device, and the method can include the following steps S601 to S619:
  • Step S601: Obtain a first local map set and a second local map, where the first local map set includes at least one first local map, and the coordinate system of the first local map differs from that of the second local map;
  • Step S602: Obtain a second coordinate conversion relationship between the n-th first local map and the second local map, where n is an integer greater than 0.
  • Initially, the second coordinate conversion relationship can be set to an initial value.
  • Step S603: Perform coordinate conversion on the first coordinate of the j-th first voxel in the n-th first local map according to the second coordinate conversion relationship, to obtain the third coordinate of the j-th first voxel, where j is an integer greater than 0;
  • Step S604: Match the third coordinate against the second coordinates of the plurality of second voxels, to obtain an initial target voxel that matches the j-th first voxel.
  • In implementation, the distance between the third coordinate of the j-th first voxel and the second coordinate of each second voxel can be determined; the second voxel closest to the j-th first voxel is then determined as the initial target voxel, or a second voxel whose distance is less than or equal to a distance threshold is determined as the initial target voxel.
  • It should be noted that the n-th first local map may be any first local map in the first local map set, and the j-th first voxel may be any first voxel in the n-th first local map.
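The nearest-neighbour selection of initial target voxels in step S604 can be sketched as follows. This is an illustrative brute-force version, not part of the patent; the identifiers are hypothetical (a k-d tree would typically replace the pairwise distance matrix at scale):

```python
import numpy as np

def match_initial_targets(third_coords, second_coords):
    """For each converted first voxel (its third coordinate), find the nearest
    second voxel, as in step S604. Returns, per first voxel, the index of the
    closest second voxel and the distance to it."""
    # Pairwise Euclidean distances, shape (J, K).
    d = np.linalg.norm(third_coords[:, None, :] - second_coords[None, :, :],
                       axis=2)
    idx = d.argmin(axis=1)                 # index of the closest second voxel
    return idx, d[np.arange(len(idx)), idx]

# Usage: two converted first voxels matched against three second voxels.
second = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
third = np.array([[0.9, 0.1, 0.0], [0.1, 0.9, 0.0]])
idx, dist = match_initial_targets(third, second)
assert idx.tolist() == [1, 2]
```

A distance threshold (rather than the plain minimum) can be applied to `dist` to reject matches that are too far away, as the alternative described above.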
  • Step S605: Determine the first coordinate conversion relationship of the n-th first local map relative to the second local map according to the first coordinate of each first voxel in the n-th first local map and the second coordinate of the corresponding initial target voxel.
  • Step S606: Perform coordinate conversion on the first coordinate of the j-th first voxel in the n-th first local map according to the first coordinate conversion relationship, to obtain the fourth coordinate of the j-th first voxel, where j is an integer greater than 0.
  • Step S607: Determine the matching error according to the fourth coordinate of each first voxel in the n-th first local map and the second coordinate of the corresponding initial target voxel.
  • When the electronic device implements step S607, it may first determine the first distance (for example, the Euclidean distance) between the fourth coordinate of each first voxel in the n-th first local map and the second coordinate of the corresponding initial target voxel, and then determine the matching error according to the first distances.
  • For example, the average distance between the first voxels and their matched initial target voxels may be determined as the matching error.
  • The matching error d can then be found by the following formula (2):

    d = (1/N) · Σ_{s=1}^{N} || q_s − (R·p_s + T) ||   (2)

    where R·p_s + T is the fourth coordinate of the s-th first voxel, q_s is the second coordinate of the corresponding initial target voxel, and N is the number of matched voxel pairs.
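As a sketch of this average-distance matching error, the following Python fragment is illustrative only; the identifiers are hypothetical and not from the patent:

```python
import numpy as np

def matching_error(fourth_coords, target_coords):
    """Average Euclidean distance between converted first voxels (their fourth
    coordinates) and the matched initial target voxels, per formula (2)."""
    return np.linalg.norm(fourth_coords - target_coords, axis=1).mean()

# Usage: one perfect match (distance 0) and one 3-4-5 offset (distance 5).
fourth = np.array([[0.0, 0.0, 0.0], [3.0, 4.0, 0.0]])
target = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
assert matching_error(fourth, target) == 2.5   # (0 + 5) / 2
```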
  • Step S608: Count the number of times the matching error has been determined.
  • Step S609: Determine whether the number of times is greater than a second threshold; if so, go to step S610; otherwise, go to step S611.
  • Step S610: Generate a matching result characterizing the matching failure, and return to step S602 to obtain the second coordinate conversion relationship between the next first local map and the second local map; once a matching result characterizing a successful matching is generated, go to step S613.
  • Step S611: Determine whether the matching error is greater than the first threshold; if so, use the first coordinate conversion relationship as the new second coordinate conversion relationship and return to step S603 to reselect the initial target voxels; otherwise, execute step S612.
  • If the matching error is greater than the first threshold, the acquired second coordinate conversion relationship does not conform to reality, and the obtained initial target voxels are therefore not the objects that truly match the first voxels.
  • In this case, the first coordinate conversion relationship is used as the new second coordinate conversion relationship, and steps S603 to S610 are executed again, until the matching error is less than the first threshold, at which point step S612 is executed.
  • Step S612: Generate a matching result characterizing the success of the matching.
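The loop of steps S602 to S612 is an ICP-style alternation between nearest-neighbour matching and least-squares transform estimation. The following self-contained Python sketch is illustrative and not part of the patent; all names and the toy thresholds are hypothetical:

```python
import numpy as np

def icp_fit(first_coords, second_coords, max_iters=50, err_threshold=1e-6):
    """ICP-style alternation sketching steps S602-S612: convert (S603), match
    nearest voxels (S604), re-estimate R and T by least squares (S605), convert
    again (S606), measure the average error of formula (2) (S607), and loop
    while the error stays above the first threshold (S611)."""
    R, T = np.eye(3), np.zeros(3)                  # S602: initial conversion
    err = np.inf
    for _ in range(max_iters):                     # S608/S609 bound iterations
        third = first_coords @ R.T + T             # S603: third coordinates
        d = np.linalg.norm(third[:, None, :] - second_coords[None, :, :],
                           axis=2)
        targets = second_coords[d.argmin(axis=1)]  # S604: initial targets
        pm, qm = first_coords.mean(0), targets.mean(0)   # S605: Kabsch
        H = (first_coords - pm).T @ (targets - qm)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        T = qm - R @ pm
        fourth = first_coords @ R.T + T            # S606: fourth coordinates
        err = np.linalg.norm(fourth - targets, axis=1).mean()  # S607: (2)
        if err <= err_threshold:                   # S611 -> S612: success
            break
    return R, T, err

# Usage: the same voxel cloud offset by a small rigid motion.
rng = np.random.default_rng(1)
p = rng.normal(size=(200, 3))
a = 0.05
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
q = p @ R_true.T + np.array([0.05, -0.03, 0.02])
R, T, err = icp_fit(p, q)
assert err < 1e-3
```

Returning with `err` above the threshold after `max_iters` corresponds to the failure branch (S610), upon which matching would move on to the next first local map.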
  • Step S613: If the matching result indicates that the matching is successful, determine the n-th first local map as the target local map, and then proceed to step S615.
  • It should be noted that if the matching result characterizing the successful matching was obtained by matching the first coordinates of the first voxels in the next first local map against the second coordinates of the multiple second voxels, then that next first local map is determined as the target local map. In other words, whichever first local map is being matched when the matching result indicates success is determined as the target local map.
  • Step S614: If the matching result indicates that the matching fails, continue, according to the iteration strategy, to match the first coordinates of each first voxel in the next first local map against the second coordinates of the multiple second voxels, until the target local map is determined from the first local map set; then step S615 is entered.
  • Step S615: Perform coordinate conversion on the first coordinate of each first voxel in the target local map according to the first coordinate conversion relationship determined when the matching error is less than or equal to the first threshold, to obtain the fifth coordinate corresponding to each first voxel;
  • Step S616: Determine the second distance between the fifth coordinate of the k-th first voxel in the target local map and the second coordinate of each second voxel in the second local map, to obtain a second distance set, where k is an integer greater than 0.
  • The k-th first voxel may be any first voxel in the target local map.
  • Step S617: If there is a target second distance in the second distance set that satisfies a second condition, update the second coordinate of the target second voxel corresponding to the target second distance according to the first coordinate and the fifth coordinate of the k-th first voxel.
  • In some embodiments, the second condition is that the distance is less than or equal to a third threshold; in other embodiments, the second condition is that the distance is the minimum in the set.
  • The electronic device can update the second coordinate of the target second voxel through step S701 to step S704 in the following embodiment.
• The fusion of the target local map into the second local map is implemented through steps S616 to S618. That is to say, on the one hand, the electronic device can update the second coordinate of a second voxel according to the first and fifth coordinates of the first voxel that matches that second voxel; on the other hand, if there is no second voxel in the second local map that matches the first voxel, the first voxel is used as a new second voxel in the second local map, and its fifth coordinate is used as the second coordinate of the new second voxel.
• In this way, the problem of information redundancy caused by directly adding the fifth coordinates of the first voxels in the target local map to the second local map can be avoided, and the second coordinates of the target second voxels are smoother, thereby reducing the fusion error, improving the map fusion accuracy, and further improving the accuracy of visual positioning.
• Step S618: If there is no target second distance that satisfies the second condition in the second distance set, use the k-th first voxel as a new second voxel in the second local map, and use its fifth coordinate as the second coordinate of the new second voxel; repeat the above steps S616 to S618 to fuse the fifth coordinate of each first voxel into the second local map to obtain a fused second local map, and then proceed to step S619.
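The matching-and-update loop of steps S616 to S618 can be sketched as follows. This is a minimal illustration only: it assumes Euclidean distance for the second distance, replaces the full distance-model update of steps S701 to S704 with a simple coordinate average, and uses a hypothetical third-threshold value.

```python
import math

def fuse_target_into_second(first_voxels_fifth, second_voxels, third_threshold=0.5):
    """Fuse transformed first-voxel coordinates (fifth coordinates) into the
    second local map (steps S616-S618, simplified sketch)."""
    fused = [list(c) for c in second_voxels]
    for fifth in first_voxels_fifth:
        # Step S616: second distance from this first voxel to every second voxel.
        dists = [math.dist(fifth, s) for s in fused]
        k = min(range(len(dists)), key=dists.__getitem__) if dists else -1
        if k >= 0 and dists[k] <= third_threshold:
            # Step S617 (simplified): smooth the matched second voxel's
            # coordinate instead of the full weighted distance-model update.
            fused[k] = [(a + b) / 2 for a, b in zip(fused[k], fifth)]
        else:
            # Step S618: no match -> add the first voxel as a new second voxel.
            fused.append(list(fifth))
    return fused

merged = fuse_target_into_second(
    first_voxels_fifth=[(0.1, 0.0, 0.0), (5.0, 5.0, 5.0)],
    second_voxels=[(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)],
)
```

Here the first transformed voxel matches an existing second voxel and smooths it, while the second lies far from every second voxel and is appended as a new one.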
• Step S619: Determine a new target local map matching the fused second local map from the remaining first local maps in the first local map set, so as to fuse the new target local map into the fused second local map, until each of the first partial maps in the first partial map set is fused into the current second partial map, to obtain the target global map.
  • the electronic device can obtain the target global map by executing steps S602 to S618 for multiple times.
• To update the second coordinate of the target second voxel corresponding to the target second distance according to the first coordinate and the fifth coordinate of the k-th first voxel, the electronic device can perform the following steps S701 to S704:
  • Step S701 Acquire a first distance model corresponding to the target second voxel
  • Step S702 Obtain a historical third distance from the target second voxel to the surface of the object
• Step S703: Input the Z-axis coordinate value of the first coordinate of the k-th first voxel, the Z-axis coordinate value of the fifth coordinate of the k-th first voxel, and the historical third distance into the first distance model, so as to update the historical third distance and obtain the updated third distance.
  • the first distance model is as shown in formula (3):
  • W t represents the weight of the target second voxel at the current time t
  • W t-1 represents the weight of the target second voxel at the previous time t-1
• maxweight represents the maximum weight among the weights of all voxels at the previous time t-1
  • z 1 represents the Z-axis coordinate value of the first coordinate of the k-th first voxel
  • z 5 represents the Z-axis coordinate value of the fifth coordinate of the k-th first voxel
  • maxtruncation and mintruncation represent the maximum and minimum values of the truncation range
• D t-1 represents the distance from the target second voxel to the surface of the object determined at the previous time t-1, that is, an instance of the historical third distance; and D t is the updated third distance currently to be determined.
• After the Z-axis coordinate value z 5 of the fifth coordinate of the k-th first voxel and the Z-axis coordinate value z 1 of the first coordinate of the k-th first voxel are input into the first distance model shown in formula (3), the historical third distance D t-1 can be updated, thereby updating the second coordinate of the target second voxel.
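The body of formula (3) is not reproduced in the text above. Based on the variables just described, it corresponds to the standard TSDF weighted running-average update; the following is a reconstruction, which may differ in detail (in particular in the sign convention of z 1 and z 5) from the original formula:

```latex
% Truncated signed distance of the new observation (reconstruction):
d_t = \max\!\big(\text{mintruncation},\ \min(\text{maxtruncation},\ z_1 - z_5)\big)
% Weighted running average of the stored distance, and weight update:
D_t = \frac{W_{t-1}\,D_{t-1} + d_t}{W_{t-1} + 1}, \qquad
W_t = \min\!\big(W_{t-1} + 1,\ \text{maxweight}\big)
```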
• Step S704: Update the Z-axis coordinate value of the second coordinate of the target second voxel to the updated third distance.
• In this way, when there is a target second distance that satisfies the second condition in the second distance set, the historical third distance is updated according to the Z-axis coordinate value of the first coordinate of the k-th first voxel, the Z-axis coordinate value of the fifth coordinate of the k-th first voxel, and the historical third distance from the target second voxel to the surface of the object, and the updated third distance is then written to the Z-axis coordinate value of the second coordinate of the target second voxel. Since the update of the second coordinate of the target second voxel takes the historical third distance from the target second voxel to the surface of the object into account, the updated second coordinate is smoother, and better fusion accuracy can be obtained.
• The physical spaces covered by the first partial map and the second partial map are different.
• For example, the physical space covered by the first partial map is room 1,
• and the physical space covered by the second partial map is room 2.
• The construction processes of the first partial map and the second partial map are similar.
  • the map construction process may include the following steps S801 to S803:
  • Step S801 Perform quantization processing on the size of the specific physical space to obtain the initial coordinates of a plurality of the second voxels.
  • the specific physical space refers to the physical scene covered by the second partial map.
  • the specific physical space is a certain room in a certain building.
  • the second voxel is actually a cube with a specific size, that is, the smallest unit in the specific physical space.
• The specific physical space is regarded as a cube 301 with a certain size, and then the cube is quantized with the second voxel 302 as the unit to obtain multiple second voxels; with a specific coordinate system (for example, the world coordinate system) as the reference coordinate system, the initial coordinates of each second voxel are determined.
• For example, if the size of the particular physical space is 512×512×512 m³ and the size of a second voxel is 1×1×1 m³, then quantizing the 512×512×512 m³ physical space in units of 1×1×1 m³ second voxels yields the initial coordinates of 512×512×512 second voxels.
  • the quantization process includes quantizing the size of a specific physical space and determining the initial coordinates of each second voxel.
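The quantization of step S801 can be sketched as follows. This is a minimal illustration under assumed parameters: a cubic space, voxel centers as initial coordinates, and example extent/voxel sizes (a tiny 4 m cube rather than 512 m, to keep the example small).

```python
def quantize_space(extent_m, voxel_m):
    """Quantize a cubic physical space into voxels and return the initial
    coordinate (voxel-center position in the reference frame) of each voxel."""
    n = int(extent_m // voxel_m)          # number of voxels per axis
    half = voxel_m / 2.0                  # offset to the voxel center
    return [(x * voxel_m + half, y * voxel_m + half, z * voxel_m + half)
            for x in range(n) for y in range(n) for z in range(n)]

# Toy example: a 4x4x4 m space quantized with 1x1x1 m voxels -> 64 voxels.
coords = quantize_space(extent_m=4.0, voxel_m=1.0)
```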
• Step S802: Update the initial coordinates of each second voxel according to the plurality of sample image pairs collected in the specific physical space by the image acquisition module, to obtain the second coordinates.
  • the two-dimensional sample image refers to a plane image that does not contain depth information
  • the two-dimensional sample image is an RGB image.
  • the image acquisition module can acquire a two-dimensional sample image through the first camera installed by itself.
  • the depth sample image refers to an image containing depth information.
  • the image acquisition module can acquire the depth sample image through a second camera module (for example, a binocular camera) installed by itself.
  • the electronic device can implement step S802 through step S902 to step S904 in the following embodiment.
• Step S803: Construct the second local map according to the second coordinates of each of the second voxels. That is, the second local map includes the second coordinates of each second voxel but does not include the image features of the pixels, which can reduce the data amount of the second local map and provides a better privacy protection effect.
  • the image acquisition module collects sample images at different times or at different locations, the shooting scenes thereof have overlapping areas. That is to say, different sample images include part of the same content, which causes a large amount of redundant information to be introduced when constructing the second partial map based on these sample images.
  • the same location point in the physical space may be represented by multiple pixels with the same or similar coordinates in the second local map, which greatly increases the data volume of the second local map and affects the construction process of the second local map. It is also unfavorable for obtaining high-precision map fusion results.
• In addition, an excessive amount of data makes it inconvenient for multiple crowdsourcing electronic devices to send their local maps to the electronic device that performs map fusion, which limits the application scenarios of map fusion and reduces its efficiency.
• In the embodiment of the present application, the second local map is constructed in the form of second voxels, that is, the initial coordinates of each second voxel are updated through the collected multiple sample image pairs, so as to obtain a second local map including the second coordinates of each second voxel.
• This method of constructing the second local map is equivalent to fusing the coordinates of all the pixels covered by a second voxel into one coordinate; in this way, the above-mentioned problem that the same position in the physical space is represented in the second local map by multiple pixels with the same or similar coordinates is solved, and a large amount of redundant information is removed.
  • the embodiment of the present application provides a second partial map construction process.
  • the process may include the following steps S901 to S905:
  • Step S901 Perform quantization processing on the size of a specific physical space to obtain the initial coordinates of a plurality of said second voxels;
  • Step S902 controlling the image acquisition module to acquire the sample image pair according to a preset frame rate
  • the image acquisition module can collect sample image pairs while moving.
  • the collection of sample image pairs can be realized by a robot with an image collection module.
  • Step S903 According to the first sample image pair collected by the image collection module at the current moment and the second sample image pair collected at the historical moment, the initial coordinates of each second voxel are updated.
  • the electronic device can implement step S903 through step S113 to step S115 in the following embodiment.
• Step S904: Continue to update the current coordinates of each second voxel according to the first sample image pair and the third sample image pair acquired by the image acquisition module at the next moment, until the sample image acquisition ends, and use the current coordinates of each second voxel as its second coordinates.
• Through step S903 and step S904, the electronic device can update the current coordinates of each second voxel in real time according to the pair of sample images collected by the image collection module at the current time and the pair collected at a historical time; when the image acquisition task of the image collection module ends, the currently updated coordinate of each second voxel is used as the second coordinate corresponding to that second voxel.
  • Step S905 Construct the second local map according to the second coordinates of each of the second voxels.
• In this way, the current coordinates of each second voxel are updated as the sample image pairs are collected. That is, the electronic device continuously uses the sample image pair collected by the image collection module at the current moment and the sample image pair collected at a historical moment (for example, the previous moment) to update the current coordinates of each second voxel. Since two sample image pairs obtained at adjacent moments have a larger overlapping area, this method eliminates the need for the electronic device to find, from the multiple sample image pairs, the two sample image pairs with the largest overlapping area and then update the current coordinates of each second voxel based on them; in this way, the efficiency of map construction can be greatly improved.
  • the embodiment of the present application provides a second partial map construction process.
  • the process may include the following steps S111 to S117:
  • Step S111 performing quantization processing on the size of a specific physical space to obtain the initial coordinates of a plurality of said second voxels;
  • Step S112 controlling the image acquisition module to acquire the sample image pair according to a preset frame rate
  • Step S113 Determine the current camera coordinates of each second voxel according to the first sample image pair and the second sample image pair.
• In step S113, the electronic device can determine the current conversion relationship between the camera coordinate system and the coordinate system where the second local map is located according to the first sample image pair and the second sample image pair, and then convert the initial coordinates of each second voxel to the current camera coordinates.
• The electronic device may determine the current transformation relationship based on the image features of the pixels of the two-dimensional sample image and the depth values of the pixels of the depth sample image in the first sample image pair, and the image features of the pixels of the two-dimensional sample image and the depth values of the pixels of the depth sample image in the second sample image pair. Based on this, the initial coordinates of each second voxel are converted to the current camera coordinates according to the following formula (4).
• In formula (4), (x c , y c , z c ) represents the camera coordinates, (x w , y w , z w ) represents the initial coordinates, and the transformation relationship includes the rotation matrix R and the translation vector T; that is, (x c , y c , z c )ᵀ = R·(x w , y w , z w )ᵀ + T.
  • Step S114 Obtain a depth value corresponding to the current pixel coordinate of each second voxel from the depth sample image of the first sample image pair.
• In step S114, the electronic device can convert the current camera coordinates of each second voxel to current pixel coordinates according to the internal parameter matrix of the image acquisition module, and obtain, from the depth sample image of the first sample image pair, the depth value corresponding to the current pixel coordinates of each second voxel.
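Step S114 relies on the pinhole projection through the internal parameter matrix. The following sketch illustrates it with assumed intrinsic parameters fx, fy, cx, cy and a toy depth image; it is an illustration, not the patent's exact implementation.

```python
def camera_to_pixel(cam, fx, fy, cx, cy):
    """Project a camera-frame point (x_c, y_c, z_c) to pixel coordinates (u, v)
    using the pinhole model: u = fx*x_c/z_c + cx, v = fy*y_c/z_c + cy."""
    x_c, y_c, z_c = cam
    u = int(round(fx * x_c / z_c + cx))
    v = int(round(fy * y_c / z_c + cy))
    return u, v

def depth_at(depth_image, uv):
    """Look up the depth value at pixel (u, v); 0 means no measurement."""
    u, v = uv
    if 0 <= v < len(depth_image) and 0 <= u < len(depth_image[0]):
        return depth_image[v][u]
    return 0.0

depth = [[0.0, 2.0],
         [3.0, 4.0]]                         # toy 2x2 depth sample image
uv = camera_to_pixel((0.0, 0.0, 1.0), fx=100.0, fy=100.0, cx=1.0, cy=1.0)
d = depth_at(depth, uv)                      # depth corresponding to this voxel
```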
  • Step S115 according to the current camera coordinates of each second voxel and the depth value corresponding to the current pixel coordinates of each second voxel, update the initial coordinates corresponding to the second voxel.
• In step S115, the electronic device may obtain the second distance model corresponding to the m-th second voxel among the plurality of second voxels; obtain the historical fourth distance from the m-th second voxel to the surface of the object; update the historical fourth distance to obtain the updated fourth distance; and update the Z-axis coordinate value in the initial coordinate corresponding to each second voxel to the updated fourth distance corresponding to that second voxel, so as to update the initial coordinate corresponding to the second voxel.
  • the second distance model corresponding to the second voxel is shown in the following formula (5):
  • W t represents the weight of the second voxel at the current time t
  • W t-1 represents the weight of the second voxel at the previous time t-1
• maxweight represents the maximum weight among the weights of all second voxels at the previous time t-1
  • D t (u,v) represents the depth value corresponding to the current pixel coordinates of the second voxel
  • z c represents the Z-axis coordinate value of the current camera coordinates of the second voxel
• maxtruncation and mintruncation respectively represent the maximum and minimum of the truncation range
• D t-1 represents the distance from the second voxel to the surface of the object determined at the previous time t-1, that is, an instance of the historical fourth distance.
• D t is the updated fourth distance currently to be determined.
• After the Z-axis coordinate value z c of the current camera coordinate of the second voxel, the depth value D t (u, v) corresponding to the current pixel coordinate of the second voxel, and the historical fourth distance are input into formula (5), the historical fourth distance D t-1 can be updated, thereby updating the initial coordinates of the second voxel.
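Using the variables described above, the per-voxel update of step S115 can be sketched as a standard TSDF running average. This is a hedged reconstruction: the exact weighting and truncation values in formula (5) may differ, and the threshold parameters below are assumptions.

```python
def tsdf_update(d_prev, w_prev, depth_uv, z_c,
                mintrunc=-0.1, maxtrunc=0.1, maxweight=10.0):
    """Update the truncated signed distance D and weight W of one second voxel
    from a new observation: depth_uv is D_t(u, v), the depth value at the
    voxel's current pixel coordinate; z_c is the voxel's camera-frame Z value."""
    # Truncated signed distance of the new observation.
    d_new = max(mintrunc, min(maxtrunc, depth_uv - z_c))
    # Weighted running average of the stored distance (historical fourth distance).
    d = (w_prev * d_prev + d_new) / (w_prev + 1.0)
    w = min(w_prev + 1.0, maxweight)
    return d, w

# One observation: surface seen 5 cm beyond the voxel along the optical axis.
d, w = tsdf_update(d_prev=0.0, w_prev=1.0, depth_uv=1.05, z_c=1.0)
```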
• Step S116: Continue to update the current coordinates of each second voxel according to the first sample image pair and the third sample image pair acquired by the image acquisition module at the next moment, until the sample image acquisition ends, and use the current coordinates of each second voxel as its second coordinates.
• The electronic device continuously updates the current coordinates of each second voxel by repeatedly performing steps similar to steps S113 to S115.
  • Step S117 construct the second local map according to the second coordinates of each of the second voxels.
• The related technology is implemented as follows: obtain a local map, where the local map contains local coordinate system information and scanned map point cloud information; obtain a direction histogram according to the distribution frequency of the normal directions of the points in the scanned map point cloud; weight and project the scanned map point cloud from the discrete directions onto a line according to orthogonal projection to obtain a projection histogram; calculate the histogram correlation to quickly match a similar second local map and first local map; calculate the rotation relationship between the direction histograms corresponding to the second local map and the first local map; calculate the translation relationship between the projection histograms corresponding to the second local map and the first local map; and synthesize the second partial map and the first partial map according to the rotation relationship and the translation relationship.
• The core technical points of the scheme are: first, calculate the distribution frequency of the normal directions of the map point cloud to obtain the direction histogram; second, weight and project onto a line to obtain the projection histogram; third, calculate the rotation relationship and the translation relationship.
  • the rotation relationship and the translation relationship are obtained by calculating the histogram correlation of the two partial maps.
• When the overlapping area of the two partial maps is small, the correlation between the two partial maps is low and the matching robustness is lower, resulting in larger errors in the calculated rotation relationship and translation relationship, thereby reducing the accuracy of map fusion;
• In addition, the calculation of the rotation relationship and the translation relationship relies on the normal direction features of the point cloud, which are not highly accurate and are error-prone, resulting in low accuracy of the calculated map rotation and translation.
  • the embodiments of the application implement an indoor map update technology based on dense point clouds, which can help users create indoor maps in the form of dense point clouds, and achieve the goals of multiple local map fusion and map update.
• This solution can support the need to synthesize multiple indoor local maps with overlapping areas.
• The local maps can be collected in any order in a crowdsourced manner.
  • the solution can support daily tasks such as map fusion, map update, and multi-person map building.
  • the map update has high accuracy and strong robustness.
  • the map construction part mainly collects RGB image information through a monocular camera, extracts image features for visual tracking, and uses a three-dimensional vision sensor (such as TOF, structured light, etc.) to collect depth information to build a dense point cloud.
  • the specific technical steps of constructing a local map in the form of a dense point cloud may include steps S11 to S15:
  • Step S11 using a monocular camera to collect RGB images at a fixed frame rate
  • Step S12 using a three-dimensional vision sensor to collect depth images at a fixed frame rate
  • Step S13 aligning the RGB image and the depth image, including time stamp alignment and pixel alignment;
  • Step S14 extracting feature information in the RGB image and depth information in the depth image in real time during the acquisition process to perform visual tracking and motion estimation on the image acquisition module, and determine the current conversion relationship between the camera coordinate system and the world coordinate system;
  • step S15 a local map in the form of a dense point cloud is constructed in the form of voxels according to the obtained multiple depth images and the TSDF algorithm.
  • the depth image is also called the distance image, which refers to the image in which the distance from the image acquisition module to each point in the shooting scene is used as the pixel value.
  • the depth image intuitively reflects the geometric shape of the visible surface of the object.
  • each pixel represents the distance from the object at the specific coordinate to the camera plane in the field of view of the 3D vision sensor.
  • a local map in the form of a dense point cloud is constructed in the form of voxels.
• Step S151: First obtain the coordinate V (x g , y g , z g ) of the voxel in the global coordinate system, and then convert it from the global coordinates to the camera coordinates V(x c ,y c ,z c ) according to the transformation matrix obtained by motion tracking (that is, the current conversion relationship output in step S14);
• The voxel's global coordinates can be converted to the camera coordinates V (x c , y c , z c ) through the following formula (6):
  • R and T are respectively the rotation matrix and translation vector in the current conversion relationship.
  • Step S152 as shown in the following formula (7), convert the camera coordinates V (x c , y c , z c ) into image coordinates according to the camera's internal parameter matrix to obtain an image coordinate (u, v);
  • z c represents the z-axis value of the camera coordinate, that is, the depth value corresponding to the pixel point (u, v);
  • the Z axis of the camera coordinate system is the optical axis of the lens
  • the depth value of the pixel point (u, v) is the Z axis coordinate value z c of the camera coordinate of the pixel point.
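The bodies of formulas (6) and (7) are not reproduced in the text above. From the variable descriptions, formula (6) is the rigid transform from the global frame to the camera frame, and formula (7) is the pinhole projection through the camera's internal parameter matrix; the following is a reconstruction, with f_x, f_y, c_x, c_y denoting assumed intrinsic parameters:

```latex
% (6): global coordinates -> camera coordinates (rigid transform)
(x_c,\ y_c,\ z_c)^{\top} = R\,(x_g,\ y_g,\ z_g)^{\top} + T
% (7): camera coordinates -> image coordinates (pinhole model)
u = f_x\,\frac{x_c}{z_c} + c_x, \qquad v = f_y\,\frac{y_c}{z_c} + c_y
```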
• Step S153: If the depth value D(u,v) of the depth image of the l-th frame at the image coordinates (u,v) is not 0, compare D(u,v) with the z value of the voxel's camera coordinates V(x c ,y c ,z c ). If D(u,v) < z, it means that the voxel is farther from the camera than the observed surface and is inside the fusion surface; otherwise, it means that the voxel is closer to the camera and outside the fusion surface;
• Step S154: Update the distance value D l and the weight value W l of this voxel according to the result of step S153; the update formula is shown in the following formula (8):
  • W l (x,y,z) is the weight of the voxel in the global data cube of the current frame
  • W l-1 (x,y,z) is the weight of the voxel in the global data cube of the previous frame
• maxweight is the maximum weight among the weights of all voxels in the global data cube in the previous frame, and can be set to 1.
• D l (x,y,z) is the distance between the voxel in the current global data cube and the surface of the object
  • D l-1 ( x, y, z) is the distance from the voxel of the last frame of global data cube to the surface of the object
  • d l (x, y, z) is the distance from the voxel in the global data cube to the surface of the object calculated according to the depth data of the current frame
  • Z represents the z-axis coordinates of the voxel in the camera coordinate system
  • D l (u, v) represents the depth value of the current frame depth image at the pixel point (u, v)
  • [mintruncation, maxtruncation] is the truncation range, which It will affect the fineness of the reconstruction results.
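The body of formula (8) is likewise not reproduced above. From the variable descriptions, it matches the standard TSDF update used in KinectFusion-style reconstruction; the following is a reconstruction, not necessarily the exact form of the original formula:

```latex
% Truncated signed distance from the current frame's depth data:
d_l(x,y,z) = \max\!\big(\text{mintruncation},\ \min(\text{maxtruncation},\ D_l(u,v) - Z)\big)
% Weighted running average of the stored distance, and weight update:
D_l(x,y,z) = \frac{W_{l-1}(x,y,z)\,D_{l-1}(x,y,z) + d_l(x,y,z)}{W_{l-1}(x,y,z) + 1}
\qquad
W_l(x,y,z) = \min\!\big(W_{l-1}(x,y,z) + 1,\ \text{maxweight}\big)
```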
  • a local map based on a dense point cloud can be constructed.
• The map update part mainly uses the Iterative Closest Point (ICP) algorithm to match the dense point clouds of two local maps, so as to solve for the accurate pose of the current first local map relative to the second local map (that is, the first coordinate conversion relationship), and then uses the TSDF algorithm to fuse the first local map into the second local map in the form of voxels.
  • Step S21 load the constructed second local map and the first local map, and use the second local map coordinate system as the global coordinate system;
• Step S22: Match the dense point clouds in the first local map and the second local map through the ICP algorithm to obtain the accurate pose of the current first local map in the global coordinate system (that is, the coordinate system of the second local map);
  • Step S23 using the TSDF algorithm to fuse the first local map into the second local map in the form of voxels
  • step S24 steps S22 to S23 are repeatedly executed, and other partial maps are merged into the second partial map.
  • the ICP algorithm is essentially an optimal registration method based on the least squares method.
  • the algorithm repeatedly selects the corresponding point pairs and calculates the optimal rigid body transformation until the convergence accuracy requirements of the correct registration are met.
• The basic principle of the ICP algorithm is: in the first local map P and the second local map Q to be matched, find the nearest point pairs (p i , q i ) in accordance with certain constraints; then calculate the optimal rotation R and translation T such that the error function is minimized.
• The error function E(R,T) is shown in the following formula (9): E(R,T) = (1/n) Σᵢ₌₁..ₙ ‖q i − (R·p i + T)‖², where n is the number of nearest point pairs.
  • Step S221 to Step S226 are as follows:
  • Step S221 Take a point set p i ⁇ P in the current first local map P;
• Step S222: Find the corresponding point set q i ∈ Q in the second local map Q such that ‖q i − p i ‖ is minimized;
• Step S223: Calculate the rotation matrix R and the translation matrix T so as to minimize the error function;
• Step S224: Apply the rotation matrix R and the translation matrix T to the point set p i to obtain the transformed point set p′ i ;
• Step S225: Calculate the average distance d between p′ i and the corresponding point set q i ;
• Step S226: If d is less than the given threshold d TH , or the number of iterations is greater than the preset maximum, stop the iterative calculation and output the current rotation matrix R and translation matrix T; otherwise, return to step S222.
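Steps S221 to S226 can be sketched as follows. This is a simplified, generic ICP illustration (nearest-neighbour correspondences plus the SVD-based closed-form rigid transform), not the patent's exact implementation; the threshold values and the toy point clouds are assumptions.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form R, T minimizing sum ||Q_i - (R P_i + T)||^2 (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = cq - R @ cp
    return R, T

def icp(P, Q, max_iters=20, d_th=1e-6):
    """Iteratively register point cloud P to Q (steps S221-S226, simplified)."""
    R_total, T_total = np.eye(3), np.zeros(3)
    Pc = P.copy()
    for _ in range(max_iters):
        # S222: nearest point in Q for each point of the current P.
        idx = [int(np.argmin(np.linalg.norm(Q - p, axis=1))) for p in Pc]
        Qm = Q[idx]
        # S223: optimal rigid transform for these correspondences.
        R, T = best_rigid_transform(Pc, Qm)
        # S224: apply the transform to obtain p'.
        Pc = (R @ Pc.T).T + T
        R_total, T_total = R @ R_total, R @ T_total + T
        # S225-S226: stop when the mean residual distance is small enough.
        d = float(np.mean(np.linalg.norm(Qm - Pc, axis=1)))
        if d < d_th:
            break
    return R_total, T_total

# Toy check: P is Q shifted by (0.1, 0, 0); ICP should recover the inverse shift.
Q = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
P = Q + np.array([0.1, 0.0, 0.0])
R, T = icp(P, Q)
```

With a small initial offset, the nearest-neighbour correspondences are already correct, so a single iteration recovers the pure translation.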
• For the fusion of the first partial map into the second partial map in the form of voxels using the TSDF algorithm in step S23, refer to step S15.
• Through steps S21 to S23, the purpose of updating and fusing multiple pre-built dense point cloud local maps can be achieved.
  • the map update scheme has the advantages of high fusion accuracy, strong resistance to environmental interference and strong robustness.
  • the depth information is obtained by using the three-dimensional vision sensor, and the dense point cloud is obtained by using the depth information to construct the map, which will not be affected by the illumination change, and the map update is more robust;
• The map construction method and map fusion method provided by the embodiments of the application can obtain the following beneficial effects: first, a high-precision and highly robust matching algorithm is adopted, so the map fusion result has higher fusion accuracy than other map fusion methods; second, the stored map form is a dense point cloud and does not require visual feature descriptor information, so the map size is compressed to a certain degree compared with other methods; third, the constructed map form is a dense point cloud map and the RGB information of the environment does not need to be stored, so the privacy of the map is better.
  • a three-dimensional vision sensor is mainly used to collect depth information to construct a local map, and a high-precision and high-robust point cloud matching algorithm is combined to achieve the indoor map update purpose.
• In map construction, depth image information is collected by using a three-dimensional vision sensor and stored as an offline map in the form of a dense point cloud.
  • the ICP algorithm is used to match the first partial map and the second partial map, and the conversion relationship between the first partial map and the second partial map is accurately calculated.
• Combined with the TSDF algorithm to fuse multiple local maps, a map update scheme with high fusion accuracy and strong robustness is formed.
  • the solution supports map fusion in multi-person mapping scenarios and crowdsourced map updates. While ensuring the stability of map fusion, it also improves the efficiency of building local maps.
• The embodiment of the present application provides a map fusion device, which includes the modules it contains and the units contained in each module, and can be implemented by a processor in an electronic device; of course, it can also be implemented by a specific logic circuit. In the implementation process, the processor can be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field programmable gate array (FPGA), etc.
  • FIG. 4 is a schematic diagram of the composition structure of a map fusion device according to an embodiment of the application.
• The device 400 includes a map acquisition module 401 and a map fusion module 402, wherein: the map acquisition module 401 is configured to acquire a first local map set and a second local map, the first local map set includes at least one first local map, and the coordinate system of the first local map is different from the coordinate system of the second local map; the map fusion module 402 is configured to fuse each of the first local maps and the second local map according to the first coordinates of the first voxels in each first local map and the second coordinates of the second voxels in the second local map, to obtain a target global map; wherein the second coordinates of the second voxel are obtained by updating the initial coordinates of the second voxel according to a plurality of sample image pairs, and each sample image pair includes a two-dimensional sample image and a depth sample image.
• In some embodiments, the map fusion module 402 is configured to sequentially merge, according to the first coordinates of the first voxels in each first local map and the second coordinates of the second voxels in the second local map, the first local map that satisfies the first condition in the first local map set into the current second local map, to obtain the target global map.
• In some embodiments, the map fusion module 402 includes: a determining sub-module configured to determine, according to the first coordinates of the first voxels in each of the first local maps and the second coordinates of the second voxels in the second local map, a target local map matching the second local map from the first local map set; and a fusion sub-module configured to fuse the target local map into the second local map to obtain a fused second local map, and, from the remaining first local maps in the first local map set, determine a new target local map that matches the fused second local map, so as to fuse the new target local map into the fused second local map, until each of the first local maps in the first local map set is fused into the current second local map, to obtain the target global map.
  • the determining submodule includes: a matching unit configured to, according to an iteration strategy, match the first coordinate of each first voxel of the n-th first local map in the first local map set with the second coordinates of a plurality of second voxels in the second local map respectively, to obtain a matching result, where n is an integer greater than 0; and a determining unit configured to: in the case where the matching result indicates that the matching succeeds, determine the n-th first local map as the target local map; and in the case where the matching result indicates that the matching fails, continue, according to the iteration strategy, to match the first coordinates of the first voxels in the next first local map with the second coordinates of the plurality of second voxels respectively, until the target local map is determined from the first local map set.
  • the matching unit is configured to: select, from the plurality of second voxels, an initial target voxel matching each first voxel in the n-th first local map; determine, according to the first coordinate of each first voxel in the n-th first local map and the second coordinate of the corresponding initial target voxel, a first coordinate conversion relationship of the n-th first local map relative to the second local map; determine a matching error according to the first coordinate conversion relationship, the first coordinate of each first voxel in the n-th first local map, and the second coordinate of the corresponding initial target voxel; if the matching error is greater than a first threshold, reselect the initial target voxels and re-determine the matching error; and if the matching error is less than or equal to the first threshold, generate a matching result indicating that the matching succeeds.
  • the matching unit is further configured to: if the number of times the matching error has been determined is greater than a second threshold, generate a matching result indicating that the matching fails, and continue to select, from the plurality of second voxels, the initial target voxels matching the first voxels in the next first local map, until a matching result indicating that the matching succeeds is generated.
  • the matching unit is configured to: obtain a second coordinate conversion relationship of the n-th first local map relative to the second local map; perform coordinate conversion on the first coordinate of the j-th first voxel in the n-th first local map according to the second coordinate conversion relationship, to obtain a third coordinate of the j-th first voxel, where j is an integer greater than 0; and match the third coordinate with the second coordinates of the plurality of second voxels to obtain an initial target voxel matching the j-th first voxel.
  • the matching unit is configured to: perform coordinate conversion on the first coordinate of the j-th first voxel in the n-th first local map according to the first coordinate conversion relationship, to obtain a fourth coordinate of the j-th first voxel, where j is an integer greater than 0; and determine the matching error according to the fourth coordinate of each first voxel in the n-th first local map and the second coordinate of the corresponding initial target voxel.
  • the matching unit is configured to: determine a first distance between the fourth coordinate of each first voxel in the n-th first local map and the second coordinate of the corresponding initial target voxel; and determine the matching error according to each of the first distances.
  • the matching unit is configured to: if the matching error is greater than the first threshold, use the first coordinate conversion relationship as the second coordinate conversion relationship, and reselect the initial target voxels.
  • the fusion sub-module includes: a coordinate conversion unit configured to, in the case where the matching result indicates that the matching succeeds, perform coordinate conversion on the first coordinate of each first voxel in the target local map according to the first coordinate conversion relationship obtained when the determined matching error is less than or equal to the first threshold, to obtain a fifth coordinate corresponding to the first voxel; and a map fusion unit configured to merge the target local map and the second local map according to the fifth coordinate of each first voxel and the second coordinate of each second voxel in the second local map, to obtain a fused second local map.
  • the map fusion unit is configured to: determine second distances between the fifth coordinate of the k-th first voxel in the target local map and the second coordinate of each second voxel in the second local map, to obtain a second distance set, where k is an integer greater than 0; in the case where a target second distance satisfying a second condition exists in the second distance set, update the second coordinate of the target second voxel corresponding to the target second distance; and in the case where no target second distance satisfying the second condition exists in the second distance set, use the k-th first voxel as a new second voxel in the second local map, and use the fifth coordinate of the k-th first voxel as the second coordinate of the new second voxel.
  • the map fusion unit is configured to: obtain a first distance model corresponding to the target second voxel; obtain a historical third distance from the target second voxel to the object surface; input the Z-axis coordinate value of the first coordinate of the k-th first voxel, the Z-axis coordinate value of the fifth coordinate of the k-th first voxel, and the historical third distance into the first distance model, to update the historical third distance and obtain an updated third distance; and update the Z-axis coordinate value of the second coordinate of the target second voxel to the updated third distance.
  • the device 400 further includes a map construction module configured to: quantize the size of a specific physical space to obtain the initial coordinates of a plurality of the second voxels; update the initial coordinates of each second voxel according to multiple sample image pairs collected by an image collection module in the specific physical space, to obtain the second coordinate of each second voxel; and construct the second local map according to the second coordinate of each second voxel.
  • the map construction module is configured to: control the image collection module to collect the sample image pairs at a preset frame rate; update the initial coordinates of each second voxel according to the first sample image pair collected by the image collection module at the current moment and the second sample image pair collected at a historical moment; and, according to the first sample image pair and the third sample image pair collected by the image collection module at the next moment, continue to update the current coordinates of each second voxel until sample image collection ends, using the current coordinates of the second voxel as the second coordinates.
  • the map construction module is configured to: determine the current camera coordinates of each second voxel according to the first sample image pair and the second sample image pair; obtain, from the depth sample image of the first sample image pair, the depth value corresponding to the current pixel coordinate of each second voxel; and update the initial coordinate corresponding to each second voxel according to its current camera coordinate and the depth value corresponding to its current pixel coordinate.
  • the map construction module is configured to: obtain a second distance model corresponding to the m-th second voxel among the plurality of second voxels; obtain a historical fourth distance from the m-th second voxel to the object surface; input the Z-axis coordinate value of the current camera coordinate of the m-th second voxel, the depth value corresponding to the current pixel coordinate of the m-th second voxel, and the historical fourth distance into the second distance model, to update the historical fourth distance and obtain an updated fourth distance; and update the Z-axis coordinate value in the initial coordinate corresponding to each second voxel to the updated fourth distance, so as to update the initial coordinate corresponding to the second voxel.
  • the map construction module is configured to: determine, according to the first sample image pair and the second sample image pair, the current conversion relationship of the camera coordinate system relative to the coordinate system in which the second local map is located; and convert the initial coordinates of each second voxel into current camera coordinates according to the current conversion relationship.
  • the map construction module is configured to: convert the current camera coordinates of each second voxel into current pixel coordinates according to the internal parameter matrix of the image collection module; and obtain, from the depth sample image of the first sample image pair, the depth value corresponding to the current pixel coordinate of each second voxel.
  • the technical solutions of the embodiments of the present application can be embodied in the form of software products in essence or the parts that contribute to related technologies.
  • the computer software products are stored in a storage medium and include several instructions to enable an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a robot, a drone, a server, etc.) to execute all or part of the methods described in the embodiments of the present application.
  • the aforementioned storage media include: U disk, mobile hard disk, read only memory (Read Only Memory, ROM), magnetic disk or optical disk and other media that can store program codes. In this way, the embodiments of the present application are not limited to any specific combination of hardware and software.
  • FIG. 5 is a schematic diagram of a hardware entity of the electronic device according to an embodiment of the application.
  • the hardware entity of the electronic device 500 includes: a memory 501 and a processor 502.
  • the memory 501 stores a computer program that can be run on the processor 502, and the processor 502 implements the steps of the map fusion method provided in the foregoing embodiments when executing the program.
  • the memory 501 is configured to store instructions and applications executable by the processor 502, and can also cache data to be processed or already processed by the processor 502 and each module in the electronic device 500 (for example, image data, audio data, voice communication data, and video communication data); it can be implemented by a flash memory (FLASH) or a random access memory (RAM).
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the map fusion method provided in the foregoing embodiments are implemented.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed on multiple network units; Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • the functional units in the embodiments of the present application can all be integrated into one processing unit, each unit can be used individually as a unit, or two or more units can be integrated into one unit; the integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the foregoing program can be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed; and the foregoing storage medium includes various media that can store program codes, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
  • if the above-mentioned integrated unit of the present application is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • the computer software products are stored in a storage medium and include several instructions to enable an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a robot, a drone, a server, etc.) to execute all or part of the methods described in the embodiments of the present application.
  • the aforementioned storage media include: removable storage devices, ROMs, magnetic disks or optical discs and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Generation (AREA)

Abstract

The embodiments of the present application disclose a map fusion method, apparatus, device, and storage medium. The method includes: obtaining a first local map set and a second local map, the first local map set including at least one first local map, the coordinate system of the first local map being different from that of the second local map; and fusing each first local map with the second local map according to the first coordinates of the first voxels in each first local map and the second coordinates of the second voxels in the second local map, to obtain a target global map; where the second coordinates of the second voxels are obtained by updating the initial coordinates of the second voxels according to multiple sample image pairs, each sample image pair including a two-dimensional sample image and a depth sample image.

Description

Map fusion method, apparatus, device, and storage medium
Cross-reference to related applications
This application is based on and claims priority to Chinese patent application No. 201910922127.1, filed on September 27, 2019, the entire contents of which are incorporated herein by reference.
Technical field
The embodiments of the present application relate to electronic technology, and relate to, but are not limited to, map fusion methods, apparatuses, devices, and storage media.
Background
At present, indoor environment maps can be built from visual information, and the need for map fusion is inevitably encountered in the process of building them. For example, in application scenarios such as map construction after multiple data collections, or map construction completed collaboratively by multiple people or multiple machines, the multiple local maps corresponding to the individual local areas need to be synthesized into one global map. However, the synthesized global map has a large fusion error, and the consistency between the local maps after synthesis is low.
Summary
In view of this, the map fusion method, apparatus, device, and storage medium provided by the embodiments of the present application can improve map fusion accuracy and reduce the fusion error, thereby alleviating the problem of low consistency between multiple local maps after fusion. The technical solutions of the embodiments of the present application are implemented as follows:
The map fusion method provided by an embodiment of the present application includes: obtaining a first local map set and a second local map, the first local map set including at least one first local map, the coordinate system of the first local map being different from that of the second local map; and fusing each first local map with the second local map according to the first coordinates of the first voxels in each first local map and the second coordinates of the second voxels in the second local map, to obtain a target global map; where the second coordinates of the second voxels are obtained by updating the initial coordinates of the second voxels according to multiple sample image pairs, each sample image pair including a two-dimensional sample image and a depth sample image.
The map fusion apparatus provided by an embodiment of the present application includes: a map obtaining module configured to obtain a first local map set and a second local map, the first local map set including at least one first local map, the coordinate system of the first local map being different from that of the second local map; and a map fusion module configured to fuse each first local map with the second local map according to the first coordinates of the first voxels in each first local map and the second coordinates of the second voxels in the second local map, to obtain a target global map; where the second coordinates of the second voxels are obtained by updating the initial coordinates of the second voxels according to multiple sample image pairs, each sample image pair including a two-dimensional sample image and a depth sample image.
The electronic device provided by an embodiment of the present application includes a memory and a processor, the memory storing a computer program runnable on the processor, and the processor implementing the steps of the map fusion method provided by the embodiments of the present application when executing the program.
The computer-readable storage medium provided by an embodiment of the present application stores a computer program which, when executed by a processor, implements the steps of the map fusion method provided by the embodiments of the present application.
In the embodiments of the present application, the second coordinates of the second voxels in the second local map are obtained by updating (i.e., correcting) the initial coordinates of the second voxels according to multiple sample image pairs; compared with directly using the local coordinates of the pixels of the two-dimensional sample images as the content of the second local map, this improves the accuracy of the coordinates in the second local map, and in turn yields higher map fusion accuracy, making each coordinate value in the resulting target global map smoother.
Brief description of the drawings
The drawings here are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present application and, together with the specification, serve to explain the technical solutions of the present application.
FIG. 1 is a schematic flowchart of the implementation of a map fusion method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of fusing multiple first local maps into a second local map according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the implementation of quantizing a specific physical space according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the composition structure of a map fusion apparatus according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the present application.
Detailed description
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the specific technical solutions of the present application are described in further detail below with reference to the drawings in the embodiments of the present application. The following embodiments are used to illustrate the present application, but not to limit its scope.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field to which the present application belongs. The terms used herein are only for the purpose of describing the embodiments of the present application and are not intended to limit the present application.
In the following description, "some embodiments" describes a subset of all possible embodiments; it should be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other where no conflict arises.
It should be noted that the terms "first/second/third" in the embodiments of the present application are used to distinguish different objects and do not represent a specific ordering of the objects; it should be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments described herein can be implemented in orders other than those illustrated or described herein.
The embodiments of the present application provide a map fusion method that may be applied to an electronic device; the electronic device may be a device with information-processing capability such as a mobile phone, tablet computer, notebook computer, desktop computer, robot, drone, or server. The functions implemented by the map fusion method can be realized by a processor in the electronic device calling program code; of course, the program code can be stored in a computer storage medium. It can be seen that the electronic device includes at least a processor and a storage medium.
FIG. 1 is a schematic flowchart of the implementation of a map fusion method according to an embodiment of the present application. As shown in FIG. 1, the method may include the following steps S101 and S102:
Step S101: obtain a first local map set and a second local map, the first local map set including at least one first local map, the coordinate system of the first local map being different from that of the second local map.
In some embodiments, the first local map set includes one or more different first local maps. The multiple different first local maps may be local maps constructed from multiple sample image pairs collected, via built-in image collection modules, by different electronic devices, or by the same electronic device in different areas. The construction of the first and second local maps can be implemented through steps S801 to S803 in the following embodiments. Multiple different electronic devices may send the first local maps they construct, in crowdsourced form, to the electronic device that performs the map fusion method. The type of coordinate system of the second local map is not limited; it may be a coordinate system suited to a given application scenario; for example, in visual positioning, the coordinate system of the second local map is the world coordinate system. Of course, it may also be a custom coordinate system.
Step S102: fuse each first local map with the second local map according to the first coordinates of the first voxels in each first local map and the second coordinates of the second voxels in the second local map, to obtain a target global map; where the second coordinates of the second voxels are obtained by updating the initial coordinates of the second voxels according to multiple sample image pairs, each sample image pair including a two-dimensional sample image and a depth sample image.
It should be noted that "first voxel" does not refer to one particular voxel of the first local map; every voxel in the first local map is called a first voxel. Likewise, "second voxel" does not refer to one particular voxel of the second local map; every voxel in the second local map may be called a second voxel. In the embodiments of the present application, the "first" of "first voxel" and the "second" of "second voxel" merely distinguish voxels belonging to different local maps and carry no other special meaning. Similarly, the first coordinate and the second coordinate do not refer to particular coordinates; they distinguish the coordinates of voxels in different local maps.
Understandably, with regard to local maps and global maps, "local" and "global" are relative terms: the former refers to a part of a specific physical space, while the latter refers to that physical space as a whole. Taking an office building as the specific physical space, a local map is the map of a certain floor, or of a certain room on a floor, while the global map is the map of the whole building.
The electronic device can implement step S102 through step S202, or steps S302 to S304, or steps S616 to S619 of the following embodiments.
In this embodiment, the second coordinates of the second voxels in the second local map are obtained by updating the initial coordinates of the second voxels multiple times according to multiple sample image pairs, rather than by converting the camera coordinates of the pixels of the sample image pairs into the coordinate system of the second local map and using them directly as the content of the second local map. That is, constructing the second local map is in fact a process of continuously updating (i.e., correcting) the coordinates of the second voxels with multiple sample image pairs. This greatly improves the coordinate accuracy of the second voxels of the second local map, so that when the first local maps are fused in on the basis of the second local map, the fusion error is reduced and higher map fusion accuracy is obtained.
An embodiment of the present application provides another map fusion method, applicable to an electronic device, which may include the following steps S201 and S202:
Step S201: obtain a first local map set and a second local map, the first local map set including at least one first local map, the coordinate system of the first local map being different from that of the second local map;
Step S202: according to the first coordinates of the first voxels in each first local map and the second coordinates of the second voxels in the second local map, sequentially fuse the first local maps in the first local map set that satisfy a first condition into the current second local map, to obtain the target global map.
In implementation, the first local map in the first local map set that overlaps most with the second local map is fused into the second local map first. For example, through steps S302 to S304 of the following embodiment, the first local maps matching the current second local map are fused into the current second local map in turn; this not only improves fusion accuracy but also avoids fusion failures, thereby improving map fusion efficiency.
An embodiment of the present application provides another map fusion method, applicable to an electronic device, which may include the following steps S301 to S304:
Step S301: obtain a first local map set and a second local map, the first local map set including at least one first local map, the coordinate system of the first local map being different from that of the second local map;
Step S302: according to the first coordinates of the first voxels in each first local map and the second coordinates of the second voxels in the second local map, determine from the first local map set a target local map matching the second local map.
The electronic device can implement step S302 through steps S402 to S404 of the following embodiment.
Step S303: fuse the target local map into the second local map to obtain a fused second local map.
Step S304: from the remaining first local maps in the first local map set, determine a new target local map matching the fused second local map, and fuse the new target local map into the fused second local map, until every first local map in the first local map set has been fused into the second local map, obtaining the target global map.
For example, as shown in FIG. 2, suppose the first local map set 20 includes first local maps 21, 22, and 23. In the first map fusion, the first local map in set 20 that matches second local map 24 is map 22; map 22 is taken as the target local map and fused into second local map 24, giving fused second local map 241. In the second map fusion, the new target local map determined from the first local map set as matching map 241 is map 21; map 21 is fused into map 241, giving fused second local map 242. Finally, map 23 is fused into map 242, giving the initial global map 243.
In this embodiment, determining the first local map to be fused (i.e., the target local map) and fusing that map into the current second local map are performed alternately; that is, the new target local map is determined on the basis of the currently obtained second local map. This lets the electronic device find, from the first local map set, a target local map that shares more overlapping area with the currently obtained second local map, reducing the fusion error and improving map fusion accuracy. The reason is that the more overlap there is between the target local map and the second local map, the more accurately the first coordinate conversion relationship (i.e., the coordinate conversion relationship described in step S503 of the following embodiment) can be determined, making the determined fifth coordinates of the first voxels (i.e., the fifth coordinates described in step S512 of the following embodiment) more precise, which in turn reduces the fusion error and improves the accuracy of map fusion.
An embodiment of the present application provides another map fusion method, applicable to an electronic device, which may include the following steps S401 to S406:
Step S401: obtain a first local map set and a second local map, the first local map set including at least one first local map, the coordinate system of the first local map being different from that of the second local map;
Step S402: according to an iteration strategy, match the first coordinate of each first voxel of the n-th first local map in the first local map set with the second coordinates of multiple second voxels in the second local map respectively, to obtain a matching result; n is an integer greater than 0.
Understandably, by performing step S402 the electronic device can determine the second voxel matching each first voxel. In implementation, the electronic device can obtain the matching result through steps S502 to S509 of the following embodiment. It should be noted that through step S402 the electronic device can obtain a matching result between each first local map and the second local map; that is, the n-th first local map is any map in the first local map set.
Step S403: in the case where the matching result indicates that the matching succeeds, determine the n-th first local map as the target local map;
Step S404: in the case where the matching result indicates that the matching fails, continue, according to the iteration strategy, to match the first coordinates of the first voxels of the next first local map with the second coordinates of the multiple second voxels respectively, until the target local map is determined from the first local map set; then proceed to step S405;
Step S405: fuse the target local map into the second local map to obtain a fused second local map.
In some embodiments, the electronic device can obtain the fused second local map through steps S512 and S513 of the following embodiment.
Step S406: from the remaining first local maps in the first local map set, determine a new target local map matching the fused second local map, and fuse the new target local map into the fused second local map, until every first local map in the first local map set has been fused into the second local map, obtaining the target global map.
In this embodiment, by matching, according to the iteration strategy, the first coordinates of the first voxels of each first local map with the second coordinates of multiple second voxels of the second local map, the target local map matching the second local map can be found from the first local map set. Thus, even when neither the first local maps nor the second local map contains any image features, each first local map can still be fused well into the second local map, so that the electronic device can build a target global map of a larger scene within a shorter period, for example quickly obtaining a global map of a large airport, shopping mall, large underground parking lot, or even an entire city, which greatly extends the usage scenarios of the map.
Understandably, since the first local maps contain no image features, the data volume of the maps is greatly reduced, so that after multiple electronic devices each build first local maps of different areas, they can send the constructed first local maps (e.g., in crowdsourced form) over wireless transmission to the electronic device performing map fusion, improving the construction efficiency and shortening the construction period of the target global map.
An embodiment of the present application provides another map fusion method, applicable to an electronic device, which may include the following steps S501 to S514:
Step S501: obtain a first local map set and a second local map, the first local map set including at least one first local map, the coordinate system of the first local map being different from that of the second local map;
Step S502: select, from the multiple second voxels, an initial target voxel matching each first voxel of the n-th first local map; n is an integer greater than 0.
In some embodiments, the electronic device can select the initial target voxels through steps S602 to S604 of the following embodiment. In fact, step S502 selects the second voxels that may match the first voxels of the first local map. Understandably, an initial target voxel may not be the object truly matching the first voxel; therefore, steps S503 to S510 below further determine whether the initial target voxel truly matches the first voxel.
Step S503: according to the first coordinate of each first voxel of the n-th first local map and the second coordinate of the corresponding initial target voxel, determine a first coordinate conversion relationship of the n-th first local map relative to the second local map.
In implementation, an error function can be constructed from the first coordinates of the first voxels of the n-th first local map and the second coordinates of the corresponding initial target voxels, and the currently optimal first coordinate conversion relationship solved by least squares. For example, let the set of first coordinates of h first voxels be P = {p_1, p_2, ..., p_s, ..., p_h}, with a first voxel's first coordinate denoted p_s, and let the set of second coordinates of the initial target voxels matching the h first voxels be Q = {q_1, q_2, ..., q_s, ..., q_h}, with an initial target voxel's second coordinate denoted q_s. Then equation (1) can be written as follows:

E(R, T) = \frac{1}{h} \sum_{s=1}^{h} \left\| q_s - (R p_s + T) \right\|^2   (1)

where E(R, T) is the error function, and R and T are the rotation matrix and translation vector of the first coordinate conversion relationship to be solved. The optimal solution of R and T in equation (1) can then be found by least squares.
Step S504: determine a matching error according to the first coordinate conversion relationship, the first coordinate of each first voxel of the n-th first local map, and the second coordinate of the corresponding initial target voxel.
In implementation, the matching error can be determined through steps S606 and S607 of the following embodiment.
Step S505: count the number of times the matching error has been determined.
Understandably, while the current first local map is being processed, the count is incremented each time a matching error is determined. In some embodiments, the count may be reset to zero when the next first local map is processed.
Step S506: determine whether the count is greater than a second threshold; if yes, perform step S507; otherwise, perform step S508.
Understandably, if the count is greater than the second threshold, the n-th first local map does not match the current second local map; for example, the two have no matching voxels. In this case, the count of matching-error determinations can be reset to zero, and steps S502 to S506 performed to determine whether the (n+1)-th (i.e., next) first local map matches the current second local map.
Step S507: generate a matching result indicating that the matching fails, and continue to select, from the multiple second voxels, the initial target voxels matching the first voxels of the next first local map, until a matching result indicating that the matching succeeds is generated; then proceed to step S510.
Step S508: determine whether the matching error is greater than a first threshold; if yes, return to step S502 to reselect the initial target voxels and re-determine the matching error; otherwise, perform step S509.
Step S509: generate a matching result indicating that the matching succeeds.
Step S510: in the case where the matching result indicates that the matching succeeds, determine the n-th first local map as the target local map; then proceed to step S512.
Understandably, if the matching error is greater than the first threshold, the currently selected initial target voxels are not the objects matching the first voxels of the current first local map. In that case, it is necessary to return to step S502 to reselect the initial target voxels and, based on the reselected initial target voxels, re-perform steps S503 and S504 to re-determine the matching error, until the re-determined matching error is less than the first threshold; at that point the initial target voxels selected in the current iteration are the second voxels truly matching the first voxels of the current first local map, and the process proceeds to step S512.
Step S511: in the case where the matching result indicates that the matching fails, continue, according to the iteration strategy, to match the first coordinates of the first voxels of the next first local map with the second coordinates of the multiple second voxels respectively, until the target local map is determined from the first local map set; then proceed to step S512.
Step S512: according to the first coordinate conversion relationship obtained when the determined matching error is less than or equal to the first threshold, perform coordinate conversion on the first coordinate of each first voxel of the target local map to obtain the fifth coordinate corresponding to the first voxel. That is, the fifth coordinate of a first voxel is its coordinate value in the second local map.
Step S513: fuse the target local map and the second local map according to the fifth coordinate of each first voxel and the second coordinate of each second voxel of the second local map, to obtain a fused second local map.
The electronic device can obtain the fused second local map through steps S616 to S618 of the following embodiment.
Step S514: from the remaining first local maps in the first local map set, determine a new target local map matching the fused second local map, and fuse the new target local map into the fused second local map, until every first local map in the first local map set has been fused into the current second local map, obtaining the target global map.
An embodiment of the present application provides another map fusion method, applicable to an electronic device, which may include the following steps S601 to S619:
Step S601: obtain a first local map set and a second local map, the first local map set including at least one first local map, the coordinate system of the first local map being different from that of the second local map;
Step S602: obtain a second coordinate conversion relationship of the n-th first local map relative to the second local map, n being an integer greater than 0. In implementation, the second coordinate conversion relationship may be given an initial value.
Step S603: according to the second coordinate conversion relationship, perform coordinate conversion on the first coordinate of the j-th first voxel of the n-th first local map to obtain the third coordinate of the j-th first voxel, j being an integer greater than 0;
Step S604: match the third coordinate with the second coordinates of the multiple second voxels to obtain the initial target voxel matching the j-th first voxel.
In implementation, the distance (e.g., Euclidean distance) between the third coordinate of the j-th first voxel and the second coordinate of each second voxel can be determined; then the second voxel closest to the j-th first voxel, or a second voxel whose distance is less than or equal to a distance threshold, is determined as the initial target voxel. It should be noted that the n-th first local map may be any first local map in the first local map set, and the j-th first voxel may be any first voxel of the n-th first local map.
Step S605: according to the first coordinate of each first voxel of the n-th first local map and the second coordinate of the corresponding initial target voxel, determine a first coordinate conversion relationship of the n-th first local map relative to the second local map.
Step S606: according to the first coordinate conversion relationship, perform coordinate conversion on the first coordinate of the j-th first voxel of the n-th first local map to obtain the fourth coordinate of the j-th first voxel, j being an integer greater than 0.
Step S607: determine the matching error according to the fourth coordinate of each first voxel of the n-th first local map and the second coordinate of the corresponding initial target voxel.
In implementing step S607, the electronic device may first determine the first distance (e.g., Euclidean distance) between the fourth coordinate of each first voxel of the n-th first local map and the second coordinate of the corresponding initial target voxel, and then determine the matching error according to each first distance.
In some embodiments, the average distance between the multiple first voxels and their matched initial target voxels may be determined as the matching error. For example, let the set of fourth coordinates p'_s of h first voxels be P' = {p'_1, p'_2, ..., p'_s, ..., p'_h}, and let the set of second coordinates q_s of the matched initial target voxels be Q = {q_1, q_2, ..., q_s, ..., q_h}. Then the matching error d can be obtained by formula (2):

d = \frac{1}{h} \sum_{s=1}^{h} \| p'_s - q_s \|_2   (2)

where ||p'_s - q_s||_2 denotes the Euclidean distance between a first voxel and the matched initial target voxel.
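The nearest-neighbor selection of step S604 and the matching error of formula (2) can be sketched as follows. This is an illustrative Python fragment, not part of the embodiments; the function names are assumptions, and the nearest-neighbor search is done by brute force:

```python
import numpy as np

def nearest_target_voxels(P3, Q):
    """For each converted first-voxel coordinate in P3 (h, 3), return the index
    of the closest second voxel in Q (m, 3) by Euclidean distance (step S604)."""
    d2 = ((P3[:, None, :] - Q[None, :, :]) ** 2).sum(-1)   # (h, m) squared distances
    return d2.argmin(axis=1)

def matching_error(P4, Q_matched):
    """Matching error d of formula (2): mean Euclidean distance between the
    fourth coordinates and the matched initial target voxels' second coordinates."""
    return float(np.mean(np.linalg.norm(P4 - Q_matched, axis=1)))
```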
Step S608: count the number of times the matching error has been determined;
Step S609: determine whether the count is greater than the second threshold; if yes, perform step S610; otherwise, perform step S611;
Step S610: generate a matching result indicating that the matching fails, return to step S602, and continue to obtain the second coordinate conversion relationship of the next first local map relative to the second local map, until a matching result indicating that the matching succeeds is generated; then proceed to step S613;
Step S611: determine whether the matching error is greater than the first threshold; if yes, use the first coordinate conversion relationship as the second coordinate conversion relationship and return to step S603 to reselect the initial target voxels; otherwise, perform step S612.
Understandably, if the matching error is greater than the first threshold, the obtained second coordinate conversion relationship does not fit reality. In other words, the obtained initial target voxels are not the objects truly matching the first voxels; in this case, the first coordinate conversion relationship can be used as the second coordinate conversion relationship, and steps S603 to S610 re-performed until the matching error is less than the first threshold, after which step S612 is performed.
Step S612: generate a matching result indicating that the matching succeeds;
Step S613: in the case where the matching result indicates that the matching succeeds, determine the n-th first local map as the target local map; then proceed to step S615.
It should be noted that if the matching result indicating success was obtained by matching the first coordinates of the first voxels of the next first local map with the second coordinates of the multiple second voxels, then that next first local map is determined as the target local map. In other words, when the current matching result indicates success, the currently matched first local map is determined as the target local map.
Step S614: in the case where the matching result indicates that the matching fails, continue, according to the iteration strategy, to match the first coordinates of the first voxels of the next first local map with the second coordinates of the multiple second voxels respectively, until the target local map is determined from the first local map set; then proceed to step S615.
Step S615: according to the first coordinate conversion relationship obtained when the determined matching error is less than or equal to the first threshold, perform coordinate conversion on the first coordinate of each first voxel of the target local map to obtain the fifth coordinate corresponding to the first voxel;
Step S616: determine the second distances between the fifth coordinate of the k-th first voxel of the target local map and the second coordinate of each second voxel of the second local map, to obtain a second distance set; k is an integer greater than 0, and the k-th first voxel is any first voxel of the target local map.
Step S617: in the case where a target second distance satisfying a second condition exists in the second distance set, update the second coordinate of the target second voxel corresponding to the target second distance according to the first coordinate and the fifth coordinate of the k-th first voxel.
The second condition is not limited. In some embodiments, the second condition is being less than or equal to a third threshold; in other embodiments, the second condition is being the minimum distance.
The electronic device can update the second coordinate of the target second voxel through steps S701 to S704 of the following embodiment.
Understandably, fusing the target local map into the second local map is implemented through steps S616 to S618. That is, on one hand, the electronic device can update the second coordinate of a second voxel according to the first coordinate and fifth coordinate of the matching first voxel; on the other hand, if the second local map has no second voxel matching a first voxel, that first voxel is used as a new second voxel of the second local map, with the fifth coordinate of that first voxel as the second coordinate of the new second voxel. This avoids the information redundancy caused by directly adding the fifth coordinates of the first voxels of the target local map into the second local map, and also makes the second coordinates of the target second voxels smoother, reducing the fusion error, improving map fusion accuracy, and in turn improving the positioning accuracy of visual positioning.
Step S618: in the case where no target second distance satisfying the second condition exists in the second distance set, use the k-th first voxel as a new second voxel of the second local map, with the fifth coordinate of the k-th first voxel as the second coordinate of the new second voxel; repeat steps S616 to S618 to fuse the fifth coordinate of every first voxel of the target local map into the second local map, obtaining the fused second local map; then proceed to step S619.
For example, if every second distance in the second distance set is greater than the third threshold, the second local map has no target second voxel matching the k-th first voxel; in that case step S618 is performed, and the fifth coordinate of the k-th first voxel becomes a new element of the second local map.
Step S619: from the remaining first local maps in the first local map set, determine a new target local map matching the fused second local map, and fuse the new target local map into the fused second local map, until every first local map in the first local map set has been fused into the current second local map, obtaining the target global map.
Understandably, the electronic device obtains the target global map by performing steps similar to S602 to S618 multiple times.
In some embodiments, for step S617, updating the second coordinate of the target second voxel corresponding to the target second distance according to the first coordinate and fifth coordinate of the k-th first voxel can be implemented by the electronic device through the following steps S701 to S704:
Step S701: obtain a first distance model corresponding to the target second voxel;
Step S702: obtain the historical third distance from the target second voxel to the object surface;
Step S703: input the Z-axis coordinate value of the first coordinate of the k-th first voxel, the Z-axis coordinate value of the fifth coordinate of the k-th first voxel, and the historical third distance into the first distance model, to update the historical third distance and obtain an updated third distance.
In some embodiments, the first distance model is a truncated, weighted running-average update, as in formula (3):

D_t = \frac{W_{t-1} D_{t-1} + W_t d_t}{W_{t-1} + W_t}, \qquad W_t = \min(W_{t-1} + 1, \text{maxweight})   (3)

where W_t is the weight of the target second voxel at the current time t; W_{t-1} is its weight at the previous time t-1; maxweight is the maximum weight among all second voxels at the previous time t-1; z_1 is the Z-axis coordinate value of the first coordinate of the k-th first voxel; z_5 is the Z-axis coordinate value of the fifth coordinate of the k-th first voxel; maxtruncation and mintruncation are the maximum and minimum of the truncation range, to which the current distance observation d_t, derived from z_1 and z_5, is truncated; D_{t-1} is the distance from the target second voxel to the object surface determined at the previous time t-1, i.e., an example of the historical third distance; and D_t is the updated third distance to be obtained.
Thus, inputting the Z-axis coordinate value z_5 of the fifth coordinate of the k-th first voxel and the Z-axis coordinate value z_1 of its first coordinate into the first distance model of formula (3) updates the historical third distance D_{t-1}, thereby updating the second coordinate of the target second voxel.
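Because the image of formula (3) is not reproduced in the text, the Python sketch below shows only the generic truncated weighted-average update that the variable list describes. How z_1 and z_5 combine into the distance measurement is not recoverable from the text, so the caller supplies the measurement directly; the function name and default parameter values are assumptions:

```python
def update_third_distance(d_hist, w_hist, measurement, max_weight=64.0,
                          min_trunc=-0.1, max_trunc=0.1):
    """Truncated weighted running average of a voxel's distance to the surface,
    in the spirit of the first distance model of formula (3).

    d_hist, w_hist: historical distance D_{t-1} and weight W_{t-1};
    measurement: the new distance observation (derived from z_1 and z_5).
    Returns the updated distance D_t and weight W_t.
    """
    d_new = min(max(measurement, min_trunc), max_trunc)    # truncation
    d = (w_hist * d_hist + d_new) / (w_hist + 1.0)         # weighted average
    w = min(w_hist + 1.0, max_weight)
    return d, w
```

Averaging the new observation against the accumulated history is what makes the stored Z value smoother than any single observation.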
Step S704: update the Z-axis coordinate value of the second coordinate of the target second voxel to the updated third distance.
In this embodiment, in the case where a target second distance satisfying the second condition exists in the second distance set, the historical third distance is updated according to the Z-axis coordinate value of the first coordinate of the k-th first voxel, the Z-axis coordinate value of its fifth coordinate, and the historical third distance from the target second voxel to the object surface, and the updated third distance becomes the Z-axis coordinate value of the second coordinate of the target second voxel. Because the historical third distance from the target second voxel to the object surface is taken into account when updating the second coordinate of the target second voxel, the updated second coordinate is smoother, and better fusion accuracy can be obtained.
Generally speaking, the first local map and the second local map cover different physical spaces. For example, the first local map covers room 1, and the second local map covers room 2. However, the construction processes of the first and second local maps are similar; when building a local map, the image collection module simply collects the multiple sample image pairs in a different physical space. Taking the second local map as an example, its construction may include the following steps S801 to S803:
Step S801: quantize the size of a specific physical space to obtain the initial coordinates of the multiple second voxels.
Understandably, the specific physical space is the physical scene covered by the second local map, e.g., a certain room of a certain building. A second voxel is in fact a cube of a specific size, i.e., the smallest unit of the specific physical space. As shown in FIG. 3, the specific physical space is regarded as a cube 301 of a certain size, which is quantized in units of second voxels 302 to obtain multiple second voxels; with a specific coordinate system (e.g., the world coordinate system) as the reference coordinate system, the initial coordinate of each second voxel is determined. For example, if the specific physical space measures 512×512×512 m³ and a second voxel measures 1×1×1 m³, quantizing the 512×512×512 m³ physical space in units of 1×1×1 m³ second voxels yields the initial coordinates of 512×512×512 second voxels. In some embodiments, the quantization includes quantizing the size of the specific physical space and determining the initial coordinate of each second voxel.
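Step S801 can be sketched as follows in Python. This is illustrative only; the function name and the choice of voxel centers as the initial coordinates are assumptions:

```python
import numpy as np

def quantize_space(dims, voxel_size=1.0):
    """Quantize a physical space of size dims = (X, Y, Z) meters into cubic
    voxels of edge length voxel_size, returning one initial coordinate (the
    voxel center) per second voxel, in the reference (e.g. world) frame."""
    nx, ny, nz = (int(d / voxel_size) for d in dims)
    idx = np.stack(np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                               indexing="ij"), axis=-1).reshape(-1, 3)
    return (idx + 0.5) * voxel_size      # one (x, y, z) center per voxel
```

For the 512×512×512 m³ example in the text with 1 m³ voxels, this would yield 512×512×512 initial coordinates.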
Step S802: update the initial coordinate of each second voxel according to the multiple sample image pairs collected by the image collection module in the specific physical space, to obtain the second coordinate of each second voxel.
Understandably, a two-dimensional sample image is a planar image containing no depth information; for example, a two-dimensional sample image is an RGB image. In implementation, the image collection module can collect two-dimensional sample images through its own first camera. A depth sample image is an image containing depth information; in implementation, the image collection module can collect depth sample images through its own second camera module (e.g., a binocular camera). The electronic device can implement step S802 through steps S902 to S904 of the following embodiment.
Step S803: construct the second local map according to the second coordinate of each second voxel. That is, the second local map includes the second coordinate of each second voxel but does not include the image features of pixels; this both reduces the data volume of the second local map and preserves the privacy of the second local map, giving good privacy protection.
Understandably, when the image collection module collects sample images at different times or positions, the photographed scenes have overlapping areas. That is, different sample images contain partially identical content, which introduces a large amount of redundant information when the second local map is built from these sample images. The same point in physical space may be represented in the second local map by multiple pixels with identical or similar coordinates, which greatly increases the data volume of the second local map, affects its construction process, and is also unfavorable for obtaining high-accuracy map fusion results. Moreover, for the first local map, too large a data volume is unfavorable for multiple electronic devices to send it in crowdsourced form to the electronic device performing map fusion, which limits the application scenarios of map fusion and reduces its efficiency.
Therefore, in the embodiments of the present application, the second local map is built in the form of second voxels; that is, the initial coordinate of each second voxel is updated with the multiple collected sample image pairs, yielding a second local map containing the second coordinate of each second voxel. This way of building the second local map is equivalent to merging the coordinates of all pixels covered by a second voxel into one coordinate; it thus solves the above problems caused by the same physical point being represented in the second local map by multiple pixels with identical or similar coordinates, and removes a large amount of redundant information.
An embodiment of the present application further provides a process of constructing the second local map, which may include the following steps S901 to S905:
Step S901: quantize the size of a specific physical space to obtain the initial coordinates of the multiple second voxels;
Step S902: control the image collection module to collect the sample image pairs at a preset frame rate;
In implementation, the image collection module may collect sample image pairs while moving; for example, the collection of sample image pairs can be carried out by a robot equipped with an image collection module.
Step S903: update the initial coordinate of each second voxel according to the first sample image pair collected by the image collection module at the current time and the second sample image pair collected at a historical time.
The electronic device can implement step S903 through steps S113 to S115 of the following embodiment.
Step S904: continue to update the current coordinate of each second voxel according to the first sample image pair and the third sample image pair collected by the image collection module at the next time, until sample image collection ends, and use the current coordinate of the second voxel as the second coordinate.
In fact, through steps S903 and S904, the electronic device can update the current coordinate of each second voxel in real time according to the sample image pairs collected by the image collection module at the current and historical times, until the image collection task of the image collection module ends, and take the currently updated coordinate of each second voxel as the second coordinate corresponding to that second voxel.
Step S905: construct the second local map according to the second coordinate of each second voxel.
In this embodiment, the current coordinates of the second voxels are updated from the collected sample image pairs while collection proceeds. That is, the electronic device continuously updates the current coordinate of each second voxel with the sample image pair collected at the current time and the sample image pair collected at a historical time (e.g., the previous time). Since two sample images obtained at consecutive times have a large overlapping area, with this method the electronic device need not search the multiple sample image pairs for the two pairs with the most overlap before updating the current coordinate of each second voxel; this greatly improves the efficiency of map construction.
An embodiment of the present application further provides a process of constructing the second local map, which may include the following steps S111 to S117:
Step S111: quantize the size of a specific physical space to obtain the initial coordinates of the multiple second voxels;
Step S112: control the image collection module to collect the sample image pairs at a preset frame rate;
Step S113: determine the current camera coordinate of each second voxel according to the first sample image pair and the second sample image pair.
In implementing step S113, the electronic device may determine, from the first sample image pair and the second sample image pair, the current conversion relationship of the camera coordinate system relative to the coordinate system of the second local map, and convert the initial coordinate of each second voxel into a current camera coordinate according to the current conversion relationship.
In some embodiments, the electronic device can determine the current conversion relationship from the image features of the pixels of the two-dimensional sample image and the depth values of the pixels of the depth sample image of the first sample image pair, together with those of the second sample image pair. On this basis, the initial coordinate of a second voxel is converted into the current camera coordinate according to formula (4):

\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + T   (4)

where (x_c, y_c, z_c) is the camera coordinate, the conversion relationship consists of the rotation matrix R and the translation vector T, and (x_w, y_w, z_w) is the initial coordinate.
Step S114: obtain, from the depth sample image of the first sample image pair, the depth value corresponding to the current pixel coordinate of each second voxel.
In implementing step S114, the electronic device may convert the current camera coordinate of each second voxel into a current pixel coordinate according to the internal parameter matrix of the image collection module, and obtain, from the depth sample image of the first sample image pair, the depth value corresponding to the current pixel coordinate of each second voxel.
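Step S114 (projection through the internal parameter matrix, then a depth lookup) can be sketched in Python as follows. This is illustrative only; the function names and the NaN convention for voxels projecting outside the depth sample image are assumptions:

```python
import numpy as np

def project_to_pixels(cam_pts, K):
    """Project current camera coordinates (N, 3) of the second voxels to current
    pixel coordinates using the intrinsic matrix K = [[fx,0,u0],[0,fy,v0],[0,0,1]]."""
    uv = (K @ cam_pts.T).T                      # (N, 3) homogeneous image coords
    return (uv[:, :2] / uv[:, 2:3]).round().astype(int)

def lookup_depth(depth_img, pix):
    """Fetch the depth value at each voxel's current pixel coordinate; pixels
    falling outside the depth sample image get NaN."""
    h, w = depth_img.shape
    out = np.full(len(pix), np.nan)
    inside = (pix[:, 0] >= 0) & (pix[:, 0] < w) & (pix[:, 1] >= 0) & (pix[:, 1] < h)
    out[inside] = depth_img[pix[inside, 1], pix[inside, 0]]   # row = v, col = u
    return out
```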
Step S115: update the initial coordinate corresponding to each second voxel according to the current camera coordinate of each second voxel and the depth value corresponding to the current pixel coordinate of each second voxel.
In implementing step S115, the electronic device may: obtain the second distance model corresponding to the m-th second voxel among the multiple second voxels; obtain the historical fourth distance from the m-th second voxel to the object surface; input the Z-axis coordinate value of the current camera coordinate of the m-th second voxel, the depth value corresponding to the current pixel coordinate of the m-th second voxel, and the historical fourth distance into the second distance model, to update the historical fourth distance and obtain an updated fourth distance; and update the Z-axis coordinate value in the initial coordinate corresponding to each second voxel to the corresponding updated fourth distance, thereby updating the initial coordinate corresponding to the second voxel.
In some embodiments, the second distance model corresponding to a second voxel is as shown in formula (5):

d_t = \operatorname{clamp}\big(D_t(u, v) - z_c,\ \text{mintruncation},\ \text{maxtruncation}\big), \quad D_t = \frac{W_{t-1} D_{t-1} + W_t d_t}{W_{t-1} + W_t}, \quad W_t = \min(W_{t-1} + 1, \text{maxweight})   (5)

where W_t is the weight of the second voxel at the current time t; W_{t-1} is its weight at the previous time t-1; maxweight is the maximum weight among all second voxels at the previous time t-1; D_t(u, v) is the depth value corresponding to the current pixel coordinate of the second voxel; z_c is the Z-axis coordinate value of the current camera coordinate of the second voxel; maxtruncation and mintruncation are the maximum and minimum of the truncation range; D_{t-1} is the distance from the second voxel to the object surface determined at the previous time t-1, i.e., an example of the historical fourth distance; and D_t is the updated fourth distance to be obtained.
Thus, inputting the Z-axis coordinate value z_c of the current camera coordinate of the second voxel, the depth value D_t(u, v) corresponding to its current pixel coordinate, and the historical fourth distance into the distance model of formula (5) updates the historical fourth distance D_{t-1}, thereby updating the initial coordinate of the second voxel.
Step S116: continue to update the current coordinate of each second voxel according to the first sample image pair and the third sample image pair collected by the image collection module at the next time, until sample image collection ends, and use the current coordinate of the second voxel as the second coordinate.
Understandably, after obtaining the third sample image pair, the electronic device continues to update the current coordinate of each second voxel by performing steps similar to S113 to S115.
Step S117: construct the second local map according to the second coordinate of each second voxel.
Indoor maps can be built from visual information, and the need for map updating is inevitably encountered in the process of building them, for example in the case of map fusion after multiple data collections, or multi-device collaborative mapping. For map updating methods that rely on visual information, the related art proceeds as follows: obtain local maps, the local maps containing local coordinate-system information and scanned map point clouds; obtain direction histograms from the normal-distribution frequencies of the points of the scanned map point clouds; obtain projection histograms by weighted orthogonal projection of the scanned map point clouds from discrete directions onto lines; compute histogram correlations to quickly match similar second and first local maps; compute the rotation between the second local map and the first local map from their corresponding direction histograms; compute the translation between them from their corresponding projection histograms; and synthesize the second and first local maps from the rotation and translation. The above steps are repeated until all local maps are fused into one global map, at which point the map construction is complete. The core technical points of that scheme are: first, computing direction histograms from the normal-distribution frequencies of the map point clouds; second, obtaining projection histograms by weighted projection onto lines; third, computing the rotation and translation relationships.
In the related art, the rotation and translation relationships are obtained by computing the histogram correlation of two local maps; when the two local maps overlap little, their correlation is low and the matching robustness is low, so the computed rotation and translation have large errors, reducing map fusion accuracy.
In the related art, computing the rotation and translation relies on the normal features of the point clouds; the accuracy of these features is not high and they are error-prone, so the computed map rotation and translation have low precision.
In the related art, once a local map is synthesized into the global map, no further optimization or adjustment is made, so cumulative errors arise and the synthesized global map has low consistency.
Based on this, an exemplary application of the embodiments of the present application in a practical application scenario is described below.
The embodiments of the present application implement an indoor map updating technique based on dense point clouds, which can help users create indoor maps in dense point cloud form and achieve the goals of fusing multiple local maps and updating maps. The scheme supports synthesizing multiple indoor local maps with overlapping areas; the collection of local maps can be carried out out of order in crowdsourced form. The scheme supports daily tasks such as map fusion, map updating, and multi-person mapping, with high map updating accuracy and strong robustness.
In the embodiments of the present application, the map construction part mainly collects RGB image information with a monocular camera and extracts image features for visual tracking, while using a three-dimensional visual sensor (e.g., TOF or structured light) to collect depth information and build dense point clouds. The specific technical steps of constructing a local map in dense point cloud form (i.e., an example of the first or second local map) may include steps S11 to S15:
Step S11: collect RGB images at a fixed frame rate with a monocular camera;
Step S12: collect depth images at a fixed frame rate with a three-dimensional visual sensor;
Step S13: align the RGB images and the depth images, including timestamp alignment and pixel alignment;
Step S14: during collection, extract in real time the feature information of the RGB images and the depth information of the depth images to perform visual tracking and motion estimation of the image collection module, and determine the current conversion relationship of the camera coordinate system relative to the world coordinate system;
Step S15: build a local map in dense point cloud form, in voxel units, from the multiple obtained depth images and the TSDF algorithm.
Regarding depth image collection with the three-dimensional visual sensor in step S12, the following explanation is given. A depth image, also called a range image, is an image whose pixel values are the distances from the image collection module to the points of the photographed scene; it directly reflects the geometry of the visible surfaces of things. In an image frame provided by the depth data stream, each pixel represents the distance, within the field of view of the three-dimensional visual sensor, from the object at that particular coordinate to the camera plane.
Regarding building, in step S15, a local map in dense point cloud form in voxel units from the multiple obtained depth images and the TSDF algorithm, the technical steps S151 to S154 are given here:
Step S151: first obtain the coordinate V(x_g, y_g, z_g) of a voxel in the global coordinate system, and then, using the transformation matrix obtained from motion tracking (i.e., the current conversion relationship output by step S14), convert it from the global coordinate into the camera coordinate V(x_c, y_c, z_c) by formula (6):

\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R \begin{bmatrix} x_g \\ y_g \\ z_g \end{bmatrix} + T   (6)

where R and T are the rotation matrix and translation vector of the current conversion relationship.
Step S152: convert the camera coordinate V(x_c, y_c, z_c) into an image coordinate according to the internal parameter matrix of the camera, obtaining an image coordinate (u, v), as in formula (7):

u = f_x \frac{x_c}{z_c} + u_0, \qquad v = f_y \frac{y_c}{z_c} + v_0   (7)

where (u_0, v_0) is the center coordinate point of the depth image; z_c is the z-axis value of the camera coordinate, i.e., the depth value corresponding to the pixel (u, v); f_x = f / dx is the focal-length component of the focal length f along the x-axis of the camera coordinate system; and f_y = f / dy is the focal-length component of the focal length f along the y-axis of the camera coordinate system. It should be noted that, since the Z axis of the camera coordinate system is the optical axis of the lens, the depth value of the pixel (u, v) is exactly the Z-axis coordinate value z_c of that pixel's camera coordinate. The same object has the same depth in camera coordinates and in world coordinates, i.e., z_c = z_w.
Step S153: if the depth value D(u, v) of the l-th depth image at the image coordinate (u, v) is not 0, compare D(u, v) with z of the voxel's camera coordinate V(x, y, z): if D(u, v) < z, the voxel is farther from the camera, inside the fused surface; otherwise, the voxel is closer to the camera, outside the fused surface;
Step S154: update the distance value D_l and the weight value W_l of this voxel according to the result of step S153, with the update formula (8):

d_l(x, y, z) = \operatorname{clamp}\big(D_l(u, v) - z,\ \text{mintruncation},\ \text{maxtruncation}\big), \quad D_l(x, y, z) = \frac{W_{l-1}(x, y, z)\, D_{l-1}(x, y, z) + d_l(x, y, z)}{W_{l-1}(x, y, z) + 1}, \quad W_l(x, y, z) = \min\big(W_{l-1}(x, y, z) + 1,\ \text{maxweight}\big)   (8)

where W_l(x, y, z) is the weight of the voxel in the global data cube for the current frame; W_{l-1}(x, y, z) is the weight of the voxel in the previous frame's global data cube; maxweight is the maximum of the weights of all voxels in the previous frame's global data cube, which can be set to 1; D_l(x, y, z) is the distance from the voxel in the current global data cube to the object surface; D_{l-1}(x, y, z) is the distance from the voxel in the previous frame's global data cube to the object surface; d_l(x, y, z) is the distance from the voxel to the object surface computed from the current frame's depth data; z is the z-axis coordinate of the voxel in the camera coordinate system; D_l(u, v) is the depth value of the current depth image frame at the pixel (u, v); and [mintruncation, maxtruncation] is the truncation range, which affects the fineness of the reconstruction result.
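A per-voxel sketch of the update of formula (8), with the current-frame weight fixed to 1 as the text suggests for maxweight (illustrative Python; the function name and default truncation values are assumptions):

```python
import numpy as np

def tsdf_update(D_prev, W_prev, depth, z, max_weight=1.0,
                min_trunc=-0.1, max_trunc=0.1):
    """One TSDF update in the spirit of formula (8): depth is D_l(u, v), the
    current depth-image value at the voxel's pixel, and z is the voxel's z-axis
    coordinate in the camera frame. D(u, v) < z means the voxel lies inside
    (behind) the fused surface, giving a negative truncated distance sample."""
    d_l = np.clip(depth - z, min_trunc, max_trunc)     # truncated SDF sample
    D = (W_prev * D_prev + d_l) / (W_prev + 1.0)       # weighted average
    W = min(W_prev + 1.0, max_weight)
    return D, W
```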
基于步骤S11至步骤S15可以构建出一张基于稠密点云的局部地图。
在本申请实施例中,地图更新部分主要是通过迭代最近点算法算法(Iterative Closest Point,ICP)匹配两个局部地图的稠密点云,从而求解出当前第一局部地图相对于第二局部地图的精确位姿(即所述第一坐标转换关系),然后利用TSDF算法以体素的形式将第一局部地图融合至第二局部地图中。具体的技术步骤如下步骤S21至步骤S24:
步骤S21,加载构建好的第二局部地图和第一局部地图,以第二局部地图坐标系为全局坐标系;
步骤S22,通过ICP算法匹配第一局部地图中和第二局部地图中的稠密点云,得到当前第一局部地图在全局坐标系(即第二局部地图)中的精确位姿;
步骤S23,利用TSDF算法以体素的形式将第一局部地图融合至第二局部地图中;
步骤S24,反复执行步骤S22至步骤S23,将其他局部地图融合到第二局部地图中。
其中,针对步骤S22中的通过ICP算法匹配第一局部地图中和第二局部地图中的稠密点云,得到当前第一局部地图在第二局部地图中的精确位姿,这里给出如下解释。ICP算法本质上是基于最小二乘法的最优配准方法。该算法重复进行"选择对应点对、计算最优刚体变换"的过程,直到满足正确配准的收敛精度要求。ICP算法的基本原理是:分别在待匹配的第一局部地图P和第二局部地图Q中,按照一定的约束条件,找到最邻近的点对(p_i, q_i);然后计算出最优的旋转R和平移T,使得误差函数最小,误差函数E(R,T)如下公式(9)所示:

$$E(R,T)=\frac{1}{n}\sum_{i=1}^{n}\left\|q_i-(Rp_i+T)\right\|^2 \tag{9}$$

式中n为邻近点对的数量,p_i为第一局部地图P中的一点,q_i为第二局部地图Q中与p_i对应的最近点,R为旋转矩阵,T为平移向量。算法具体步骤如下步骤S221至步骤S226:
步骤S221,在当前第一局部地图P中取点集p_i∈P;
步骤S222,找出第二局部地图Q中的对应点集q_i∈Q,使得‖q_i−p_i‖最小;
步骤S223,计算旋转矩阵R和平移矩阵T,使得误差函数最小;
步骤S224,对p_i使用步骤S223求得的旋转矩阵R和平移矩阵T进行旋转和平移变换,得到新的对应点集p'_i={p'_i=Rp_i+T, p_i∈P};
步骤S225,计算p'_i与对应点集q_i的平均距离:

$$d=\frac{1}{n}\sum_{i=1}^{n}\left\|p'_i-q_i\right\|^2$$

步骤S226,若d小于给定阈值d_TH,或者迭代次数大于给定的迭代次数阈值,则停止迭代计算,算法输出当前的旋转矩阵R和平移矩阵T;否则跳回到步骤S222。
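步骤S221至步骤S226可以用如下Python示意实现,其中步骤S223采用基于SVD求解最优刚体变换的常见做法(本申请并未限定具体求解方式,此处仅为一种假设性实现):

```python
import numpy as np

def best_rigid_transform(P, Q):
    """给定对应点集 P->Q, 用 SVD 求使误差函数 E(R,T) 最小的 R、T(步骤S223)。"""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # 排除反射, 保证 R 是旋转矩阵
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = cq - R @ cp
    return R, T

def icp(P, Q, d_th=1e-6, max_iter=50):
    """步骤S221至S226的简化ICP: P 为第一局部地图点集, Q 为第二局部地图点集。"""
    R_total, T_total = np.eye(3), np.zeros(3)
    for _ in range(max_iter):                        # 迭代次数阈值
        # S222: 按欧氏距离最近原则选取对应点集
        d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        Qc = Q[d2.argmin(axis=1)]
        R, T = best_rigid_transform(P, Qc)           # S223
        P = P @ R.T + T                              # S224: 得到新的对应点集
        R_total, T_total = R @ R_total, R @ T_total + T
        d = np.linalg.norm(P - Qc, axis=1).mean()    # S225: 平均距离
        if d < d_th:                                 # S226: 收敛则停止迭代
            break
    return R_total, T_total

# 示例: Q 为 P 整体平移后的点云, ICP 应恢复出该平移
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=float)
t_true = np.array([0.1, -0.05, 0.2])
R_est, T_est = icp(P.copy(), P + t_true)
```

当两幅点云仅相差一个小幅刚体变换时,上述最近邻对应在首轮即可正确配对,算法很快收敛;实际地图匹配中通常还需要较好的初值(即第二坐标转换关系)。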
针对步骤S23中的利用TSDF算法以体素的形式将第一局部地图融合至第二局部地图中,可参考步骤S15。
基于步骤S21至步骤S23可以达成对多个预先构建的稠密点云局部地图进行更新、融合的目的。该地图更新方案具有融合精度高、抗环境干扰性强和鲁棒性强等优点。
在本申请实施例中,利用三维视觉传感器获取深度信息,并利用深度信息以稠密点云的方式建图,不会受到光照变化的影响,地图更新的鲁棒性较高。
本申请实施例提供的地图构建方法和地图融合方法,能够获得以下有益效果:1、采用了高精度高鲁棒性的匹配算法,在地图融合的结果上相对于其他地图融合方法提高了融合精度;2、存储的地图形式为稠密点云,不需要视觉特征的描述子信息,在地图大小上较之其他方法有一定程度的压缩;3、构建的地图形式为稠密点云地图,不需要存储环境的RGB信息,因此地图的私密性较好。
在本申请实施例中,主要利用三维视觉传感器采集深度信息构建局部地图,并结合高精度高鲁棒性的点云匹配算法达成室内地图更新目的。在地图构建上,通过使用三维视觉传感器采集深度图像信息,以稠密点云的形式存储为离线地图。在地图更新方法上,采用ICP算法匹配第一局部地图和第二局部地图,精确地计算出第一局部地图相对于第二局部地图的转换关系。最后结合TSDF算法融合多个局部地图,形成了一套融合准确度高、鲁棒性强的地图更新方案。该方案支持多人建图场景下的地图融合,以及众包形式的地图更新,在保证地图融合稳定性的同时,也提升了构建局部地图的效率。
基于前述的实施例,本申请实施例提供一种地图融合装置,该装置所包括的各模块、以及各模块所包括的各单元,均可以通过电子设备中的处理器来实现;当然也可通过具体的逻辑电路实现;在实施的过程中,处理器可以为中央处理器(CPU)、微处理器(MPU)、数字信号处理器(DSP)或现场可编程门阵列(FPGA)等。
图4为本申请实施例地图融合装置的组成结构示意图,如图4所示,所述装置400包括地图获取模块401和地图融合模块402,其中:地图获取模块401,配置为获取第一局部地图集合和第二局部地图,所述第一局部地图集合包括至少一个第一局部地图,所述第一局部地图的坐标系与所述第二局部地图的坐标系不同;地图融合模块402,配置为根据每一所述第一局部地图中第一体素的第一坐标和所述第二局部地图中第二体素的第二坐标,将每一所述第一局部地图和所述第二局部地图进行融合,以得到目标全局地图;其中,所述第二体素的第二坐标是根据多个样本图像对,更新所述第二体素的初始坐标而获得的,所述样本图像对包括二维样本图像和深度样本图像。
在一些实施例中,地图融合模块402,配置为根据每一所述第一局部地图中第一体素的第一坐标和所述第二局部地图中第二体素的第二坐标,依次将所述第一局部地图集合中满足第一条件的第一局部地图,融合至当前第二局部地图中,以得到所述目标全局地图。
在一些实施例中,地图融合模块402,包括:确定子模块,配置为根据每一所述第一局部地图中第一体素的第一坐标和所述第二局部地图中第二体素的第二坐标,从所述第一局部地图集合中,确定出与所述第二局部地图相匹配的目标局部地图;融合子模块,配置为将所述目标局部地图融合至所述第二局部地图中,得到融合后的第二局部地图;从所述第一局部地图集合中剩余的第一局部地图中,确定出与所述融合后的第二局部地图相匹配的新的目标局部地图,以将所述新的目标局部地图融合至所述融合后的第二局部地图中,直到所述第一局部地图集合中每一所述第一局部地图均被融合至当前第二局部地图中为止,得到所述目标全局地图。
在一些实施例中,所述确定子模块,包括:匹配单元,配置为根据迭代策略,将所述第一局部地图集合中第n个第一局部地图的每一第一体素的第一坐标分别与所述第二局部地图中多个第二体素的第二坐标进行匹配,得到匹配结果,n为大于0的整数;确定单元,配置为在所述匹配结果表征匹配成功的情况下,将所述第n个第一局部地图确定为所述目标局部地图;在所述匹配结果表征匹配失败的情况下,继续根据所述迭代策略,将下一个第一局部地图中每一第一体素的第一坐标分别与所述多个第二体素的第二坐标进行匹配,直到从所述第一局部地图集合中确定出所述目标局部地图为止。
在一些实施例中,所述匹配单元,配置为:从所述多个第二体素中,选取与所述第n个第一局部地图中每一第一体素相匹配的初始目标体素;根据所述第n个第一局部地图中每一第一体素的第一坐标和对应的初始目标体素的第二坐标,确定所述第n个第一局部地图相对于所述第二局部地图的第一坐标转换关系;根据所述第一坐标转换关系、所述第n个第一局部地图中每一第一体素的第一坐标和对应的初始目标体素的第二坐标,确定匹配误差;如果所述匹配误差大于第一阈值,重新选取初始目标体素,并重新确定匹配误差;如果所述匹配误差小于或等于所述第一阈值,生成表征匹配成功的匹配结果。
在一些实施例中,所述匹配单元,还配置为:如果确定匹配误差的次数大于第二阈值,生成表征匹配失败的匹配结果,并继续从所述多个第二体素中,选取与下一个第一局部地图中每一第一体素匹配的初始目标体素,直到生成表征匹配成功的匹配结果为止。
在一些实施例中,所述匹配单元,配置为:获取所述第n个第一局部地图相对于所述第二局部地图的第二坐标转换关系;根据所述第二坐标转换关系,对所述第n个第一局部地图中第j个第一体素的第一坐标进行坐标转换,得到所述第j个第一体素的第三坐标,j为大于0的整数;将所述第三坐标与所述多个第二体素的第二坐标进行匹配,得出与所述第j个第一体素相匹配的初始目标体素。
在一些实施例中,所述匹配单元,配置为:根据所述第一坐标转换关系,将所述第n个第一局部地图中第j个第一体素的第一坐标进行坐标转换,得到所述第j个第一体素的第四坐标,j为大于0的整数;根据所述第n个第一局部地图中每一第一体素的第四坐标和对应的初始目标体素的第二坐标,确定所述匹配误差。
在一些实施例中,所述匹配单元,配置为:确定所述第n个第一局部地图中每一第一体素的第四坐标与对应的初始目标体素的第二坐标之间的第一距离;根据每一所述第一距离,确定所述匹配误差。
在一些实施例中,所述匹配单元,配置为:如果所述匹配误差大于所述第一阈值,将所述第一坐标转换关系作为所述第二坐标转换关系,重新选取初始目标体素。
在一些实施例中,所述融合子模块,包括:坐标转换单元,配置为在所述匹配结果表征匹配成功的情况下,根据确定的匹配误差小于或等于所述第一阈值时的第一坐标转换关系,对所述目标局部地图中每一第一体素的第一坐标进行坐标转换,得到与所述第一体素对应的第五坐标;地图融合单元,配置为根据每一所述第一体素的第五坐标和所述第二局部地图中每一第二体素的第二坐标,对所述目标局部地图和所述第二局部地图进行融合,得到融合后的第二局部地图。
在一些实施例中,所述地图融合单元,配置为:确定所述目标局部地图中第k个第一体素的第五坐标分别与所述第二局部地图中每一所述第二体素的第二坐标之间的第二距离,以得到第二距离集合;在所述第二距离集合中存在满足第二条件的目标第二距离的情况下,根据所述第k个第一体素的第一坐标和第五坐标,更新所述目标第二距离对应的目标第二体素的第二坐标,k为大于0的整数;在所述第二距离集合中没有满足所述第二条件的目标第二距离的情况下,将所述第k个第一体素作为所述第二局部地图中的新的第二体素,将所述第k个第一体素的第五坐标作为所述新的第二体素的第二坐标。重复上述步骤,以将所述目标局部地图中每一第一体素的第五坐标融合至所述第二局部地图中,从而得到融合后的第二局部地图。
在一些实施例中,所述地图融合单元,配置为:获取与所述目标第二体素对应的第一距离模型;获取所述目标第二体素到物体表面的历史第三距离;将所述第k个第一体素的第一坐标的Z轴坐标值、所述第k个第一体素的第五坐标的Z轴坐标值和所述历史第三距离,输入至所述第一距离模型中,以更新所述历史第三距离,得到更新后的第三距离;将所述更新后的第三距离,更新为所述目标第二体素的第二坐标的Z轴坐标值。
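上述地图融合单元按距离判断"更新已有体素还是新增体素"的逻辑,可以用如下Python片段做一个简化示意(距离阈值为假设数值;示意中以两坐标的均值代替基于第一距离模型的Z轴更新,仅用于说明流程):

```python
import numpy as np

def merge_voxel(p5, map2_coords, dist_th=0.05):
    """将目标局部地图中一个第一体素的第五坐标融合进第二局部地图。

    p5:          第一体素经第一坐标转换关系变换后的第五坐标
    map2_coords: 第二局部地图中全部第二体素的第二坐标, 形状 (N, 3)
    返回融合后的第二体素坐标数组以及被更新/新增体素的下标。
    """
    d2 = np.linalg.norm(map2_coords - p5, axis=1)   # 第二距离集合
    k = int(np.argmin(d2))
    if d2[k] <= dist_th:
        # 存在满足第二条件的目标第二距离: 更新目标第二体素
        # (此处以均值更新作简化, 实际可按第一距离模型更新 Z 轴坐标值)
        map2_coords[k] = (map2_coords[k] + p5) / 2.0
        return map2_coords, k
    # 不存在满足第二条件的目标第二距离: 作为新的第二体素加入
    return np.vstack([map2_coords, p5]), len(map2_coords)

m = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
out1, k1 = merge_voxel(np.array([0.0, 0.0, 0.02]), m.copy())  # 命中已有体素
out2, k2 = merge_voxel(np.array([5.0, 5.0, 5.0]), m.copy())   # 新增体素
```

对目标局部地图中每一第一体素重复该过程,即得到融合后的第二局部地图。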
在一些实施例中,所述装置400还包括地图构建模块,所述地图构建模块,配置为:对特定物理空间的尺寸进行量化处理,得到多个所述第二体素的初始坐标;根据图像采集模组在所述特定物理空间中采集的多个所述样本图像对,对每一所述第二体素的初始坐标进行更新,得到每一所述第二体素的第二坐标;根据每一所述第二体素的第二坐标,构建所述第二局部地图。
在一些实施例中,所述地图构建模块,配置为:控制所述图像采集模组按照预设帧率采集所述样本图像对;根据所述图像采集模组在当前时刻采集的第一样本图像对和在历史时刻采集的第二样本图像对,更新每一所述第二体素的初始坐标;根据所述第一样本图像对和所述图像采集模组在下一时刻采集的第三样本图像对,继续更新每一所述第二体素的当前坐标,直到样本图像采集结束时,将所述第二体素的当前坐标作为所述第二坐标。
在一些实施例中,所述地图构建模块,配置为:根据所述第一样本图像对和所述第二样本图像对,确定每一所述第二体素的当前相机坐标;从所述第一样本图像对的深度样本图像中,获取与每一所述第二体素的当前像素坐标对应的深度值;根据每一所述第二体素的当前相机坐标和与每一所述第二体素的当前像素坐标对应的深度值,更新与所述第二体素对应的初始坐标。
在一些实施例中,所述地图构建模块,配置为:获取所述多个第二体素中第m个第二体素对应的第二距离模型;获取所述第m个第二体素到物体表面的历史第四距离;将所述第m个第二体素的当前相机坐标的Z轴坐标值、与所述第m个第二体素的当前像素坐标对应的深度值和所述历史第四距离,输入至所述第二距离模型中,以更新所述历史第四距离,得到更新后的第四距离;将每一所述第二体素对应的更新后的第四距离,更新为与所述第二体素对应的初始坐标中的Z轴坐标值,以实现对与所述第二体素对应的初始坐标进行更新。
在一些实施例中,所述地图构建模块,配置为:根据所述第一样本图像对和所述第二样本图像对,确定相机坐标系相对于所述第二局部地图所在的坐标系的当前转换关系;根据所述当前转换关系,将每一所述第二体素的初始坐标转换为当前相机坐标。
在一些实施例中,所述地图构建模块,配置为:根据所述图像采集模组的内参矩阵,将每一所述第二体素的当前相机坐标转换为当前像素坐标;从所述第一样本图像对的深度样本图像中,获取与每一所述第二体素的当前像素坐标对应的深度值。
以上装置实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本申请装置实施例中未披露的技术细节,请参照本申请方法实施例的描述而理解。
需要说明的是,本申请实施例中,如果以软件功能模块的形式实现上述的地图融合方法,并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得电子设备(可以是手机、平板电脑、笔记本电脑、台式计算机、机器人、无人机、服务器等)执行本申请各个实施例所述方法的全部或部分。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。这样,本申请实施例不限制于任何特定的硬件和软件结合。
对应地,本申请实施例提供一种电子设备,图5为本申请实施例电子设备的一种硬件实体示意图,如图5所示,该电子设备500的硬件实体包括:包括存储器501和处理器502,所述存储器501存储有可在处理器502上运行的计算机程序,所述处理器502执行所述程序时实现上述实施例中提供的地图融合方法中的步骤。
存储器501配置为存储可由处理器502执行的指令和应用,还可以缓存处理器502以及电子设备500中各模块待处理或已经处理的数据(例如,图像数据、音频数据、语音通信数据和视频通信数据),可以通过闪存(FLASH)或随机访问存储器(Random Access Memory,RAM)实现。
对应地,本申请实施例提供一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现上述实施例中提供的地图融合方法中的步骤。
这里需要指出的是:以上存储介质和设备实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本申请存储介质和设备实施例中未披露的技术细节,请参照本申请方法实施例的描述而理解。
应理解,说明书通篇中提到的“一个实施例”或“一些实施例”意味着与实施例有关的特定特征、结构或特性包括在本申请的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一些实施例中”未必一定指相同的实施例。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。应理解,在本申请的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
上文对各个实施例的描述倾向于强调各个实施例之间的不同之处,其相同或相似之处可以互相参考,为了简洁,本文不再赘述。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如对象A和/或对象B,可以表示:单独存在对象A,同时存在对象A和对象B,单独存在对象B这三种情况。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
在本申请所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元;既可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本申请各实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、只读存储器(Read Only Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本申请上述集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得电子设备(可以是手机、平板电脑、笔记本电脑、台式计算机、机器人、无人机、服务器等)执行本申请各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、磁碟或者光盘等各种可以存储程序代码的介质。
本申请所提供的几个方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。
本申请所提供的几个产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。
本申请所提供的几个方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的方法实施例或设备实施例。
以上所述,仅为本申请的实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (22)

  1. 一种地图融合方法,所述方法包括:
    获取第一局部地图集合和第二局部地图,所述第一局部地图集合包括至少一个第一局部地图,所述第一局部地图的坐标系与所述第二局部地图的坐标系不同;
    根据每一所述第一局部地图中第一体素的第一坐标和所述第二局部地图中第二体素的第二坐标,将每一所述第一局部地图和所述第二局部地图进行融合,以得到目标全局地图;
    其中,所述第二体素的第二坐标是根据多个样本图像对,更新所述第二体素的初始坐标而获得的,所述样本图像对包括二维样本图像和深度样本图像。
  2. 根据权利要求1所述的方法,其中,所述根据每一所述第一局部地图中第一体素的第一坐标和所述第二局部地图中第二体素的第二坐标,将每一所述第一局部地图和所述第二局部地图进行融合,以得到目标全局地图,包括:
    根据每一所述第一局部地图中第一体素的第一坐标和所述第二局部地图中第二体素的第二坐标,依次将所述第一局部地图集合中满足第一条件的第一局部地图,融合至当前第二局部地图中,以得到所述目标全局地图。
  3. 根据权利要求2所述的方法,其中,所述根据每一所述第一局部地图中第一体素的第一坐标和所述第二局部地图中第二体素的第二坐标,依次将所述第一局部地图集合中满足第一条件的第一局部地图,融合至当前第二局部地图中,以得到所述目标全局地图,包括:
    根据每一所述第一局部地图中第一体素的第一坐标和所述第二局部地图中第二体素的第二坐标,从所述第一局部地图集合中,确定出与所述第二局部地图相匹配的目标局部地图;
    将所述目标局部地图融合至所述第二局部地图中,得到融合后的第二局部地图;
    从所述第一局部地图集合中剩余的第一局部地图中,确定出与所述融合后的第二局部地图相匹配的新的目标局部地图,以将所述新的目标局部地图融合至所述融合后的第二局部地图中,直到所述第一局部地图集合中每一所述第一局部地图均被融合至当前第二局部地图中为止,得到所述目标全局地图。
  4. 根据权利要求3所述的方法,其中,所述根据每一所述第一局部地图中第一体素的第一坐标和所述第二局部地图中第二体素的第二坐标,从所述第一局部地图集合中,确定出与所述第二局部地图相匹配的目标局部地图,包括:
    根据迭代策略,将所述第一局部地图集合中第n个第一局部地图的每一第一体素的第一坐标分别与所述第二局部地图中多个第二体素的第二坐标进行匹配,得到匹配结果,n为大于0的整数;
    在所述匹配结果表征匹配成功的情况下,将所述第n个第一局部地图确定为所述目标局部地图;
    在所述匹配结果表征匹配失败的情况下,继续根据所述迭代策略,将下一个第一局部地图中每一第一体素的第一坐标分别与所述多个第二体素的第二坐标进行匹配,直到从所述第一局部地图集合中确定出所述目标局部地图为止。
  5. 根据权利要求4所述的方法,其中,所述根据迭代策略,将所述第一局部地图集合中第n个第一局部地图的每一第一体素的第一坐标分别与所述第二局部地图中多个第二体素的第二坐标进行匹配,得到匹配结果,包括:
    从所述多个第二体素中,选取与所述第n个第一局部地图中每一第一体素相匹配的初始目标体素;
    根据所述第n个第一局部地图中每一第一体素的第一坐标和对应的初始目标体素的第二坐标,确定所述第n个第一局部地图相对于所述第二局部地图的第一坐标转换关系;
    根据所述第一坐标转换关系、所述第n个第一局部地图中每一第一体素的第一坐标和对应的初始目标体素的第二坐标,确定匹配误差;
    如果所述匹配误差大于第一阈值,重新选取初始目标体素,并重新确定匹配误差;
    如果所述匹配误差小于或等于所述第一阈值,生成表征匹配成功的匹配结果。
  6. 根据权利要求5所述的方法,其中,所述方法还包括:
    如果确定匹配误差的次数大于第二阈值,生成表征匹配失败的匹配结果,并继续从所述多个第二体素中,选取与下一个第一局部地图中每一第一体素匹配的初始目标体素,直到生成表征匹配成功的匹配结果为止。
  7. 根据权利要求5所述的方法,其中,所述从所述多个第二体素中,选取与所述第n个第一局部地图中每一第一体素相匹配的初始目标体素,包括:
    获取所述第n个第一局部地图相对于所述第二局部地图的第二坐标转换关系;
    根据所述第二坐标转换关系,对所述第n个第一局部地图中第j个第一体素的第一坐标进行坐标转换,得到所述第j个第一体素的第三坐标,j为大于0的整数;
    将所述第三坐标与所述多个第二体素的第二坐标进行匹配,得出与所述第j个第一体素相匹配的初始目标体素。
  8. 根据权利要求5所述的方法,其中,所述根据所述第一坐标转换关系、所述第n个第一局部地图中每一第一体素的第一坐标和对应的初始目标体素的第二坐标,确定匹配误差,包括:
    根据所述第一坐标转换关系,将所述第n个第一局部地图中第j个第一体素的第一坐标进行坐标转换,得到所述第j个第一体素的第四坐标,j为大于0的整数;
    根据所述第n个第一局部地图中每一第一体素的第四坐标和对应的初始目标体素的第二坐标,确定所述匹配误差。
  9. 根据权利要求8所述的方法,其中,所述根据所述第n个第一局部地图中每一第一体素的第四坐标和对应的初始目标体素的第二坐标,确定所述匹配误差,包括:
    确定所述第n个第一局部地图中每一第一体素的第四坐标与对应的初始目标体素的第二坐标之间的第一距离;
    根据每一所述第一距离,确定所述匹配误差。
  10. 根据权利要求7所述的方法,其中,如果所述匹配误差大于第一阈值,重新选取初始目标体素,包括:
    如果所述匹配误差大于所述第一阈值,将所述第一坐标转换关系作为所述第二坐标转换关系,重新选取初始目标体素。
  11. 根据权利要求5所述的方法,其中,所述将所述目标局部地图融合至所述第二局部地图中,得到融合后的第二局部地图,包括:
    在所述匹配结果表征匹配成功的情况下,根据确定的匹配误差小于或等于所述第一阈值时的第一坐标转换关系,对所述目标局部地图中每一第一体素的第一坐标进行坐标转换,得到与所述第一体素对应的第五坐标;
    根据每一所述第一体素的第五坐标和所述第二局部地图中每一第二体素的第二坐标,对所述目标局部地图和所述第二局部地图进行融合,得到融合后的第二局部地图。
  12. 根据权利要求11所述的方法,其中,所述根据每一所述第一体素的第五坐标和所述第二局部地图中每一第二体素的第二坐标,对所述目标局部地图和所述第二局部地图进行融合,得到融合后的第二局部地图,包括:
    确定所述目标局部地图中第k个第一体素的第五坐标分别与所述第二局部地图中每一所述第二体素的第二坐标之间的第二距离,以得到第二距离集合;
    在所述第二距离集合中存在满足第二条件的目标第二距离的情况下,根据所述第k个第一体素的第一坐标和第五坐标,更新所述目标第二距离对应的目标第二体素的第二坐标,k为大于0的整数;
    在所述第二距离集合中没有满足所述第二条件的目标第二距离的情况下,将所述第k个第一体素作为所述第二局部地图中的新的第二体素,将所述第k个第一体素的第五坐标作为所述新的第二体素的第二坐标;
    重复上述步骤,以将所述目标局部地图中每一第一体素的第五坐标融合至所述第二局部地图中,从而得到融合后的第二局部地图。
  13. 根据权利要求12所述的方法,其中,所述根据所述第k个第一体素的第一坐标和第五坐标,更新所述目标第二距离对应的目标第二体素的第二坐标,包括:
    获取与所述目标第二体素对应的第一距离模型;
    获取所述目标第二体素到物体表面的历史第三距离;
    将所述第k个第一体素的第一坐标的Z轴坐标值、所述第k个第一体素的第五坐标的Z轴坐标值和所述历史第三距离,输入至所述第一距离模型中,以更新所述历史第三距离,得到更新后的第三距离;
    将所述更新后的第三距离,更新为所述目标第二体素的第二坐标的Z轴坐标值。
  14. 根据权利要求1至13任一项所述的方法,所述第二局部地图的构建过程包括:
    对特定物理空间的尺寸进行量化处理,得到多个所述第二体素的初始坐标;
    根据图像采集模组在所述特定物理空间中采集的多个所述样本图像对,对每一所述第二体素的初始坐标进行更新,得到每一所述第二体素的第二坐标;
    根据每一所述第二体素的第二坐标,构建所述第二局部地图。
  15. 根据权利要求14所述的方法,其中,所述根据图像采集模组在所述特定物理空间中采集的多个所述样本图像对,对每一所述第二体素的初始坐标进行更新,得到每一所述第二体素的第二坐标,包括:
    控制所述图像采集模组按照预设帧率采集所述样本图像对;
    根据所述图像采集模组在当前时刻采集的第一样本图像对和在历史时刻采集的第二样本图像对,更新每一所述第二体素的初始坐标;
    根据所述第一样本图像对和所述图像采集模组在下一时刻采集的第三样本图像对,继续更新每一所述第二体素的当前坐标,直到样本图像采集结束时,将所述第二体素的当前坐标作为所述第二坐标。
  16. 根据权利要求15所述的方法,其中,所述根据所述图像采集模组在当前时刻采集的第一样本图像对和在历史时刻采集的第二样本图像对,更新每一所述第二体素的初始坐标,包括:
    根据所述第一样本图像对和所述第二样本图像对,确定每一所述第二体素的当前相机坐标;
    从所述第一样本图像对的深度样本图像中,获取与每一所述第二体素的当前像素坐标对应的深度值;
    根据每一所述第二体素的当前相机坐标和与每一所述第二体素的当前像素坐标对应的深度值,更新与所述第二体素对应的初始坐标。
  17. 根据权利要求16所述的方法,其中,所述根据每一所述第二体素的当前相机坐标和与每一所述第二体素的当前像素坐标对应的深度值,更新与所述第二体素对应的初始坐标,包括:
    获取所述多个第二体素中第m个第二体素对应的第二距离模型;
    获取所述第m个第二体素到物体表面的历史第四距离;
    将所述第m个第二体素的当前相机坐标的Z轴坐标值、与所述第m个第二体素的当前像素坐标对应的深度值和所述历史第四距离,输入至所述第二距离模型中,以更新所述历史第四距离,得到更新后的第四距离;
    将每一所述第二体素对应的更新后的第四距离,更新为与所述第二体素对应的初始坐标中的Z轴坐标值,以实现对与所述第二体素对应的初始坐标进行更新。
  18. 根据权利要求16所述的方法,其中,所述根据所述第一样本图像对和所述第二样本图像对,确定每一所述第二体素的当前相机坐标,包括:
    根据所述第一样本图像对和所述第二样本图像对,确定相机坐标系相对于所述第二局部地图所在的坐标系的当前转换关系;
    根据所述当前转换关系,将每一所述第二体素的初始坐标转换为当前相机坐标。
  19. 根据权利要求16所述的方法,其中,所述从所述第一样本图像对的深度样本图像中,获取与每一所述第二体素的当前像素坐标对应的深度值,包括:
    根据所述图像采集模组的内参矩阵,将每一所述第二体素的当前相机坐标转换为当前像素坐标;
    从所述第一样本图像对的深度样本图像中,获取与每一所述第二体素的当前像素坐标对应的深度值。
  20. 一种地图融合装置,包括:
    地图获取模块,配置为获取第一局部地图集合和第二局部地图,所述第一局部地图集合包括至少一个第一局部地图,所述第一局部地图的坐标系与所述第二局部地图的坐标系不同;
    地图融合模块,配置为根据每一所述第一局部地图中第一体素的第一坐标和所述第二局部地图中第二体素的第二坐标,将每一所述第一局部地图和所述第二局部地图进行融合,以得到目标全局地图;
    其中,所述第二体素的第二坐标是根据多个样本图像对,更新所述第二体素的初始坐标而获得的,所述样本图像对包括二维样本图像和深度样本图像。
  21. 一种电子设备,包括存储器和处理器,所述存储器存储有可在处理器上运行的计算机程序,所述处理器执行所述程序时实现权利要求1至19任一项所述地图融合方法中的步骤。
  22. 一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现权利要求1至19任一项所述地图融合方法中的步骤。
PCT/CN2020/116930 2019-09-27 2020-09-22 地图融合方法及装置、设备、存储介质 WO2021057745A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910922127.1 2019-09-27
CN201910922127.1A CN110704562B (zh) 2019-09-27 2019-09-27 地图融合方法及装置、设备、存储介质

Publications (1)

Publication Number Publication Date
WO2021057745A1 true WO2021057745A1 (zh) 2021-04-01

Family

ID=69197716

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/116930 WO2021057745A1 (zh) 2019-09-27 2020-09-22 地图融合方法及装置、设备、存储介质

Country Status (2)

Country Link
CN (1) CN110704562B (zh)
WO (1) WO2021057745A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113506459A (zh) * 2021-06-11 2021-10-15 上海追势科技有限公司 一种地下停车场众包地图采集方法
CN113866758A (zh) * 2021-10-08 2021-12-31 深圳清航智行科技有限公司 一种场面监视方法、系统、装置及可读存储介质
CN116228925A (zh) * 2023-05-04 2023-06-06 北京集度科技有限公司 地图生成方法、装置及计算机设备

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
CN110704562B (zh) * 2019-09-27 2022-07-19 Oppo广东移动通信有限公司 地图融合方法及装置、设备、存储介质
CN111415388B (zh) * 2020-03-17 2023-10-24 Oppo广东移动通信有限公司 一种视觉定位方法及终端
CN111667545B (zh) * 2020-05-07 2024-02-27 东软睿驰汽车技术(沈阳)有限公司 高精度地图生成方法、装置、电子设备及存储介质
CN112130567A (zh) * 2020-09-22 2020-12-25 广州小鹏自动驾驶科技有限公司 一种数据处理方法和装置
CN114579679A (zh) * 2020-12-01 2022-06-03 中移(成都)信息通信科技有限公司 空间定位数据融合方法、系统、设备及计算机存储介质
CN113900435B (zh) * 2021-08-31 2022-09-27 深圳蓝因机器人科技有限公司 基于双摄像头的移动机器人避障方法、设备、介质及产品

Citations (5)

Publication number Priority date Publication date Assignee Title
WO2018049581A1 (zh) * 2016-09-14 2018-03-22 浙江大学 一种同时定位与地图构建方法
CN109086277A (zh) * 2017-06-13 2018-12-25 纵目科技(上海)股份有限公司 一种重叠区构建地图方法、系统、移动终端及存储介质
CN109559277A (zh) * 2018-11-28 2019-04-02 中国人民解放军国防科技大学 一种面向数据共享的多无人机协同地图构建方法
CN110704562A (zh) * 2019-09-27 2020-01-17 Oppo广东移动通信有限公司 地图融合方法及装置、设备、存储介质
CN110704563A (zh) * 2019-09-27 2020-01-17 Oppo广东移动通信有限公司 地图融合方法及装置、设备、存储介质

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US20160140769A1 (en) * 2014-11-17 2016-05-19 Qualcomm Incorporated Edge-aware volumetric depth map fusion
CN105865462B (zh) * 2015-01-19 2019-08-06 北京雷动云合智能技术有限公司 带有深度增强视觉传感器的基于事件的三维slam方法
US10360718B2 (en) * 2015-08-14 2019-07-23 Samsung Electronics Co., Ltd. Method and apparatus for constructing three dimensional model of object
CN109425348B (zh) * 2017-08-23 2023-04-07 北京图森未来科技有限公司 一种同时定位与建图的方法和装置
CN108509974B (zh) * 2018-01-26 2019-09-06 北京三快在线科技有限公司 地图数据融合方法、装置、电子设备及存储介质
CN109961506B (zh) * 2019-03-13 2023-05-02 东南大学 一种融合改进Census图的局部场景三维重建方法
CN110118554B (zh) * 2019-05-16 2021-07-16 达闼机器人有限公司 基于视觉惯性的slam方法、装置、存储介质和设备
CN110208802B (zh) * 2019-05-16 2021-04-30 四川省客车制造有限责任公司 融合多视图模糊推理赋值的障碍物检测方法

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
WO2018049581A1 (zh) * 2016-09-14 2018-03-22 浙江大学 一种同时定位与地图构建方法
CN109086277A (zh) * 2017-06-13 2018-12-25 纵目科技(上海)股份有限公司 一种重叠区构建地图方法、系统、移动终端及存储介质
CN109559277A (zh) * 2018-11-28 2019-04-02 中国人民解放军国防科技大学 一种面向数据共享的多无人机协同地图构建方法
CN110704562A (zh) * 2019-09-27 2020-01-17 Oppo广东移动通信有限公司 地图融合方法及装置、设备、存储介质
CN110704563A (zh) * 2019-09-27 2020-01-17 Oppo广东移动通信有限公司 地图融合方法及装置、设备、存储介质

Non-Patent Citations (1)

Title
DANPING ZOU ; PING TAN: "CoSLAM: Collaborative Visual SLAM in Dynamic Environments", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE COMPUTER SOCIETY., USA, vol. 35, no. 2, 1 February 2013 (2013-02-01), USA, pages 354 - 366, XP011490796, ISSN: 0162-8828, DOI: 10.1109/TPAMI.2012.104 *


Also Published As

Publication number Publication date
CN110704562B (zh) 2022-07-19
CN110704562A (zh) 2020-01-17


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20870374; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20870374; Country of ref document: EP; Kind code of ref document: A1)