CN110415174A - Map amalgamation method, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110415174A
CN110415174A
Authority
CN
China
Prior art keywords
map
image data
maps
space
overlapping area
Prior art date
Legal status
Granted
Application number
CN201910700973.9A
Other languages
Chinese (zh)
Other versions
CN110415174B (en
Inventor
王超鹏
林义闽
廉士国
Current Assignee
As Science And Technology (beijing) Co Ltd
Original Assignee
As Science And Technology (beijing) Co Ltd
Priority date
Filing date
Publication date
Application filed by As Science And Technology (beijing) Co Ltd filed Critical As Science And Technology (beijing) Co Ltd
Priority to CN201910700973.9A priority Critical patent/CN110415174B/en
Publication of CN110415174A publication Critical patent/CN110415174A/en
Application granted granted Critical
Publication of CN110415174B publication Critical patent/CN110415174B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application relate to the field of image processing and disclose a map fusion method, an electronic device, and a storage medium. In some embodiments of the application, the map fusion method is applied to an electronic device and includes the following steps: acquiring N maps of the same space, where N is an integer greater than 1; judging whether an overlapping area exists between two adjacent maps; if so, removing the map data of the overlapping area from either one of the two adjacent maps having the overlapping area; if not, retaining the map data of both adjacent maps; and fusing the remaining map data in each map to obtain a fusion map of the space. This implementation reduces jumps in the positioning result.

Description

Map fusion method, electronic device and storage medium
Technical Field
The embodiment of the invention relates to the field of image processing, in particular to a map fusion method, electronic equipment and a storage medium.
Background
Currently, technicians can use visual simultaneous localization and mapping (VSLAM) technology for robot or pedestrian navigation. VSLAM mainly acquires views of the environment through a camera, processes them, and extracts feature points to match against the prior information of a known map in order to obtain pose information. The known map prior information mainly refers to map information established by VSLAM. The VSLAM mapping result is affected by the surrounding environment: if the environmental feature points and texture information are rich enough, mapping can proceed continuously and a single continuous segment of map data is obtained; if the camera moves violently, the ambient illumination changes greatly, or the feature points are sparse, VSLAM mapping is interrupted and multiple segments of map data are obtained.
However, the inventors found that the prior art has at least the following problem: when a multi-segment map is used for positioning, the positioning result may jump, which affects the navigation of pedestrians or robots.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of embodiments of the present invention is to provide a map fusion method, an electronic device, and a storage medium that reduce jumps in the positioning result.
In order to solve the above technical problem, an embodiment of the present invention provides a map fusion method applied to an electronic device, including the following steps: acquiring N maps of the same space, where N is an integer greater than 1; judging whether an overlapping area exists between two adjacent maps; if so, removing the map data of the overlapping area from either one of the two adjacent maps having the overlapping area; if not, retaining the map data of both adjacent maps; and fusing the remaining map data in each map to obtain a fusion map of the space.
An embodiment of the present invention also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the map fusion method according to the above embodiments.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program, which when executed by a processor implements the map fusion method mentioned in the above embodiments.
Compared with the prior art, embodiments of the present invention determine, before fusing the maps of a space, whether overlapping areas exist among the maps, and remove the map data of each overlapping area from one of the overlapping maps. This reduces the situation in which the same map point has two different pieces of position information in the fusion map, and thus reduces jumps in the positioning result.
In addition, the two adjacent maps are an mth map and a tth map, where m and t are positive integers not larger than N and m ≠ t. Judging whether an overlapping area exists between the two adjacent maps specifically includes: acquiring a first image data set of the space corresponding to the mth map; sequentially inputting the first image data in the first image data set into a first positioning model until the first positioning model positions successfully based on the input first image data, where the first positioning model positions according to the input first image data and the map data of the mth map; inputting the successfully positioned first image data into a second positioning model and judging whether the second positioning model positions successfully, where the second positioning model positions according to the input image data and the map data of the tth map; and if so, determining that the mth map and the tth map have an overlapping area. In this implementation, whether the mth map and the tth map overlap is judged by whether the same image data can be positioned successfully against both maps.
In addition, the N maps are arranged in chronological order of mapping, and the path direction when the electronic device collects the first image data set is the same as the path direction when the maps were built. The constraint relation of m and t is either: when 0 < m < N, t = m + 1, and when m = N, t = 1, with the first image data in the first image data set arranged in reverse shooting order; or: when 1 < m ≤ N, t = m − 1, and when m = 1, t = N, with the first image data in the first image data set arranged in forward shooting order. In this implementation, the test starts from the area with the highest overlap probability, so that when an overlap exists, the test and the fusion are sped up, and the computation and power consumption of the electronic device are reduced.
In addition, removing the map data of the overlapping area from one of the two adjacent maps having the overlapping area specifically includes: acquiring the coordinate indexes of the map points positioned by the second positioning model; determining, from those coordinate indexes, the coordinate indexes of the overlapped map points in the tth map; and deleting from the tth map the coordinate indexes of the overlapped map points and the position information of the map points corresponding to those coordinate indexes. This enables optimization of the map data.
In addition, after judging whether an overlapping area exists between two adjacent maps, and before fusing the remaining map data in each map to obtain a fusion map of the space, the map fusion method further includes: judging whether an overlapping area exists between any two maps; if so, removing the map data of the overlapping area from either one of the two maps having the overlapping area; if not, executing the step of fusing the remaining map data in each map to obtain the fusion map of the space. In this implementation, the accuracy of overlap area detection is improved.
In addition, acquiring N maps of a space specifically includes: acquiring a second image dataset of the space; and establishing N maps of the space by an instant positioning and mapping technology according to the acquired second image data set of the space.
In addition, the second image data in the second image data set are arranged in acquisition order. Establishing N maps of the space by the instant positioning and mapping technology according to the acquired second image data set of the space specifically includes: let i = 1 and k = 1; read the ith second image data; perform mapping by the instant positioning and mapping technology according to the ith second image data; judge whether the mapping is interrupted; if so, take the map established this time as the kth map and store the kth map, then judge whether the second image data set has been read completely; if so, mapping is finished; otherwise, let i = i + 1 and k = k + 1, and re-execute the step of reading the ith second image data. If the mapping is not interrupted, judge whether the second image data set has been read completely; if so, take the map established this time as the kth map, store the kth map, and finish mapping; otherwise, let i = i + 1 and re-execute the step of reading the ith second image data.
In addition, the first image data set of the space corresponding to the map is the second image data set itself, or is composed of the second image data used for building that map. In this implementation, the amount of data acquisition is reduced.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not to scale unless otherwise specified.
Fig. 1 is a flowchart of a map fusion method according to a first embodiment of the present invention;
Fig. 2 is a flowchart of a map creation method according to the first embodiment of the present invention;
Fig. 3 is a flowchart of a map fusion method according to a second embodiment of the present invention;
Fig. 4 is a schematic diagram of a map data optimization process according to the second embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a map fusion apparatus according to a third embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the various embodiments to provide a thorough understanding of the present application; however, the technical solutions claimed in the present application can be implemented without these technical details, and with various changes and modifications based on the following embodiments.
The first embodiment of the present invention relates to a map fusion method applied to an electronic device, which may be any of various terminals or servers, such as a robot. As shown in fig. 1, the map fusion method includes:
step 101: n maps of the same space are obtained.
Specifically, N is an integer greater than 1. The N maps of the same space may be a multi-segment map created from a second image data set acquired by the robot during one travel. During the travel, if the camera moves violently, the ambient illumination changes greatly, or the feature points are sparse, mapping with the VSLAM technology "breaks", resulting in a multi-segment map.
In one example, an electronic device acquires a second image dataset of a space; and establishing N maps of the space by an instant positioning and mapping technology according to the acquired second image data set of the space.
It should be noted that, as can be understood by those skilled in the art, in practical applications, the electronic device may also acquire a map stored in another device to perform the map fusion operation mentioned in the present embodiment, and the present embodiment does not limit the source of the map.
In one example, the second image data in the second image data set are arranged in chronological order of acquisition. The second image data set mainly comprises spatial image data and direction sensor data. The spatial image data mainly comprise each frame of image information and the corresponding timestamp information. The direction sensor information may be used to assist in mapping and may be odometer information or Inertial Measurement Unit (IMU) information. The odometer information mainly includes the physical output, Euler angles, and the corresponding timestamp information. The IMU information mainly includes acceleration, angular velocity, the timestamp information corresponding to each frame of data, and the like. The electronic device processes the acquired second image data using the vSLAM technology and acquires the timestamp, physical output, and Euler angle information corresponding to each keyframe. During map creation, if visual information is lost (i.e., the illumination changes greatly, the movement is violent, or the environmental feature points are sparse), the existing vSLAM map data are saved and initialization is performed again to establish a new map. The process by which the electronic device creates the maps is shown in fig. 2 and comprises the following steps:
step 201: a second image dataset of the space is acquired.
Specifically, the electronic device may travel once around a preset trajectory and, during the travel, capture the second image data of the space, recording the image data of the space and the direction sensor data for use in creating the maps.
Step 202: let i equal 1 and k equal 1.
Step 203: the ith second image data is read.
Specifically, the electronic device sequentially reads information such as feature points and descriptors in the second image data acquired during operation, so as to create a map of the space from the second image data.
Step 204: and according to the ith second image data, drawing by an instant positioning and drawing technology.
Step 205: and judging whether the built map is interrupted or not.
Specifically, if mapping is interrupted, that is, visual tracking fails during map creation, step 206 is executed; otherwise, step 209 is executed. Because the stability of VSLAM mapping is affected by the surrounding environment and the motion state of the camera, visual tracking may fail, i.e., "break", if the ambient light changes dramatically, the texture in an area is weak (a white wall, etc.), or the camera moves vigorously (fast rotation, etc.).
Step 206: and taking the map established this time as a kth map, and storing the kth map.
Specifically, if visual tracking fails, the electronic device creates a map based on the second image data from the first frame after the last visual tracking failure up to the ith second image data.
Step 207: and judging whether the second image data set is read completely.
Specifically, if the electronic device has performed mapping based on all the second image data, that is, the reading of the second image data set is completed, the mapping process is ended, otherwise, step 208 is executed.
Step 208: let i ═ i +1, k ═ k + 1. Step 203 is then performed.
Specifically, the electronic device continues to build the next map using the VSLAM technique based on the remaining second image data.
Step 209: and judging whether the second image data set is read completely.
If yes, go to step 210, otherwise, go to step 211.
Step 210: and taking the map established this time as a kth map, and storing the kth map. The flow is then ended.
Step 211: let i equal i + 1. Step 203 is then performed.
By performing the above steps, the electronic device may create N maps of the space.
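As an illustrative sketch only (not the patent's implementation), the segmentation loop of steps 202–211 can be written as follows. `try_extend_map` is a hypothetical hook standing in for the VSLAM tracking of step 204; it merely reports whether tracking succeeded for the current frame.

```python
def build_maps(second_image_data, try_extend_map):
    """Split an image sequence into N map segments, starting a new
    segment whenever tracking is interrupted (steps 202-211).

    try_extend_map(current_map, image) -> bool reports whether
    tracking succeeds for `image` given the map built so far
    (hypothetical stand-in for the VSLAM tracking step).
    """
    maps = []      # saved map segments (the N maps of the space)
    current = []   # map being built from consecutive tracked frames
    for image in second_image_data:
        if try_extend_map(current, image):
            current.append(image)      # mapping continues
        else:
            if current:                # "interruption": save the kth map
                maps.append(current)
            current = [image]          # re-initialize a new map
    if current:                        # save the last map when data ends
        maps.append(current)
    return maps

# toy run: "tracking" fails when brightness jumps by more than 50
frames = [10, 12, 14, 90, 92, 30, 32]
ok = lambda cur, img: not cur or abs(img - cur[-1]) <= 50
segments = build_maps(frames, ok)
```

Running the toy example splits the sequence into three map segments, mirroring how a "break" in tracking produces a multi-segment map.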
Step 102: and judging whether an overlapping area exists between two adjacent maps.
Specifically, if there is an overlapping area between two adjacent maps, step 103 is executed, otherwise, step 104 is executed.
In one example, the N maps of the created space are arranged in chronological order of creation. When the map is created by the instant positioning and mapping technology, the electronic equipment maps according to the sequence of the timestamps of the acquired second image data, and when the N spatial maps are arranged according to the created time sequence, the two adjacent maps are the m-1 th map and the m-th map. m is an integer greater than 1.
Step 103: map data of an overlapping area in any one of two adjacent maps in which the overlapping area exists is removed. Step 105 is then performed.
Specifically, in the process of building a map with the VSLAM technology, the tracked pose contains some error. As the path extends, the error of each previous frame is propagated onward, so the pose error of the last frame in the world coordinate system may be very large. In addition to adjusting the pose locally and globally using optimization methods, loop closure detection can also be used to optimize the pose. To enable loop closure detection, data of repeated path areas are often collected during data acquisition so as to raise the probability of detecting a loop during VSLAM mapping; a map therefore often includes repeated paths. Moreover, in some spaces, global mapping and positioning inevitably require some paths in the motion trajectory to be repeated, for example when the paths in the space form a grid (the shape of the character "tian"). Mapping the map data of the multiple maps one by one into a unified map coordinate system preserves the integrity and continuity of the map of the space, facilitates the positioning of robots or pedestrians, and enables map fusion. However, because there may be overlapping path areas between maps, if multiple maps are fused directly without any processing, a series of problems arise when positioning with the fused map:
(1) Because two adjacent maps may share a repeated path area, directly fusing all the trajectory data of each map maps the same physical point to different coordinates in the fused map, which affects positioning accuracy and the navigation effect.
(2) VSLAM positioning mainly extracts environmental feature information and matches it against prior map information to obtain the camera pose information. When the camera is located near the "interruption" position of the mapping, its pose information can be obtained from either the map built before the interruption or the map built after it, and the difference between the two maps causes the positioning result to jump in this area, affecting the navigation of pedestrians or robots.
In the embodiment, before the map fusion of the space is performed, the electronic device judges whether an overlapping area exists between two adjacent maps, and if the overlapping area exists, the map data of the overlapping area of one map is removed, so that the situation that two different pieces of position information exist in the same map point in the fused map is reduced, and the problem of jump of a positioning result is further reduced.
Step 104: the map data of two adjacent maps are retained.
Specifically, if there is no overlapping area between two adjacent maps, then when the two maps are mapped into the same map, no position carries two pieces of position information; therefore, the map data of both maps can be retained.
Step 105: and fusing the rest map data in each map to obtain a spatial fusion map.
Specifically, the remaining map data in each map are mapped to the same coordinate system, i.e., the same map, to obtain a fusion map.
In one example, the map of the space is a visual map (vSLAM map), and the process of fusing the remaining map data in each map includes the sub-steps of:
for each map, the following operations are performed:
step 1051: and acquiring the direction sensor data corresponding to each key frame of the map according to the timestamp alignment mode. Because the frame rate of data acquired by the direction sensor is far greater than the key frame rate in the vSLAM map, the data output by the direction sensor is more compact. Due to the fact that an 'interruption' condition possibly exists in the process of visual map building, a direction sensor data serial number corresponding to each key frame of the vSLAM map, a direction sensor data serial number corresponding to the starting frame of the vSLAM map and a direction sensor serial number corresponding to the ending key frame of the vSLAM map are mainly obtained.
Step 1052: vSLAM angle replacement. Because a "break" may occur, or no loop may be detected, during vSLAM mapping, the visual output contains errors in physical scale and direction. If an interruption occurs during mapping, then, because vSLAM poses are relative, the pose is recalculated after re-initialization, causing a discontinuity in direction. Because direction sensor data are continuous and stable, they can be used to replace the yaw angle obtained by vSLAM mapping, which reduces the directional error in the no-loop case and ensures continuity of direction during vSLAM mapping.
Step 1053: determining coordinates of a starting key frame of a map in a fusion map; and calculating the coordinates of each key frame of the map in the fusion map according to the coordinates of the initial key frame of the map in the fusion map and the coordinates of the direction sensor data corresponding to each key frame of the map in the fusion map.
Specifically, the coordinates corresponding to the start keyframe of the map are mapped to the fusion map to determine the start keyframe's coordinates in the fusion map. If the map is the first map mapped to the fusion map, these coordinates can be determined directly from the mapping result. If the map is the kth map mapped to the fusion map and an interruption occurred, the coordinates of the start keyframe in the fusion map are calculated using formula (1) and formula (2):
x′=x+d*cos(θ′) (1)
y′=y+d*sin(θ′) (2)
where (x, y) are the coordinates in the fusion map of the last keyframe of the (k-1)th map mapped to the fusion map, d is the distance between the end keyframe of the (k-1)th mapped map and the start keyframe of the kth mapped map, determined from the direction sensor data corresponding to those two keyframes, θ′ is the direction-corrected angle of the map, and (x′, y′) are the coordinates of the start keyframe of the kth mapped map in the fusion map.
After the coordinates of the initial key frame of the map in the fusion map are known, the coordinates of each key frame in the map in the fusion map can be calculated according to the direction sensor data corresponding to each key frame of the map by adopting the principles similar to the formula (1) and the formula (2).
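As a numeric illustration of formulas (1) and (2) (the values below are made up for the example):

```python
import math

def next_position(x, y, d, theta):
    """Formulas (1)-(2): advance (x, y) by distance d along the
    direction-corrected angle theta (in radians)."""
    return x + d * math.cos(theta), y + d * math.sin(theta)

# end keyframe of map k-1 lies at (2.0, 3.0) in the fusion map; the
# start keyframe of map k is 1.0 m away at a corrected heading of 90 deg
x1, y1 = next_position(2.0, 3.0, 1.0, math.pi / 2)
```

The same function, applied keyframe by keyframe, yields the coordinates of every keyframe of the map in the fusion map.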
During mapping, an interruption may occur, producing two map segments, and part of the map of the space may be missing between them. In this case, let the coordinates corresponding to the last keyframe of the first segment be (x, y). In the missing region the direction sensor has moved through a number of distance increments, where the distance between two successive frames of sensor data is d′0, d′1, ..., d′n and the direction-corrected angle of the map at each distance is θ′0, θ′1, ..., θ′n; the coordinates corresponding to the direction sensor data at each distance can then be determined. For the first movement distance d′0, the sensor coordinates are calculated using formula (3) and formula (4):
x′0=x+d′0*cos(θ′0) (3)
y′0=y+d′0*sin(θ′0) (4)
where (x, y) are the coordinates corresponding to the last keyframe of the first map segment, (x′0, y′0) are the fusion-map coordinates corresponding to the sensor data at the first movement distance, d′0 is the first movement distance, and θ′0 is the direction-corrected angle of the map at the first movement distance.
By analogy, for the nth movement distance d′n, the sensor coordinates are calculated using formula (5) and formula (6):
x′n=x′n-1+d′n*cos(θ′n) (5)
y′n=y′n-1+d′n*sin(θ′n) (6)
where (x′n-1, y′n-1) are the coordinates corresponding to the direction sensor data at the (n-1)th movement distance, (x′n, y′n) are the coordinates corresponding to the direction sensor data at the nth movement distance, d′n is the nth movement distance, and θ′n is the direction-corrected angle of the map at the nth movement distance.
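Formulas (3)–(6) apply the same advance repeatedly across the missing region. A minimal sketch, with made-up increments:

```python
import math

def dead_reckon(x, y, distances, angles):
    """Chain formulas (3)-(6): starting from the last keyframe (x, y)
    of the first segment, accumulate each sensor movement distance
    d'_i along its direction-corrected angle theta'_i (radians)."""
    for d, theta in zip(distances, angles):
        x += d * math.cos(theta)
        y += d * math.sin(theta)
    return x, y

# three increments: 1 m east, 1 m east, then 2 m north
xe, ye = dead_reckon(0.0, 0.0, [1.0, 1.0, 2.0], [0.0, 0.0, math.pi / 2])
```

The final (xe, ye) gives the fusion-map position reached by the direction sensor across the gap, from which the second segment's start keyframe can be placed.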
Note that, in the case of a map with no loop, the map has a deviation in scale, and the coordinates of the map can be determined by equations (7), (8), and (9):
xw=xn+ε*d*cos(θ′) (7)
yw=yn+ε*d*sin(θ′) (8)
ε=dt/l (9)
where (xn, yn) are the coordinates corresponding to the previous keyframe in the map, (xw, yw) are the coordinates in the fusion map, d is the distance between two adjacent keyframes in the map, ε is a distance scale factor, dt is the sensor movement distance, l is the visual movement distance, and θ′ is the direction-corrected angle of the map over the movement distance.
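Formulas (7)–(9) rescale the visual step by ε = dt / l before accumulating it. A sketch with illustrative numbers:

```python
import math

def scale_corrected_step(xn, yn, d, theta, d_t, l):
    """Formulas (7)-(9): advance from the previous keyframe (xn, yn)
    by the inter-keyframe distance d, rescaled by eps = d_t / l
    (sensor movement distance over visual movement distance)."""
    eps = d_t / l                        # formula (9)
    xw = xn + eps * d * math.cos(theta)  # formula (7)
    yw = yn + eps * d * math.sin(theta)  # formula (8)
    return xw, yw

# the vision reports 0.8 m between keyframes, but the sensor measured
# 1.0 m over that stretch, so eps = 1.25 and the step becomes 1.0 m
xw, yw = scale_corrected_step(0.0, 0.0, 0.8, 0.0, 1.0, 0.8)
```

This corrects the scale drift of a no-loop map so that keyframe spacing in the fusion map matches the sensor-measured distance.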
It should be noted that, as can be understood by those skilled in the art, the electronic device may merge a plurality of maps into any one map, or may merge the remaining map data of the plurality of maps into a new map.
It should be noted that, as will be understood by those skilled in the art, in practical applications, the map fusion may also be implemented in other ways, which are not listed here.
In one example, after judging whether an overlapping area exists between two adjacent maps, the electronic device judges whether an overlapping area exists between any two maps before fusing the remaining map data in each map to obtain a fused map of a space; if yes, removing the map data of the overlapping area in any one of the two maps with the overlapping area; if not, executing a step of fusing the remaining map data in each map to obtain a spatial fusion map.
It is worth mentioning that the electronic device detects whether each map overlaps any of the other maps and, once an overlap is found, deletes the map data of the overlapping area from one of the two overlapped maps. This further optimizes the map data used to obtain the fusion map, further reduces the situation in which the same map point has two pieces of position information, and further reduces jumps in the positioning result.
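Extending the adjacent-pair check to all map pairs can be sketched as a pairwise loop. Here `maps_overlap` and `remove_overlap` are hypothetical stand-ins for the positioning-model overlap test and the coordinate-index deletion described above; the toy maps are sets of map-point indices.

```python
from itertools import combinations

def remove_all_overlaps(maps, maps_overlap, remove_overlap):
    """Check every pair of maps; whenever a pair overlaps, delete the
    overlapping map data from one map of the pair (here, the second)."""
    for a, b in combinations(range(len(maps)), 2):
        if maps_overlap(maps[a], maps[b]):
            maps[b] = remove_overlap(maps[b], maps[a])
    return maps

# overlap = shared map points; removal = drop the shared points
maps = [{1, 2, 3}, {3, 4}, {5, 6}]
fused_input = remove_all_overlaps(
    maps,
    maps_overlap=lambda p, q: bool(p & q),
    remove_overlap=lambda q, p: q - p,
)
```

After the loop, every map point appears in at most one map, so the fusion step cannot assign two positions to the same point.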
The above description is only for illustrative purposes and does not limit the technical aspects of the present invention.
Compared with the prior art, the map fusion method provided by this embodiment determines, before fusing the multiple maps of a space, whether overlapping areas exist among them, and removes the map data of each overlapping area from one of the overlapping maps. This reduces the situation in which the same map point has two different pieces of position information in the fusion map, and thus reduces jumps in the positioning result.
A second embodiment of the present invention relates to a map fusion method. In this embodiment, the method for determining whether there is an overlapping area between two adjacent maps by the electronic device in the first embodiment is illustrated by taking two adjacent maps as an mth map and a tth map as an example.
Specifically, as shown in fig. 3, the present embodiment includes steps 301 to 309, where steps 301, 306, and 308 are substantially the same as steps 101, 103, and 105 of the first embodiment, respectively, and are not repeated herein. The following mainly introduces the differences:
Step 301: N maps of the same space are obtained.
Step 302: A first image data set of the space corresponding to the mth map is acquired.
Specifically, m, t are positive integers not greater than N, and m ≠ t.
In one example, the N maps are arranged in chronological order of map building, and the direction of the path when the electronic device collects the first image data set is the same as the direction of the path when the map is built. Since the tth map is adjacent to the mth map, the constraint relationship between t and m is as follows: t = m + 1 or t = m - 1.
Case A: when 0 < m < N, t = m + 1, and when m = N, t = 1; that is, the tth map is the map following the mth map. In this case, the latter segment of the mth map's route is more likely to overlap the former segment of the tth map's route, so the electronic device may arrange the first image data in the first image data set in reverse shooting order, that is, first image data shot earlier are arranged later.
Case B: when 1 < m ≤ N, t = m - 1, and when m = 1, t = N; that is, the tth map is the map preceding the mth map. In this case, the former segment of the mth map's route is more likely to overlap the latter segment of the tth map's route, so the electronic device may arrange the first image data in the first image data set in forward shooting order, that is, first image data shot earlier are arranged earlier.
It is worth mentioning that the first image data corresponding to the higher-probability route is used for positioning detection first, so that an overlapping area, if present, can be found more quickly.
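The two cases can be condensed into a small helper; the function name and the boolean reverse-order convention are assumptions made for illustration, not part of the patent.

```python
def adjacent_index_and_order(m, N, direction="next"):
    # Case A: the tth map follows the mth map (wrapping from N back to 1);
    # the first image data are then tested in reverse shooting order.
    # Case B: the tth map precedes the mth map (wrapping from 1 to N);
    # the first image data are then tested in forward shooting order.
    if direction == "next":
        t = m + 1 if m < N else 1
        reverse = True
    else:
        t = m - 1 if m > 1 else N
        reverse = False
    return t, reverse
```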
It should be noted that, as those skilled in the art can understand, in practical applications a developer may specify whether any two of the N maps are adjacent, and input an instruction file describing the relationships among the N maps into the electronic device; the electronic device can then determine the relationship between any two of the N maps from the instruction file.
In one example, the first image data set of the space corresponding to the map is the second image data set, or the first image data set of the space corresponding to the map is composed of the second image data set used for building the map.
It is worth mentioning that using the image data collected during map building to test whether two adjacent maps overlap reduces the amount of data to be collected.
It should be noted that, as will be understood by those skilled in the art, in practical applications, the first image data set may also be an image data set of a space acquired again after the mapping is performed, and the relationship between the first image data set and the second image data set is not limited in the present embodiment.
Step 303: and sequentially inputting the first image data in the first image data set into the first positioning model until the first positioning model is successfully positioned based on the input first image data.
Specifically, the first positioning model performs positioning based on the input first image data and the map data of the mth map. The electronic device reads the first image data in the first image data set in sequence and feeds each into the first positioning model. If the first positioning model succeeds, the mth map can locate the map point at which that first image data was shot; it then needs to be judged whether the same map point can also be located with the tth map, so as to determine whether the mth map and the tth map have an overlapping area.
Step 304: and inputting the first image data successfully positioned into the second positioning model, and judging whether the second positioning model is successfully positioned.
Specifically, the second positioning model performs positioning based on the input image data and the map data of the tth map. If the second positioning model succeeds, the position of the map point at which the first image data was shot can be located with the tth map, and step 305 is executed. If the second positioning model fails, step 307 is executed.
It should be noted that, as those skilled in the art will appreciate, the first and second positioning models may be algorithm models based on VSLAM techniques, and may be derived from the same model. For example, when the algorithm model based on the VSLAM technique loads the mth map, it serves as the first positioning model; when it loads the tth map, it serves as the second positioning model. In this embodiment, the two are called the first positioning model and the second positioning model only to distinguish the different maps they load; this does not mean they are necessarily two completely unrelated models.
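A minimal sketch of this point, that one localization model plays both roles depending on which map is loaded; the `Localizer` class and its membership-test `locate` are illustrative stand-ins for a real VSLAM relocalization module, not the patent's implementation.

```python
class Localizer:
    # One localization model; loading the mth map makes it the "first"
    # positioning model, loading the tth map makes it the "second" one.
    def __init__(self):
        self.map_data = frozenset()

    def load_map(self, map_data):
        self.map_data = map_data
        return self

    def locate(self, image_descriptor):
        # stand-in for VSLAM relocalization: succeed iff the image's
        # descriptor matches a map point of the loaded map
        return image_descriptor in self.map_data
```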
Step 305: and determining that the m-th map and the t-th map have an overlapping area.
Specifically, since the map point can be located with both the tth map and the mth map, the mth map and the tth map have an overlapping area.
Step 306: map data of an overlapping area in any one of two adjacent maps in which the overlapping area exists is removed. Step 309 is then performed.
Specifically, since both the tth map and the mth map can locate the map point, directly merging the two maps into the same map would cause a jump when that map point is located. The electronic device therefore optimizes the map data of the tth map or of the mth map, removing the map data of the overlapping area from the tth map or from the mth map, so as to reduce jumps in the positioning result.
In one example, the map data of the map includes location information of map points and coordinate indexes of the map points. The process of the electronic device removing the map data of the overlapping area is as follows: acquiring a coordinate index of a map point positioned by the second positioning model; determining the coordinate index of the map point overlapped in the t-th map according to the coordinate index of the map point obtained by positioning; and deleting the coordinate index of the overlapped map point in the t-th map and the position information of the map point corresponding to the coordinate index of the overlapped map point in the t-th map.
It should be noted that this embodiment illustrates the optimization of the map data by deleting the map data of the overlapping area in the tth map; in practical applications, the map data of the overlapping area in the mth map may be deleted instead, and the deletion method can refer to the related contents above, which are not repeated here.
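The deletion described in this step might look as follows in Python; the two-field layout of the map data (a coordinate-index list plus a position table) is an assumption made for illustration.

```python
def remove_overlap_data(t_map, overlapped_indexes):
    # t_map: {"coord_index": [...], "positions": {index: (x, y), ...}}
    # Delete the overlapped coordinate indexes and the position
    # information of the corresponding map points.
    t_map["coord_index"] = [i for i in t_map["coord_index"]
                            if i not in overlapped_indexes]
    for i in overlapped_indexes:
        t_map["positions"].pop(i, None)
    return t_map
```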
Step 307: it is determined that there is no overlapping area between the mth map and the tth map.
Specifically, if the second positioning model cannot perform positioning based on the first image data that the first positioning model located successfully, the map point corresponding to that first image data is not in an overlapping area, and the probability that the map points corresponding to the first image data arranged after it overlap is even smaller.
It is worth mentioning that ending the test of the subsequent first image data, once the map point corresponding to the first image data with the highest overlap probability is found not to overlap, reduces the computation of the electronic device and speeds up map fusion.
In one example, after the second positioning model fails to locate based on the first image data that the first positioning model located successfully, the electronic device may continue to input the remaining first image data into the first positioning model in sequence, until the first image data set has been read completely or first image data is found that the second positioning model can also locate successfully. If first image data exists that both the first positioning model and the second positioning model locate successfully, it is determined that an overlapping area exists; otherwise, it is determined that no overlapping area exists between the mth map and the tth map.
It is worth mentioning that testing all the first image data improves the accuracy with which the electronic device judges whether the maps overlap.
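The exhaustive variant above can be sketched as a single scan; `first_locate` and `second_locate` are hypothetical callables wrapping the two positioning models.

```python
def maps_overlap(first_images, first_locate, second_locate):
    # Overlap exists iff some first image data can be located by both
    # the first and the second positioning model.
    for img in first_images:
        if first_locate(img) and second_locate(img):
            return True
    return False
```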
Step 308: the map data of two adjacent maps are retained.
Specifically, if the two adjacent maps do not overlap, all of their map data are usable, so the map data of both maps are retained.
Step 309: and fusing the rest map data in each map to obtain a spatial fusion map.
The following describes, with reference to a scenario, a process of the electronic device optimizing N maps (a process of removing map data of an overlapping area).
Assume that N maps in total have been created for a certain space, and that the number of first image data in the first image data set is M. The optimization of the map data of the N maps by the electronic device includes the following steps, as shown in FIG. 4:
Step 401: Let m = N.
Step 402: the map data of the mth map is loaded.
Step 403: let i equal 1.
Step 404: and reading the ith first image data, and performing visual positioning based on the ith first image data. The electronic device performs positioning based on the ith first image data and the mth map.
Step 405: and judging whether the positioning is successful. If the positioning fails, go to step 406, and if the positioning succeeds, go to step 407.
Step 406: let i equal i +1, judge whether i is greater than M. If i is greater than M, go to step 415, if i is less than or equal to M, go to step 404.
Step 407: the frame number i of the first image data is recorded.
Step 408: it is judged whether m-1 is equal to 0. If yes, go to step 409, otherwise, go to step 410.
Step 409: and loading the map data of the Nth map. Step 411 is then performed.
Step 410: the map data of the m-1 th map is loaded.
Step 411: visual positioning is performed based on the ith first image data. Specifically, the electronic device performs positioning based on the ith first image data and the loaded map.
Step 412: and judging whether the positioning is successful. If the positioning fails, go to step 413, and if the positioning succeeds, go to step 414.
Step 413: the map data of the m-1 th map is retained. Step 416 is then performed.
Step 414: the coordinate index u of the located map point is recorded.
Step 415: and keeping the relevant data of the map points with the coordinate indexes of 0-U in the map data of the m-1 th map, and deleting the relevant data of the map points with the coordinate indexes of U-U. And U is the total number of coordinate indexes of the (m-1) th map, and the related data comprises position information.
Step 416: and judging whether m is equal to 1, if so, ending the process, and if not, executing the step 417.
Step 417: let m be m-1. Step 402 is then performed.
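The flow of steps 401 to 417 might be condensed as the sketch below, under simplifying assumptions: each map is a dict from coordinate index to position, `locate` is a hypothetical stand-in for visual positioning that returns the matched coordinate index or None, and the 0-to-u retention of step 415 is read as deleting every index from u upward.

```python
def optimize_maps(maps, images, locate):
    # maps: list of N map dicts {coordinate_index: position}; indexes are
    # assigned in build order. images: the first image data set.
    N = len(maps)
    for m in range(N, 0, -1):                          # steps 401, 416, 417
        current = maps[m - 1]                          # step 402 (m is 1-based)
        hit = None
        for img in images:                             # steps 403-406
            if locate(img, current) is not None:       # steps 404-405
                hit = img                              # step 407
                break
        if hit is None:
            continue                                   # no overlap candidate
        prev = maps[N - 1] if m == 1 else maps[m - 2]  # steps 408-410
        u = locate(hit, prev)                          # steps 411-412
        if u is None:
            continue                                   # step 413: keep prev intact
        for k in [k for k in prev if k >= u]:          # steps 414-415
            prev.pop(k)                                # drop the overlapping tail
    return maps
```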
The above description is only for illustrative purposes and does not limit the technical aspects of the present invention.
Compared with the prior art, the map fusion method provided by this embodiment optimizes the map data of the multiple maps before fusing the multiple maps of the space, based on whether overlapping areas exist among them: the map data of the overlapping area is removed from one of any two overlapping maps. This reduces the cases in which a single map point in the fused map carries two different pieces of position information, and thus alleviates jumps in the positioning result.
The steps of the above methods are divided only for clarity of description; in implementation, they may be combined into one step, or a step may be split into several, as long as the same logical relationship is preserved, all of which fall within the protection scope of this patent. Adding insignificant modifications to, or introducing insignificant designs into, an algorithm or process without changing its core design also falls within the protection scope of this patent.
A third embodiment of the present invention relates to an electronic apparatus, as shown in fig. 5, including: an acquisition module 501, an optimization module 502 and a fusion module 503. The obtaining module 501 is configured to obtain N maps in the same space; n is an integer greater than 1. The optimization module 502 is configured to determine whether an overlapping area exists between two adjacent maps; if so, removing the map data of the overlapping area in any one of the two adjacent maps with the overlapping area; if not, the map data of two adjacent maps are reserved. The fusion module 503 is configured to fuse remaining map data in each map to obtain a spatial fusion map.
It should be understood that this embodiment is a system example corresponding to the first embodiment, and may be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the first embodiment.
It should be noted that each module referred to in this embodiment is a logical module; in practical applications, a logical unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, units not closely related to solving the technical problem proposed by the present invention are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
A fourth embodiment of the present invention relates to an electronic apparatus, as shown in fig. 6, including: at least one processor 601; and a memory 602 communicatively coupled to the at least one processor 601; the memory 602 stores instructions executable by the at least one processor 601, and the instructions are executed by the at least one processor 601 to enable the at least one processor 601 to execute the map fusion method according to the above embodiments.
The electronic device includes: one or more processors 601 and a memory 602, one processor 601 being illustrated in fig. 6. The processor 601 and the memory 602 may be connected by a bus or other means, and fig. 6 illustrates an example of a connection by a bus. The memory 602 is a non-volatile computer readable storage medium that can be used to store non-volatile software programs, non-volatile computer executable programs, and modules, such as maps stored in the memory 602 in the embodiments of the present application. The processor 601 executes various functional applications and data processing of the device by running nonvolatile software programs, instructions, and modules stored in the memory 602, that is, implements the above-described map fusion method.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store a list of options, etc. Further, the memory 602 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 602 may optionally include memory located remotely from the processor 601, which may be connected to an external device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 602 and, when executed by the one or more processors 601, perform the map fusion method of any of the method embodiments described above.
The above product can execute the method provided by the embodiments of the present application and has the functional modules and beneficial effects corresponding to that method; for technical details not described in this embodiment, refer to the method provided by the embodiments of the present application.
A fifth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
That is, as can be understood by those skilled in the art, all or part of the steps in the method for implementing the embodiments described above may be implemented by a program instructing related hardware, where the program is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, or the like) or a processor (processor) to execute all or part of the steps of the method described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (10)

1. A map fusion method is applied to electronic equipment and comprises the following steps:
acquiring N maps of the same space; n is an integer greater than 1;
judging whether an overlapping area exists between two adjacent maps or not;
if so, removing the map data of the overlapping area in any one of the two adjacent maps with the overlapping area;
if not, retaining the map data of two adjacent maps;
and fusing the remaining map data in each map to obtain a fused map of the space.
2. The map fusion method according to claim 1, wherein the two adjacent maps are an mth map and a tth map, m and t are positive integers not greater than N, and m ≠ t;
the determining whether there is an overlapping area between two adjacent maps specifically includes:
acquiring a first image data set of the space corresponding to the mth map;
sequentially inputting first image data in the first image data set into a first positioning model until the first positioning model is successfully positioned based on the input first image data; the first positioning model is used for positioning according to input first image data and map data of the mth map;
inputting the first image data which is successfully positioned into a second positioning model, and judging whether the second positioning model is successfully positioned; the second positioning model is used for positioning according to input image data and map data of the tth map;
if yes, determining that the m-th map and the t-th map have an overlapping area.
3. The map fusion method according to claim 2, wherein the N maps are arranged in chronological order of map building, and a path direction when the electronic device collects the first image data set is the same as a path direction when the map is built;
the constraint relation of m and t is as follows: when m is more than 0 and less than N, t is m +1, and when m is N, t is 1; the first image data in the first image data set are arranged reversely according to the shooting sequence; or,
the constraint relation of m and t is as follows: when m is more than 1 and less than or equal to N, t is m-1, and when m is 1, t is N; the first image data in the first image data set is arranged in a forward direction in a shooting order.
4. The map fusion method according to claim 2, wherein the removing of the map data of the overlapping area in any one of the two adjacent maps with the overlapping area specifically includes:
acquiring a coordinate index of a map point positioned by the second positioning model;
determining the coordinate index of the map point overlapped in the t-th map according to the coordinate index of the map point obtained by positioning;
and deleting the coordinate index of the overlapped map point in the t-th map and the position information of the map point corresponding to the coordinate index of the overlapped map point in the t-th map.
5. The map fusion method according to any one of claims 1 to 4, wherein after the determining whether there is an overlapping area between two adjacent maps, before the fusing remaining map data in each map to obtain a fused map of the space, the map fusion method further comprises:
judging whether an overlapping area exists between any two maps;
if yes, removing the map data of the overlapping area in any one of the two maps with the overlapping area;
and if not, executing the step of fusing the remaining map data in each map to obtain a fused map of the space.
6. The map fusion method according to claim 2, wherein the obtaining of the N maps of the space specifically includes:
acquiring a second image dataset of the space;
and establishing N maps of the space by an instant positioning and map establishing technology according to the acquired second image data set of the space.
7. The map fusion method of claim 6, wherein the second images in the second image dataset are placed in an order of acquisition;
the establishing of the N maps of the space by the instant positioning and mapping technique according to the acquired second image dataset of the space specifically includes:
let i equal to 1, k equal to 1;
reading the ith second image data;
according to the ith second image data, drawing is carried out through an instant positioning and drawing technology;
judging whether the established map is interrupted;
if yes, taking the map established this time as a kth map, and storing the kth map; judging whether the second image data set is read completely; if yes, finishing building the graph; otherwise, let i equal to i +1, k equal to k +1, and re-execute the step of reading the ith second image data;
if not, judging whether the second image data set is completely read; if yes, taking the map established this time as a kth map, saving the kth map, and finishing map establishment; otherwise, let i equal to i +1, re-execute the step of reading the ith second image data.
8. The map fusion method according to claim 6, wherein the first image data set of the space corresponding to the map is the second image data set, or the first image data set of the space corresponding to the map is composed of the second image data set used for building the map.
9. An electronic device, comprising: at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the map fusion method of any one of claims 1 to 8.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the map fusion method of any one of claims 1 to 8.
CN201910700973.9A 2019-07-31 2019-07-31 Map fusion method, electronic device and storage medium Active CN110415174B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910700973.9A CN110415174B (en) 2019-07-31 2019-07-31 Map fusion method, electronic device and storage medium


Publications (2)

Publication Number Publication Date
CN110415174A true CN110415174A (en) 2019-11-05
CN110415174B CN110415174B (en) 2023-07-07

Family

ID=68364517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910700973.9A Active CN110415174B (en) 2019-07-31 2019-07-31 Map fusion method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN110415174B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110986969A (en) * 2019-11-27 2020-04-10 Oppo广东移动通信有限公司 Map fusion method and device, equipment and storage medium
CN111652934A (en) * 2020-05-12 2020-09-11 Oppo广东移动通信有限公司 Positioning method, map construction method, device, equipment and storage medium
CN113029167A (en) * 2021-02-25 2021-06-25 深圳市朗驰欣创科技股份有限公司 Map data processing method, map data processing device and robot

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120262482A1 (en) * 2011-04-14 2012-10-18 Aisin Aw Co., Ltd. Map image display system, map image display device, map image display method, and computer program
CN109029422A (en) * 2018-07-10 2018-12-18 北京木业邦科技有限公司 A kind of method and apparatus of the three-dimensional investigation map of multiple no-manned plane cooperation building
CN109073398A (en) * 2018-07-20 2018-12-21 深圳前海达闼云端智能科技有限公司 Map establishing method, positioning method, device, terminal and storage medium
CN109285117A (en) * 2018-09-05 2019-01-29 南京理工大学 A kind of more maps splicing blending algorithm based on map feature
CN109978755A (en) * 2019-03-11 2019-07-05 广州杰赛科技股份有限公司 Panoramic image synthesis method, device, equipment and storage medium



Also Published As

Publication number Publication date
CN110415174B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
JP7326720B2 (en) Mobile position estimation system and mobile position estimation method
CN109084732B (en) Positioning and navigation method, device and processing equipment
CN111445526B (en) Method, device and storage medium for estimating pose of image frame
KR102266830B1 (en) Lane determination method, device and storage medium
CN107888828B (en) Space positioning method and device, electronic device, and storage medium
CN110555901B (en) Method, device, equipment and storage medium for positioning and mapping dynamic and static scenes
CN114236552B (en) Repositioning method and repositioning system based on laser radar
US8437501B1 (en) Using image and laser constraints to obtain consistent and improved pose estimates in vehicle pose databases
CN107167826B (en) Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving
CN109461208B (en) Three-dimensional map processing method, device, medium and computing equipment
CN110415174A (en) Map amalgamation method, electronic equipment and storage medium
CN112734852A (en) Robot mapping method and device and computing equipment
CN109163722B (en) Humanoid robot path planning method and device
JP2021140822A (en) Vehicle control method, vehicle control device, and vehicle
CN109074638A (en) Fusion graph building method, related device and computer readable storage medium
US11948331B2 (en) Guided batching
CN111288971B (en) Visual positioning method and device
CN112150550B (en) Fusion positioning method and device
CN112950710A (en) Pose determination method and device, electronic equipment and computer readable storage medium
WO2013140133A1 (en) Generating navigation data
CN116030340A (en) Robot, positioning information determining method, device and storage medium
CN112348854A (en) Visual inertial mileage detection method based on deep learning
CN109074407A (en) Multi-source data mapping method, related device and computer-readable storage medium
CN113256736B (en) Multi-camera visual SLAM method based on observability optimization
CN112669196B (en) Method and equipment for optimizing data by factor graph in hardware acceleration engine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant