CN110415174B - Map fusion method, electronic device and storage medium - Google Patents

Map fusion method, electronic device and storage medium

Info

Publication number
CN110415174B
CN110415174B (application CN201910700973.9A)
Authority
CN
China
Prior art keywords
map
image data
maps
space
overlapping area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910700973.9A
Other languages
Chinese (zh)
Other versions
CN110415174A (en
Inventor
王超鹏
林义闽
廉士国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Beijing Technologies Co Ltd
Original Assignee
Cloudminds Beijing Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Beijing Technologies Co Ltd filed Critical Cloudminds Beijing Technologies Co Ltd
Priority to CN201910700973.9A priority Critical patent/CN110415174B/en
Publication of CN110415174A publication Critical patent/CN110415174A/en
Application granted granted Critical
Publication of CN110415174B publication Critical patent/CN110415174B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention relate to the field of image processing and disclose a map fusion method, an electronic device and a storage medium. In some embodiments of the present application, a map fusion method applied to an electronic device includes the following steps: acquiring N maps of the same space, where N is an integer greater than 1; judging whether an overlapping area exists between two adjacent maps; if so, removing the map data of the overlapping area from either one of the two adjacent maps; if not, retaining the map data of both adjacent maps; and fusing the map data remaining in each map to obtain a fused map of the space. This implementation reduces jumps in the positioning result.

Description

Map fusion method, electronic device and storage medium
Technical Field
Embodiments of the invention relate to the field of image processing, and in particular to a map fusion method, an electronic device and a storage medium.
Background
Currently, technicians may use visual simultaneous localization and mapping (VSLAM) techniques for robot or pedestrian navigation. VSLAM collects views of the environment through a camera, processes them, extracts feature points and matches them against known prior map information to obtain pose information. The known prior map information mainly refers to map information established by VSLAM. The quality of VSLAM mapping is affected by the surrounding environment: if the environment's feature points and texture information are rich enough, a continuous map can be built, yielding one continuous section of map data; if the camera moves violently, the ambient light changes greatly or the feature points are sparse, VSLAM mapping is interrupted, finally yielding multiple sections of map data.
However, the inventors found at least the following problem in the prior art: when such a multi-section map is used for positioning, the positioning result may jump, affecting the navigation of pedestrians or robots.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
Embodiments of the invention aim to provide a map fusion method, an electronic device and a storage medium that reduce jumps in the positioning result.
To solve the above technical problem, an embodiment of the invention provides a map fusion method applied to an electronic device, comprising the following steps: acquiring N maps of the same space, where N is an integer greater than 1; judging whether an overlapping area exists between two adjacent maps; if so, removing the map data of the overlapping area from either one of the two adjacent maps; if not, retaining the map data of both adjacent maps; and fusing the map data remaining in each map to obtain a fused map of the space.
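The claimed steps can be sketched as follows. This is an illustrative toy model, not the patent's implementation: each map is reduced to a set of map-point identifiers, an overlapping area to a shared subset of identifiers, and adjacency to the cyclic pairing (1, 2), (2, 3), ..., (N, 1) described later in the disclosure.

```python
def fuse_maps(maps):
    """Toy sketch of the claimed steps. All names and the set-based map
    representation are illustrative assumptions."""
    n = len(maps)
    assert n > 1, "N must be an integer greater than 1"
    maps = [set(m) for m in maps]
    for m in range(n):
        t = (m + 1) % n  # t = m + 1, wrapping to the 1st map when m = N
        overlap = maps[m] & maps[t]
        if overlap:
            maps[t] -= overlap   # remove the overlap from either one map
        # if there is no overlap, both maps' data are retained unchanged
    return set().union(*maps)    # fuse the remaining map data
```

After the overlap is removed, each physical map point survives in at most one of the sections, which is the property the fusion step relies on.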
The embodiment of the invention also provides electronic equipment, which comprises: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the map fusion method as referred to in the above embodiments.
The embodiment of the invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the map fusion method mentioned in the above embodiment.
Compared with the prior art, embodiments of the invention check, before fusing multiple maps of a space, whether the maps have overlapping areas, and remove the map data of any overlapping area from one of the maps. This reduces the situation where the same map point has two different pieces of position information in the fused map, and thereby reduces jumps in the positioning result.
In addition, the two adjacent maps are an m-th map and a t-th map, where m and t are positive integers not greater than N and m ≠ t. Judging whether an overlapping area exists between the two adjacent maps specifically includes: acquiring a first image data set of the space corresponding to the m-th map; sequentially inputting the first image data of the first image data set into a first positioning model until the first positioning model localizes successfully based on the input first image data, the first positioning model localizing according to the input first image data and the map data of the m-th map; inputting the successfully localized first image data into a second positioning model and judging whether the second positioning model localizes successfully, the second positioning model localizing according to the input image data and the map data of the t-th map; and if so, determining that an overlapping area exists between the m-th map and the t-th map. In this implementation, whether the m-th and t-th maps overlap is judged by whether both positioning models localize successfully.
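The two-model overlap test above can be sketched as follows; `locate_in_m` and `locate_in_t` are hypothetical stand-ins for the first and second positioning models, each returning a pose on success and None on failure.

```python
def maps_overlap(first_image_set, locate_in_m, locate_in_t):
    """Sketch of the claimed overlap test, under the assumption that the
    positioning models are callables returning a pose or None."""
    for image in first_image_set:
        # Feed images into the first model until it localizes successfully.
        if locate_in_m(image) is None:
            continue
        # Try the successfully localized image on the second model.
        if locate_in_t(image) is not None:
            return True  # both models localize: the maps overlap
        # The patent leaves the failure branch unspecified; this sketch
        # simply keeps trying the remaining images.
    return False
```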
In addition, the N maps are arranged in the chronological order of mapping, and the travel direction of the electronic device when collecting the first image data set is the same as its travel direction when building the maps. The constraint relation between m and t is: when 0 < m < N, t = m + 1, and when m = N, t = 1, in which case the first image data in the first image data set are arranged in reverse shooting order; alternatively, when 1 < m ≤ N, t = m − 1, and when m = 1, t = N, in which case the first image data are arranged in forward shooting order. In this implementation, testing starts from the region with the highest overlap probability; when an overlap exists, this speeds up the test and the fusion, and reduces the computation and power consumption of the electronic device.
In addition, removing the map data of the overlapping area from either one of the two adjacent maps specifically includes: acquiring the coordinate indexes of the map points localized by the second positioning model; determining, according to these coordinate indexes, the coordinate indexes of the overlapped map points in the t-th map; and deleting from the t-th map both the coordinate indexes of the overlapped map points and the position information of the corresponding map points. This optimizes the map data.
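A minimal sketch of the deletion step, assuming (purely for illustration, the patent does not fix a storage layout) that the t-th map is a mapping from a map point's coordinate index to its position information:

```python
def remove_overlap(map_t, overlapping_indices):
    """Delete both the coordinate index and the position information of
    each overlapped map point from the t-th map. The dict layout is an
    assumption made for this sketch."""
    for idx in overlapping_indices:
        map_t.pop(idx, None)  # drops index and associated position info
    return map_t
```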
In addition, after judging whether an overlapping area exists between two adjacent maps, and before fusing the map data remaining in each map to obtain the fused map of the space, the map fusion method further includes: judging whether an overlapping area exists between any two maps; if so, removing the map data of the overlapping area from either one of the two maps; if not, executing the step of fusing the map data remaining in each map to obtain the fused map of the space. This implementation improves the accuracy of overlap detection.
In addition, acquiring the N maps of the space specifically includes: acquiring a second image data set of the space; and building N maps of the space from the acquired second image data set through the simultaneous localization and mapping technique.
In addition, the second image data in the second image data set are arranged in acquisition order, and building the N maps of the space from the acquired second image data set through the simultaneous localization and mapping technique specifically includes the following steps: letting i = 1 and k = 1; reading the i-th second image data; mapping according to the i-th second image data through the simultaneous localization and mapping technique; and judging whether the map being built is interrupted. If it is, the map built so far is stored as the k-th map, and it is judged whether the second image data set has been fully read: if yes, mapping ends; otherwise, i = i + 1 and k = k + 1, and the step of reading the i-th second image data is re-executed. If the map is not interrupted, it is judged whether the second image data set has been fully read: if yes, the map built so far is stored as the k-th map and mapping ends; otherwise, i = i + 1 and the step of reading the i-th second image data is re-executed.
In addition, the first image data set of the space corresponding to a map is the second image data set, or is composed of the second image data used for building that map. In this implementation, the amount of data to collect is reduced.
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which the figures of the drawings are not to be taken in a limiting sense, unless otherwise indicated.
Fig. 1 is a flowchart of a map fusion method of a first embodiment of the present invention;
fig. 2 is a flow chart of a method of creating a map according to a first embodiment of the present invention;
fig. 3 is a flowchart of a map fusion method according to a second embodiment of the present invention;
fig. 4 is a schematic diagram of a map data optimization process of a second embodiment of the present invention;
fig. 5 is a schematic structural view of a map fusion apparatus according to a third embodiment of the present invention;
fig. 6 is a schematic structural view of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the invention clearer, the embodiments of the invention are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will understand, however, that numerous technical details are set forth in the various embodiments in order to provide a better understanding of the present application; the technical solutions claimed in the present application can still be implemented without these technical details, and with various changes and modifications based on the following embodiments.
The first embodiment of the present invention relates to a map fusion method applied to an electronic device, which may be any of various terminals or servers, for example a robot. As shown in fig. 1, the map fusion method includes:
step 101: n maps of the same space are acquired.
Specifically, N is an integer greater than 1. The N maps of the same space may be a multi-section map created from a second image data set acquired by a robot during one journey. While the robot travels, if the camera moves violently, the ambient illumination changes greatly or the feature points are sparse, an "interruption" can occur during VSLAM mapping, so that a multi-section map is formed.
In one example, the electronic device acquires a second image data set of the space, and builds N maps of the space from it through the simultaneous localization and mapping technique.
It should be noted that, in practical application, the electronic device may also obtain the map stored by other devices to perform the map fusion operation mentioned in this embodiment, and this embodiment does not limit the source of the map.
In one example, the second image data in the second image data set are arranged in acquisition order. The second image data set mainly comprises spatial image data and direction sensor data. The spatial image data mainly include each frame of image information and the corresponding timestamp information. The direction sensor information may be used to assist mapping, and may be odometer information or inertial measurement unit (IMU) information. The odometer information mainly comprises physical output, Euler angles and the corresponding timestamp information. The IMU information mainly includes acceleration, angular velocity and the timestamp corresponding to each frame of data. The electronic device processes the acquired second image data with the vSLAM technique to obtain the timestamp, physical output and Euler angle information corresponding to each key frame. During mapping, if visual information is lost (that is, the illumination changes greatly, the motion is violent or the environment's feature points are sparse), the existing vSLAM map data are stored and the system is re-initialized to build a new map. The process by which the electronic device creates a map is shown in fig. 2 and includes the following steps:
Step 201: a second image dataset of the space is acquired.
Specifically, the electronic device may travel one full circuit along a preset trajectory and, during the travel, capture the second image data of the space, recording the image data of the space along the route together with the direction sensor data for use in creating the map.
Step 202: let i=1, k=1.
Step 203: and reading the ith second image data.
Specifically, the electronic device sequentially reads information such as the feature points and descriptors in the second image data acquired during operation, so as to create the map of the space from the second image data.
Step 204: and according to the ith second image data, performing mapping by using a real-time positioning and mapping technology.
Step 205: and judging whether the established map is interrupted or not.
Specifically, if the mapping is interrupted, that is, if visual tracking fails during map building, step 206 is performed; otherwise, step 209 is performed. Since the stability of VSLAM mapping is affected by the surrounding environment and the camera motion state, severe lighting changes, weak texture (a white wall, etc.) or violent camera motion (fast rotation, etc.) may cause visual tracking to fail, i.e. an "interruption".
Step 206: taking the map established at this time as a kth map, and storing the kth map.
Specifically, if visual tracking fails, the electronic device creates a map from the second image data spanning from the first frame after the last visual-tracking failure up to the i-th second image data.
Step 207: and judging whether the second image data set is read completely or not.
Specifically, if the electronic device has already mapped from all the second image data, i.e. the second image data set has been fully read, the mapping process ends; otherwise, step 208 is executed.
Step 208: let i=i+1, k=k+1. Step 203 is then performed.
Specifically, the electronic device continues to build the next map using VSLAM technology based on the remaining second image data.
Step 209: and judging whether the second image data set is read completely or not.
If yes, go to step 210, otherwise, go to step 211.
Step 210: taking the map established at this time as a kth map, and storing the kth map. The flow is then ended.
Step 211: let i=i+1. Step 203 is then performed.
By performing the above steps, the electronic device can create N maps of the space.
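The loop of steps 201 to 211 can be sketched as follows; `extend_map` and `is_interrupted` are hypothetical stand-ins for the SLAM internals (one grows the current map with an image, the other reports a visual-tracking break), so this is a structural sketch rather than a working SLAM system.

```python
def build_maps(second_image_dataset, extend_map, is_interrupted):
    """Sketch of the mapping loop of fig. 2 under the stated assumptions."""
    maps, current = [], []
    for image in second_image_dataset:    # steps 203-204: read and map
        current = extend_map(current, image)
        if is_interrupted(current):       # step 205: mapping interrupted?
            maps.append(current)          # step 206: store the k-th map
            current = []                  # re-initialize for a new map
    if current:                           # steps 209-210: dataset exhausted
        maps.append(current)
    return maps
```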
Step 102: and judging whether an overlapping area exists between two adjacent maps.
Specifically, if there is an overlapping area between two adjacent maps, step 103 is executed, otherwise, step 104 is executed.
In one example, the N maps of the space are arranged in the chronological order of their creation. When maps are created through the simultaneous localization and mapping technique, the electronic device maps in the order of the timestamps of the acquired second image data; when the N maps of the space are arranged chronologically, two adjacent maps are the (m−1)-th map and the m-th map, m being an integer greater than 1.
Step 103: map data of an overlapping area in any one of two adjacent maps in which the overlapping area exists is removed. Step 105 is then performed.
Specifically, in the process of mapping with VSLAM, the pose obtained by tracking contains some error. As the path continues, the error of each frame propagates to the following frames, so the pose error of the last frame in the world coordinate system can be very large. Besides adjusting the pose locally and globally with an optimization method, loop closure may also be used to optimize the pose. To enable loop detection, data of repeated path areas often need to be collected during acquisition, so as to raise the probability of detecting a loop during VSLAM mapping. Thus, the maps often include repeated paths. Moreover, in some spaces, implementing global mapping and positioning inevitably requires repeating some paths in the motion trajectory, for example when the paths in the space form a grid like the Chinese character "田" ("field"). Mapping the map data of the several maps one by one into a unified map coordinate system maintains the integrity and continuity of the map of the space, facilitates positioning of robots or pedestrians, and enables map fusion. However, since repeated path regions may occur between the maps, directly fusing the maps without any processing leads to a series of problems when the fused map is used for positioning:
(1) Because two adjacent maps may share a repeated path area, directly fusing all trajectory data of every map causes the same physical point to be mapped to different coordinates in the fused map, affecting positioning accuracy and the navigation effect.
(2) VSLAM positioning obtains camera pose information mainly by extracting environmental feature information and matching it against prior map information. When the camera is near a position where the map was interrupted, its position can be obtained from either of the maps acquired before and after the interruption, and the two positions differ, causing the positioning result to jump in that area and affecting the navigation of pedestrians or robots.
In this embodiment, before the electronic device fuses the maps of the space, it judges whether an overlapping area exists between two adjacent maps; if so, the map data of the overlapping area is removed from one of the two maps, reducing the situation where the same map point has two different pieces of position information in the fused map and thereby reducing jumps of the positioning result.
Step 104: map data of two adjacent maps are retained.
Specifically, if no overlapping area exists between the two adjacent maps, then when they are mapped into the same map no position carries two pieces of position information, so the map data of both maps can be retained.
Step 105: and fusing the remaining map data in each map to obtain a fused map of the space.
Specifically, map data remaining in each map is mapped to the same coordinate system, that is, the same map, to obtain a fused map.
In one example, the map of the space is a visual map (vSLAM map), and the process of fusing the map data remaining in each map includes the following sub-steps:
for each map, the following operations are performed:
step 1051: and acquiring the direction sensor data corresponding to each key frame of the map according to the time stamp alignment mode. Since the frame rate of the data acquired by the direction sensor is far greater than the key frame rate in the vsram map, the data output by the direction sensor is more compact. Because the situation of 'interruption' may exist in the visual mapping process, the direction sensor data serial number corresponding to each key frame of the vsam map, the direction sensor data serial number corresponding to the starting frame of the vsam map and the direction sensor serial number corresponding to the ending key frame of the vsam map are mainly obtained.
Step 1052: vSLAM yaw-angle substitution. Because "interruptions" may occur or loop detection may be absent during vSLAM mapping, the visual output contains errors in physical scale and orientation. If an interruption occurs during mapping, then because vSLAM poses are relative, the pose is recomputed after each interruption and re-initialization, causing discontinuities in direction. Owing to the continuity and stability of direction-sensor acquisition, the sensor data can replace the yaw angle obtained by vSLAM mapping, reducing directional error when there is no loop closure while keeping the direction continuous throughout the vSLAM mapping process.
Step 1053: determine the coordinates of the map's starting key frame in the fused map; then calculate the coordinates of the map's key frames in the fused map from the coordinates of the starting key frame in the fused map and the fused-map coordinates corresponding to the direction sensor data of each key frame.
Specifically, the coordinates corresponding to the map's starting key frame are mapped into the fused map to determine the starting key frame's coordinates there. If the map is the first one mapped into the fused map, the coordinates of its starting key frame can be determined directly from the mapping result. If it is the k-th map mapped into the fused map and an interruption occurred, the coordinates of its starting key frame in the fused map are calculated with equations (1) and (2):
x′=x+d*cos(θ′) (1)
y′=y+d*sin(θ′) (2)
where (x, y) are the coordinates in the fused map of the last key frame of the (k−1)-th map, d is the distance between the ending key frame of the (k−1)-th map and the starting key frame of the k-th map, determined from the direction sensor data corresponding to those two key frames, θ′ is the direction-corrected angle of the map, and (x′, y′) are the coordinates in the fused map of the starting key frame of the k-th map.
After the coordinates of the map's starting key frame in the fused map are known, the coordinates of the map's other key frames in the fused map can be calculated from the direction sensor data corresponding to each key frame, using a principle similar to equations (1) and (2).
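Equations (1) and (2) describe one dead-reckoning step; a direct sketch (the function name is illustrative):

```python
import math

def next_keyframe_coords(x, y, d, theta):
    """Equations (1) and (2): given a key frame's fused-map coordinates
    (x, y), the sensor-derived distance d to the next key frame and the
    corrected heading theta', compute the next key frame's coordinates."""
    return x + d * math.cos(theta), y + d * math.sin(theta)
```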
During mapping an interruption may occur, so that there are two maps and the map of the space between them is lost. In this case, suppose the coordinates corresponding to the last key frame of the first map are (x, y); over the lost segment the direction sensor moves through n displacements, the distances between successive frames of sensor data are d′_0, d′_1, ..., d′_n, and the direction-corrected angles of the map at each displacement are θ′_0, θ′_1, ..., θ′_n. The coordinates corresponding to the direction sensor data at each displacement can then be determined. For the first displacement distance d′_0, the direction-sensor coordinates are calculated using formulas (3) and (4):

x′_0 = x + d′_0 * cos(θ′_0) (3)
y′_0 = y + d′_0 * sin(θ′_0) (4)

where (x, y) are the coordinates corresponding to the last key frame of the first map, (x′_0, y′_0) are the fused-map coordinates corresponding to the direction sensor data at the first displacement, d′_0 is the first displacement distance, and θ′_0 is the direction-corrected angle of the map at the first displacement.
Similarly, for the n-th displacement distance d′_n, the direction-sensor coordinates are calculated using formulas (5) and (6):

x′_n = x′_(n-1) + d′_n * cos(θ′_n) (5)
y′_n = y′_(n-1) + d′_n * sin(θ′_n) (6)

where (x′_(n-1), y′_(n-1)) are the coordinates corresponding to the direction sensor data at the (n−1)-th displacement, (x′_n, y′_n) are the coordinates corresponding to the direction sensor data at the n-th displacement, d′_n is the n-th displacement distance, and θ′_n is the direction-corrected angle of the map at the n-th displacement.
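The chained recovery of coordinates across the lost segment, per formulas (3) through (6), can be sketched as follows (function name is illustrative):

```python
import math

def bridge_lost_segment(x, y, distances, angles):
    """Starting from the last key frame (x, y) of the first map, chain the
    direction-sensor displacements d'_0..d'_n with corrected angles
    theta'_0..theta'_n to recover fused-map coordinates across the segment
    where the visual map was lost."""
    coords = []
    for d, theta in zip(distances, angles):
        x = x + d * math.cos(theta)   # formulas (3) and (5)
        y = y + d * math.sin(theta)   # formulas (4) and (6)
        coords.append((x, y))
    return coords
```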
It should be noted that, when there is no loop closure, the map drifts in scale, and the map coordinates can be determined using formulas (7), (8) and (9):

x_w = x_n + ε * d * cos(θ′) (7)
y_w = y_n + ε * d * sin(θ′) (8)
ε = d_t / l (9)

where (x_n, y_n) are the coordinates corresponding to the previous key frame of the map, (x_w, y_w) are the coordinates in the fused map, d is the distance between two adjacent key frames of the map, ε is the distance scale factor, d_t is the sensor movement distance, l is the visual movement distance, and θ′ is the direction-corrected angle of the map.
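A direct sketch of the scale correction in formulas (7) through (9) (function name is illustrative):

```python
import math

def scale_corrected_coords(x_n, y_n, d, d_t, l, theta):
    """Without loop closure the visual map drifts in scale, so the visual
    step d between adjacent key frames is rescaled by the distance scale
    factor eps = d_t / l (sensor distance over visual distance) before
    being accumulated into the fused-map coordinates."""
    eps = d_t / l                           # formula (9)
    x_w = x_n + eps * d * math.cos(theta)   # formula (7)
    y_w = y_n + eps * d * math.sin(theta)   # formula (8)
    return x_w, y_w
```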
It should be noted that, as those skilled in the art will understand, the electronic device may merge multiple maps into any one map, or may merge the remaining map data of multiple maps into a new map.
It should be noted that, as those skilled in the art will understand, in practical application, the map fusion may be implemented in other ways, which are not listed here.
In one example, after judging whether an overlapping area exists between two adjacent maps, and before fusing the map data remaining in each map to obtain the fused map of the space, the electronic device judges whether an overlapping area exists between any two maps; if so, it removes the map data of the overlapping area from either one of the two maps; if not, it executes the step of fusing the map data remaining in each map to obtain the fused map of the space.
It is worth mentioning that the electronic device detects whether each map overlaps any of the other maps and, upon finding an overlap, deletes the map data of the overlapping area from either of the two overlapping maps. This further optimizes the map data used to obtain the fused map, further reduces the situation where the same map point carries two pieces of position information, and further reduces jumps of the positioning result.
The foregoing is merely illustrative, and is not intended to limit the technical aspects of the present invention.
Compared with the prior art, the map fusion method provided in this embodiment optimizes the map data of the several maps of a space, before fusing them, according to whether overlapping areas exist among them, removing the map data of any overlapping area from one of the maps. This reduces the situation where the same map point has two different pieces of position information in the fused map, and further reduces jumps of the positioning result.
A second embodiment of the present invention relates to a map fusion method. In this embodiment, taking two adjacent maps, the m-th map and the t-th map, as an example, the way in which the electronic device of the first embodiment determines whether an overlapping area exists between two adjacent maps is described.
Specifically, as shown in fig. 3, the present embodiment includes steps 301 to 309, wherein steps 301, 306, 308, and 309 are substantially the same as steps 101, 103 to 105 in the first embodiment, and are not repeated here. The differences are mainly described below:
step 301: n maps of the same space are acquired.
Step 302: a first image dataset of a space corresponding to an mth map is acquired.
Specifically, m and t are positive integers not greater than N, and m ≠ t.
In one example, the N maps are arranged in chronological order, and the travel direction when the electronic device collects the first image data set is the same as the travel direction when the maps were built. Since the t-th map is adjacent to the m-th map, the constraint relation between t and m is: t = m + 1 or t = m - 1.
Case A: when 0 < m < N, t = m + 1, and when m = N, t = 1; that is, the t-th map is the map after the m-th map. In this case, the latter segment of the m-th map's path is more likely to overlap the front segment of the t-th map's path, so the electronic device may arrange the first image data in the first image data set in reverse shooting order, that is, the earlier a frame was shot, the later it is arranged.
Case B: when 1 < m ≤ N, t = m - 1, and when m = 1, t = N; that is, the t-th map is the map before the m-th map. In this case, the front segment of the m-th map's path is more likely to overlap the latter segment of the t-th map's path, so the electronic device may arrange the first image data in the first image data set in forward shooting order, that is, the earlier a frame was shot, the earlier it is arranged.
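The two ordering cases can be sketched as follows; the function name and the list representation of the image data are illustrative assumptions:

```python
def order_first_images(first_images, m, t, n):
    """Order the first image data so that the frames most likely to
    overlap the t-th map are tested first.

    first_images is assumed to be in shooting order (earliest first);
    m and t are 1-based map indexes among n chronologically ordered maps.
    """
    t_follows_m = (t == m + 1) or (m == n and t == 1)   # case A
    if t_follows_m:
        # the rear of the m-th path likely meets the front of the t-th
        # path, so test the later-shot frames first (reverse order)
        return list(reversed(first_images))
    # case B: t precedes m, the front segments likely meet, so keep
    # forward shooting order (earliest frame first)
    return list(first_images)
```

With three maps, comparing map 1 against map 2 (case A) tests the latest frames first, while comparing map 2 against map 1 (case B) tests the earliest frames first.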
It is worth mentioning that the first image data corresponding to the higher-probability segment of the path is located and tested first, so that when an overlapping area exists it can be found more quickly.
It should be noted that, in practical applications, a developer may specify whether any two of the N maps are adjacent, and input an instruction file describing the relationships among the N maps into the electronic device; the electronic device then determines the relationship between any two of the N maps according to the instruction file.
In one example, the first image dataset of the space corresponding to the map is the second image dataset, or the first image dataset of the space corresponding to the map is composed of the second image dataset used to build the map.
It is worth mentioning that reusing the image data collected during mapping to test whether two adjacent maps overlap reduces the amount of data that must be collected.
It should be noted that, in practical application, the first image data set may be a spatial image data set acquired again after the mapping, and the relationship between the first image data set and the second image data set is not limited in this embodiment.
Step 303: and sequentially inputting the first image data in the first image data set into the first positioning model until the first positioning model is successfully positioned based on the input first image data.
Specifically, the first positioning model performs positioning according to the input first image data and the map data of the m-th map. The electronic device reads the first image data in the first image data set in sequence and feeds each item into the first positioning model. If the first positioning model positions successfully, the position of the map point at which that first image data was shot can be located using the m-th map; at this point it must be judged whether the same map point can also be located using the t-th map, in order to determine whether an overlapping area exists.
Step 304: and inputting the first image data successfully positioned into a second positioning model, and judging whether the second positioning model is successfully positioned.
Specifically, the second positioning model performs positioning according to the input image data and the map data of the t-th map. If the second positioning model positions successfully, the position of the map point at which the first image data was shot can also be located using the t-th map, and step 305 is performed. If the second positioning model fails to position, step 307 is performed.
It should be noted that, as those skilled in the art will appreciate, the first positioning model and the second positioning model may be algorithm models based on VSLAM technology, and may be derived from the same model. For example, when a VSLAM-based algorithm model loads the m-th map it acts as the first positioning model, and when it loads the t-th map it acts as the second positioning model. In this embodiment the two names merely distinguish which map is loaded; the models are not necessarily two completely unrelated models, and this embodiment does not limit the relationship between the first positioning model and the second positioning model.
Step 305: and determining that an overlap area exists between the mth map and the tth map.
Specifically, since both the t-th map and the m-th map can locate the map point, an overlapping area exists between the m-th map and the t-th map.
Step 306: map data of an overlapping area in any one of two adjacent maps in which the overlapping area exists is removed. Step 309 is then performed.
Specifically, since both the t-th map and the m-th map can locate the map point, if the t-th map and the m-th map were directly fused into the same map, a jump would occur when that map point is located. The electronic device therefore optimizes the map data of the t-th map or of the m-th map, removing the map data of the overlapping area from one of them, so as to reduce jumps in the positioning result.
In one example, map data of a map includes position information of map points and coordinate indexes of the map points. The process of removing map data of the overlapping area by the electronic device is as follows: acquiring a coordinate index of a map point obtained by positioning of the second positioning model; according to the coordinate index of the map points obtained by positioning, determining the coordinate index of the overlapped map points in the t-th map; and deleting the coordinate index of the overlapped map points in the t-th map and the position information of the map points corresponding to the coordinate index of the overlapped map points in the t-th map.
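The deletion step described above can be sketched as follows, assuming (for illustration only) that a map's data is a dict mapping coordinate index to the map point's position information:

```python
def remove_overlap(map_data, overlap_indexes):
    """Delete the overlapped coordinate indexes, together with the
    position information of the corresponding map points.

    map_data        : dict coordinate_index -> position, e.g. (x, y)
    overlap_indexes : indexes located by the second positioning model
    """
    for idx in overlap_indexes:
        # drop both the index and its position info; ignore indexes
        # that are not present in this map
        map_data.pop(idx, None)
    return map_data
```

Because the index and its position information live in one entry, a single deletion removes both, matching the two-part deletion described in the text.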
In this embodiment, the optimization of the map data is illustrated by taking the map data of the overlapping area in the t-th map as an example; in practical applications, the map data of the overlapping area in the m-th map may be deleted instead, with reference to the related content, which is not repeated here.
Step 307: and determining that the m-th map and the t-th map have no overlapping area.
Specifically, if the second positioning model cannot position based on the first image data that the first positioning model positioned successfully, the map point corresponding to that first image data is not overlapped. Since the map points corresponding to the first image data arranged after it have an even smaller probability of overlapping, when the second positioning model fails on the successfully positioned first image data, the electronic device determines that the m-th map and the t-th map have no overlapping area.
It is worth mentioning that ending the test of subsequent first image data once the map point corresponding to the first image data with the highest overlap probability proves not to overlap reduces the computation of the electronic device and speeds up map fusion.
In one example, after the second positioning model fails to position based on first image data that the first positioning model positioned successfully, the electronic device may also continue to input the remaining first image data into the first positioning model in sequence, until the first image data set is exhausted or a first image data item is found on which the second positioning model positions successfully. If first image data exists on which both the first positioning model and the second positioning model position successfully, an overlapping area is determined to exist; otherwise, it is determined that no overlapping area exists between the m-th map and the t-th map.
It is worth mentioning that testing all the first image data improves the accuracy with which the electronic device judges whether the maps overlap.
Step 308: map data of two adjacent maps are retained.
Specifically, if two adjacent maps are not overlapped, it is indicated that the map data of the two adjacent maps are all available data, so that the map data of the two adjacent maps are retained.
Step 309: and fusing the remaining map data in each map to obtain a fused map of the space.
The process by which the electronic device optimizes the N maps (i.e., removes the map data of the overlapping areas) is exemplified below in conjunction with a scene.
Assume that the mapping of a certain space yields N maps in total, and that the number of first image data items in the first image data set is M. The electronic device's optimization of the map data of the N maps includes the following steps, as shown in fig. 4:
step 401: let m=n.
Step 402: and loading map data of an mth map.
Step 403: let i=1.
Step 404: and reading the ith first image data, and performing visual positioning based on the ith first image data. The electronic device performs positioning based on the i-th first image data and the m-th map.
Step 405: and judging whether the positioning is successful or not. If the positioning fails, step 406 is executed, and if the positioning is successful, step 407 is executed.
Step 406: let i = i + 1, and judge whether i is greater than M. If i > M, go to step 415; if i ≤ M, go to step 404.
Step 407: the frame number i of the first image data is recorded.
Step 408: it is determined whether m-1 is equal to 0. If yes, go to step 409, otherwise, go to step 410.
Step 409: loading map data of an Nth map. Step 411 is then performed.
Step 410: and loading map data of the m-1 th map.
Step 411: visual localization is performed based on the i-th first image data. Specifically, the electronic device performs positioning based on the i-th first image data and the loaded map.
Step 412: and judging whether the positioning is successful or not. If the positioning fails, go to step 413, and if the positioning is successful, go to step 414.
Step 413: map data of the m-1 st map is retained. Step 416 is then performed.
Step 414: the coordinate index u of the located map point is recorded.
Step 415: retain, in the map data of the (m-1)-th map, the related data of the map points with coordinate indexes 0 to u, and delete the related data of the map points with coordinate indexes u to U; where u is the coordinate index recorded in step 414, U is the total number of coordinate indexes of the (m-1)-th map, and the related data includes position information.
Step 416: whether m is equal to 1 is determined, if yes, the flow is ended, and if not, step 417 is executed.
Step 417: let m=m-1. Step 402 is then performed.
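The Fig. 4 loop can be sketched as follows, under the assumptions that each map is a dict from coordinate index to position information and that a hypothetical `locate(map_data, frame)` returns the coordinate index of the located map point, or None on failure; indexing is 0-based here, and the branch where no frame localizes simply keeps the previous map unchanged:

```python
def optimize_maps(maps, first_images, locate):
    """Sketch of steps 401-417: for each map m (from the last to the
    first), find a frame it can localize, try the same frame against
    the preceding map (wrapping to the last map), and on success trim
    the preceding map's overlapping tail of coordinate indexes."""
    n = len(maps)
    for m in range(n - 1, -1, -1):                  # m = N ... 1
        prev = m - 1 if m > 0 else n - 1            # steps 408-410
        hit = None
        for frame in first_images:                  # steps 403-406
            if locate(maps[m], frame) is not None:  # first model succeeds
                hit = locate(maps[prev], frame)     # steps 411-412, 414
                break
        if hit is not None:
            # step 415: keep indexes before the located one, delete
            # the overlapping tail (boundary treatment assumed)
            maps[prev] = {i: p for i, p in maps[prev].items() if i < hit}
        # else step 413: keep maps[prev] unchanged
    return maps
```

For instance, with two maps whose coordinate index 1 is mutually localizable, the earlier map loses its tail from index 1 onward while the later map is kept whole.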
The foregoing is merely illustrative, and is not intended to limit the technical aspects of the present invention.
Compared with the prior art, in the map fusion method provided in this embodiment, before multiple maps of a space are fused, the map data of the maps are optimized according to whether overlapping areas exist among them, so that the map data of the overlapping area in one of any two overlapping maps is removed. This reduces the situation in which the same map point in the fused map carries two different pieces of position information, and thus reduces jumps in the positioning result.
The above division of the method into steps is for clarity of description; when implemented, steps may be combined into one or split into multiple steps, and as long as the same logical relationship is included, they fall within the protection scope of this patent. Adding insignificant modifications to, or introducing insignificant designs into, the algorithm or flow without altering its core design also falls within the protection scope of this patent.
A third embodiment of the present invention relates to an electronic device, as shown in fig. 5, including: an acquisition module 501, an optimization module 502 and a fusion module 503. The acquisition module 501 is configured to acquire N maps of the same space, N being an integer greater than 1. The optimization module 502 is configured to judge whether an overlapping area exists between two adjacent maps; if yes, to remove the map data of the overlapping area in either one of the two adjacent maps having the overlapping area; if not, to retain the map data of the two adjacent maps. The fusion module 503 is configured to fuse the remaining map data in each map to obtain a fused map of the space.
It is to be noted that this embodiment is a system example corresponding to the first embodiment, and can be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment, and in order to reduce repetition, a detailed description is omitted here. Accordingly, the related art details mentioned in the present embodiment can also be applied to the first embodiment.
It should be noted that each module in this embodiment is a logic module; in practical applications, a logic unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, units not closely related to solving the technical problem presented by the present invention are not introduced in this embodiment, which does not mean that no other units exist in this embodiment.
A fourth embodiment of the present invention relates to an electronic apparatus, as shown in fig. 6, including: at least one processor 601; and a memory 602 communicatively coupled to the at least one processor 601; the memory 602 stores instructions executable by the at least one processor 601, and the instructions are executed by the at least one processor 601, so that the at least one processor 601 can execute the map fusion method according to the above embodiment.
The electronic device includes one or more processors 601 and a memory 602; one processor 601 is illustrated in fig. 6. The processor 601 and the memory 602 may be connected by a bus or in another manner; connection by a bus is illustrated in fig. 6. The memory 602, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the maps in the embodiments of the present application. The processor 601 performs the various functional applications and data processing of the device, i.e., implements the map fusion method described above, by running the non-volatile software programs, instructions, and modules stored in the memory 602.
The memory 602 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store a list of options, etc. In addition, the memory 602 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 602 may optionally include memory located remotely from the processor 601; such remote memory may be connected to an external device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 602 that, when executed by the one or more processors 601, perform the map fusion method of any of the method embodiments described above.
The above product can perform the method provided by the embodiments of the present application and has the corresponding functional modules and beneficial effects for performing the method. For technical details not described in detail in this embodiment, refer to the method provided by the embodiments of the present application.
A fifth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program implements the above-described method embodiments when executed by a processor.
That is, those skilled in the art will understand that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing related hardware; the program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the invention and that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (10)

1. A map fusion method, applied to an electronic device, comprising:
acquiring N maps of the same space; n is an integer greater than 1;
judging whether an overlapping area exists between two adjacent maps or not; the overlapping area is used for indicating repeated path areas in two adjacent maps;
if yes, removing map data of the overlapping area in any one of two adjacent maps with the overlapping area;
if not, reserving map data of two adjacent maps;
fusing the remaining map data in each map to obtain a fused map of the space;
wherein said fusing map data remaining in each of said maps comprises:
for each of the maps, acquiring direction sensor data corresponding to each key frame of the map by time-stamp alignment;
replacing a yaw angle obtained by vSLAM mapping with the direction sensor data; and
determining the coordinates of the initial key frame of the map in the fused map, and calculating the coordinates of the key frames of the map in the fused map according to the coordinates of the initial key frame of the map in the fused map and the direction sensor data corresponding to the key frames of the map.
2. The map fusion method according to claim 1, wherein the two adjacent maps are an m-th map and a t-th map, m and t are positive integers not greater than N, and m ≠ t;
the judging whether an overlapping area exists between two adjacent maps specifically comprises:
acquiring a first image dataset of the space corresponding to the mth map;
sequentially inputting first image data in the first image data set into a first positioning model until the first positioning model is successfully positioned based on the input first image data; the first positioning model is used for positioning according to the input first image data and the map data of the m-th map;
inputting the first image data successfully positioned into a second positioning model, and judging whether the second positioning model is successfully positioned; the second positioning model is used for positioning according to the input image data and the map data of the t-th map;
and if so, determining that an overlapping area exists between the m-th map and the t-th map.
3. The map fusion method according to claim 2, wherein the N maps are arranged in chronological order, and a travel direction when the electronic device collects the first image data set is the same as a travel direction when the map is built;
the constraint relation of m and t is as follows: when 0 < m < N, t = m + 1, and when m = N, t = 1; and the first image data in the first image data set are arranged in reverse shooting order; or,
the constraint relation of m and t is as follows: when 1 < m ≤ N, t = m - 1, and when m = 1, t = N; and the first image data in the first image data set are arranged in forward shooting order.
4. The map fusion method according to claim 2, wherein the removing of the map data of the overlapping area in any one of the two adjacent maps in which the overlapping area exists, specifically comprises:
acquiring a coordinate index of a map point obtained by positioning the second positioning model;
according to the coordinate index of the map points obtained by positioning, determining the coordinate index of the overlapped map points in the t-th map;
and deleting the coordinate index of the overlapped map points in the t-th map and the position information of the map points corresponding to the coordinate index of the overlapped map points in the t-th map.
5. The map fusion method according to any one of claims 1 to 4, wherein after the judging whether an overlapping area exists between two adjacent maps, and before the fusing of the remaining map data in each map to obtain a fused map of the space, the map fusion method further comprises:
judging whether an overlapping area exists between any two maps or not;
if yes, removing map data of an overlapping area in any one of the two maps with the overlapping area;
and if not, executing the step of fusing the remaining map data in each map to obtain a fused map of the space.
6. The map fusion method according to claim 1, wherein the acquiring N maps of the same space specifically includes:
acquiring a second image dataset of the space;
and according to the acquired second image data set of the space, establishing N maps of the space through an instant positioning and mapping technology.
7. The map fusion method of claim 6, wherein the second images in the second image dataset are placed in order of acquisition;
the method for establishing N maps of the space through the instant positioning and mapping technology according to the acquired second image data set of the space specifically comprises the following steps:
let i=1, k=1;
reading an ith second image data;
according to the i-th second image data, performing mapping through the instant positioning and mapping technology;
judging whether the mapping is interrupted;
if yes, taking the map established this time as a k-th map, and storing the k-th map; judging whether the second image data set has been completely read; if yes, ending the mapping; otherwise, letting i = i + 1 and k = k + 1, and re-executing the step of reading the i-th second image data;
if not, judging whether the second image data set has been completely read; if yes, taking the map established this time as the k-th map, storing the k-th map, and ending the mapping; otherwise, letting i = i + 1, and re-executing the step of reading the i-th second image data.
8. The map fusion method according to claim 6, wherein the first image data set of the space to which the map corresponds is the second image data set, or the first image data set of the space to which the map corresponds is composed of the second image data set used for creating the map.
9. An electronic device, comprising: at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the map fusion method of any one of claims 1 to 8.
10. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the map fusion method of any one of claims 1 to 8.
CN201910700973.9A 2019-07-31 2019-07-31 Map fusion method, electronic device and storage medium Active CN110415174B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910700973.9A CN110415174B (en) 2019-07-31 2019-07-31 Map fusion method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN110415174A CN110415174A (en) 2019-11-05
CN110415174B true CN110415174B (en) 2023-07-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant