CN114092655A - Map construction method, device, equipment and storage medium - Google Patents

Map construction method, device, equipment and storage medium

Info

Publication number
CN114092655A
CN114092655A (application CN202111316167.5A)
Authority
CN
China
Prior art keywords
map
image
point cloud
point cloud structure
point of interest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111316167.5A
Other languages
Chinese (zh)
Inventor
谢日旭
王明晖
赵铮
魏晓林
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202111316167.5A
Publication of CN114092655A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/08 Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Abstract

The application discloses a method, a device, equipment and a storage medium for constructing a map, belonging to the technical field of map processing. The method comprises the following steps: acquiring a first image set comprising a plurality of images of the area for which a map is to be constructed; labeling points of interest in the first image set; performing three-dimensional reconstruction based on the labeled first image set and obtaining a first point cloud structure from the reconstruction result; obtaining a first map of the area based on the first point cloud structure; and obtaining, based on the first map, a second map on which point-of-interest labeling is complete. With this method, only images need to be collected when constructing a map; no base map is relied upon, so an up-to-date map can be constructed accurately even when the latest base map is unavailable. Moreover, because point-of-interest labeling is completed on the images before reconstruction, subsequently labeling the map only requires projecting those points onto it, making point-of-interest labeling low in cost and high in efficiency.

Description

Map construction method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of map processing, in particular to a method, a device, equipment and a storage medium for constructing a map.
Background
With the development of technology, building structures and their distribution have become increasingly complex, and people often cannot find their destinations smoothly. To help people locate a destination accurately and quickly even in an unfamiliar environment, maps are constructed and used to guide users.
In the related art, a map is commonly constructed either with an RTK (Real-Time Kinematic) network model, or by obtaining a building base map and constructing an initial map from it; Point of Interest (POI) positions are then marked manually on the initial map to complete the construction.
Both approaches have drawbacks: running an RTK network model is expensive, while a building base map is often out of date, so a map constructed from it has low accuracy. In addition, manually marking POIs on the initially constructed map is costly and inefficient.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a storage medium for constructing a map, which can be used for solving the problems in the related art. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a method for constructing a map, where the method includes:
acquiring a first image set, wherein the first image set comprises a plurality of images of an area of a map to be constructed;
marking interest points in the first image set;
performing three-dimensional reconstruction on the basis of the first image set marked with the interest points, and obtaining a first point cloud structure according to a reconstruction result;
obtaining a first map of the area based on the first point cloud structure;
and acquiring a second map with the interest point labeling completed based on the first map.
In a possible implementation manner, the three-dimensional reconstruction based on the first image set after the interest point is labeled, and obtaining a first point cloud structure according to a reconstruction result includes:
extracting the image characteristics of each image in the first image set after the interest points are marked;
performing image feature pairing based on the image features of the images to obtain an image pair;
and performing three-dimensional reconstruction based on the image pair, and acquiring a first point cloud structure according to a reconstruction result.
In a possible implementation manner, the performing three-dimensional reconstruction based on the image pair and obtaining a first point cloud structure according to a reconstruction result includes:
obtaining the positional relationship between each single image in the labeled first image set and the image pair, the positional relationship being expressed as coordinate points in the same coordinate system;
acquiring a second point cloud structure based on the position relation;
iteratively optimizing the second point cloud structure to obtain a third point cloud structure with a controllable error range;
filtering point clouds reconstructed based on wrong image feature matching in the third point cloud structure;
and in response to an end condition being met, taking the filtered third point cloud structure as the first point cloud structure, wherein the end condition is that the positional relationships between all images in the labeled first image set and the image pairs have been obtained.
In a possible implementation manner, the obtaining a positional relationship between a single image in the first image set after the annotating the interest point and the image pair includes:
acquiring a three-dimensional point cloud structure based on the initialized image pair;
iteratively optimizing the three-dimensional point cloud structure;
filtering the point cloud reconstructed based on the wrong image feature pairing in the optimized three-dimensional point cloud structure;
and performing incremental registration against the preprocessed image pair based on the image pairs in the labeled first image set, and acquiring, based on the filtered and optimized three-dimensional point cloud structure, the positional relationships between the images in the labeled first image set and that structure.
In one possible implementation, the obtaining a first map of the area based on the first point cloud structure includes:
acquiring a first plane where an image acquisition device is located based on a pose sequence of the image acquisition device, wherein the pose sequence is acquired in the process of performing three-dimensional reconstruction based on a first image set after the interest points are marked;
and projecting the pose sequence of the first point cloud structure and the image acquisition device to the first plane, and acquiring the first map according to a projection result.
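The plane-fitting and projection steps above can be sketched as follows. The patent does not specify how the first plane is fitted to the pose sequence; a least-squares plane obtained via SVD is a common choice, so the sketch assumes it (the function names `fit_plane` and `project_to_plane` are illustrative, not from the patent).

```python
import numpy as np


def fit_plane(points):
    """Least-squares plane through 3D points (e.g. camera positions).

    Returns (centroid, normal, basis): SVD of the centred points gives
    the normal as the direction of least variance, and `basis` is a
    (2, 3) array of orthonormal in-plane axes.
    """
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    return centroid, Vt[2], Vt[:2]


def project_to_plane(points, centroid, basis):
    """2D map coordinates of 3D points in the fitted plane's frame."""
    return (np.asarray(points, float) - centroid) @ basis.T
```

Projecting both the point cloud and the pose sequence through `project_to_plane` yields the flat map the projection result describes.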
In a possible implementation manner, after the obtaining of the second map with the completed interest point labeling based on the first map, the method further includes:
when the regional structure changes, the map is reconstructed.
In a possible implementation manner, after the obtaining of the second map with the completed interest point labeling based on the first map, the method further includes:
when the interest point is changed and the area structure is not changed, acquiring a second image set, and marking the interest point in the second image set, wherein the second image set comprises an image of the interest point changed area and an image of the interest point unchanged area, and the interest point unchanged area is connected with the interest point changed area;
performing three-dimensional reconstruction based on the second image set marked with the interest points and the first point cloud structure, and acquiring a fourth point cloud structure according to a reconstruction result;
and updating the map based on the fourth point cloud structure.
In a possible implementation manner, the updating the map based on the fourth point cloud structure includes:
pruning replacement candidate frames in a fourth point cloud structure, the replacement candidate frames being determined based on a threshold;
projecting the fourth point cloud structure with the deleted replacement candidate frame to the first plane, and acquiring a third map based on a projection result;
and acquiring a fourth map based on the third map, wherein the fourth map is a map with interest point labeling completed.
In another aspect, an apparatus for constructing a map is provided, the apparatus comprising:
the map construction method comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a first image set, and the first image set comprises a plurality of images of an area of a map to be constructed;
the marking module is used for marking the interest points in the first image set;
the first reconstruction module is used for performing three-dimensional reconstruction on the basis of the first image set marked with the interest points and obtaining a first point cloud structure according to a reconstruction result;
a second obtaining module, configured to obtain a first map of the area based on the first point cloud structure;
and the third acquisition module is used for acquiring a second map which finishes the interest point annotation based on the first map.
In one possible implementation manner, the first reconstruction module includes:
the extraction unit is used for extracting the image characteristics of each image in the first image set after the interest points are marked;
the characteristic matching unit is used for matching image characteristics based on the image characteristics of the images to obtain an image pair;
and the three-dimensional reconstruction unit is used for performing three-dimensional reconstruction based on the image pair and acquiring a first point cloud structure according to a reconstruction result.
In one possible implementation manner, the three-dimensional reconstruction unit includes:
the position relation obtaining subunit is configured to obtain a position relation between a single image in the first image set after the interest point is marked and the image pair, where the position relation is represented in a coordinate point form in the same coordinate system;
a point cloud structure obtaining subunit, configured to obtain a second point cloud structure based on the position relationship;
the optimization subunit is used for iteratively optimizing the second point cloud structure to obtain a third point cloud structure with a controllable error range;
the filtering subunit is used for filtering point clouds reconstructed based on the wrong image feature pairs in the third point cloud structure;
and the determining subunit is configured to, in response to an end condition being met, use the filtered third point cloud structure as the first point cloud structure, where the end condition is that the positional relationships between all images in the labeled first image set and the image pairs have been acquired.
In a possible implementation manner, the position relation obtaining subunit is configured to: obtain a three-dimensional point cloud structure based on the initialized image pair; iteratively optimize the three-dimensional point cloud structure; filter out, from the optimized structure, point clouds reconstructed from wrong image feature pairings; and perform incremental registration against the preprocessed image pair based on the image pairs in the labeled first image set, acquiring the positional relationships between the images in the labeled first image set and the filtered, optimized three-dimensional point cloud structure.
In a possible implementation manner, the second obtaining module is configured to obtain a first plane where an image acquisition device is located based on a pose sequence of the image acquisition device, where the pose sequence is obtained in a process of performing three-dimensional reconstruction based on a first image set after the labeled interest point; and projecting the pose sequence of the first point cloud structure and the image acquisition device to the first plane, and acquiring the first map according to a projection result.
In one possible implementation, the apparatus further includes:
and the building module is used for reconstructing the map when the regional structure is changed.
In one possible implementation, the apparatus further includes:
the acquisition module is used for acquiring a second image set and marking the interest points in the second image set when the interest points are changed and the area structure is not changed, wherein the second image set comprises images of interest point changed areas and images of interest point unchanged areas, and the interest point unchanged areas are connected with the interest point changed areas;
the second reconstruction module is used for performing three-dimensional reconstruction on the basis of the second image set marked with the interest point and the first point cloud structure, and acquiring a fourth point cloud structure according to a reconstruction result; and the updating module is used for updating the map based on the fourth point cloud structure.
In a possible implementation manner, the updating module is configured to prune replacement candidate frames in the fourth point cloud structure, the replacement candidate frames being determined based on a threshold; project the fourth point cloud structure, with the replacement candidate frames pruned, onto the first plane and acquire a third map from the projection result; and acquire, based on the third map, a fourth map on which point-of-interest labeling is complete.
In another aspect, a computer device is provided, which includes a processor and a memory, wherein at least one computer program is stored in the memory, and the at least one computer program is loaded by the processor and executed to enable the computer device to implement any one of the above methods for constructing a map.
In another aspect, a computer-readable storage medium is provided, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor, so as to make a computer implement any one of the above-mentioned methods for constructing a map.
In another aspect, a computer program product or a computer program is also provided, comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to execute any one of the above methods for constructing a map.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
When constructing a map, only images of the target area need to be collected; the map is obtained through three-dimensional reconstruction from the collected images without relying on a base map supplier, so a highly current map can be constructed accurately even when the latest base map is unavailable. Meanwhile, because point-of-interest labeling is completed on the images before three-dimensional reconstruction, subsequently labeling the map only requires the previously labeled points of interest, making map construction low in cost and high in efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a flowchart of a method for constructing a map according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a three-dimensional reconstruction method provided by an embodiment of the present application;
fig. 4 is a flowchart of a method for three-dimensional reconstruction according to an embodiment of the present application;
FIG. 5 is a flow chart for constructing a map according to an embodiment of the present disclosure;
FIG. 6 is a flowchart of a method for updating a map according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an apparatus for constructing a map according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a server provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of an apparatus for constructing a map according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
An embodiment of the present application provides a method for constructing a map, please refer to fig. 1, which shows a schematic diagram of an implementation environment of the method provided in the embodiment of the present application. The implementation environment may include: a terminal 11 and a server 12.
The terminal 11 is installed with an application program capable of acquiring an image, and after the application program acquires the image, the acquired image is sent to the server 12, and the server 12 can use the method provided by the embodiment of the present application to construct a map, and the server 12 stores the map. Alternatively, the terminal 11 may obtain the map from the server 12, and the map is stored by the terminal 11.
Or, the terminal 11 is installed with an application program capable of acquiring an image, and after the application program acquires the image, the terminal 11 constructs a map based on the method provided by the embodiment of the application, and the map is stored by the terminal 11. Alternatively, the server 12 may obtain the map from the terminal 11, and the map is stored by the server 12.
Or, the terminal 11 acquires the image from the server 12, constructs a map based on the method provided by the embodiment of the application, and the terminal 11 stores the map. Alternatively, the server 12 may obtain the map from the terminal 11, and the map is stored by the server 12.
Alternatively, the terminal 11 may be any electronic product capable of performing man-machine interaction with a user through one or more modes of a keyboard, a touch pad, a touch screen, a remote controller, voice interaction or handwriting equipment, such as a PC (Personal Computer), a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a wearable device, a PPC (Pocket PC, palmtop), a tablet Computer, a smart car, a smart television, a smart speaker, and the like. The server 12 may be a server, a server cluster composed of a plurality of servers, or a cloud computing service center. The terminal 11 establishes a communication connection with the server 12 through a wired or wireless network.
It should be understood by those skilled in the art that the above-mentioned terminal 11 and server 12 are only examples, and other existing or future terminals or servers may be suitable for the present application and are included within the scope of the present application and are herein incorporated by reference.
Based on the implementation environment shown in fig. 1, an embodiment of the present application provides a method for constructing a map, as shown in fig. 2, where the method for constructing a map can be executed by a terminal or a server, and for example, the method is applied to a terminal, and the method includes steps 201 to 205.
In step 201, a first image set is acquired, the first image set comprising a plurality of images of an area of a map to be constructed.
In one possible implementation, the obtaining of the first image set includes: and acquiring an image covering the area of the map to be constructed by using an image acquisition device, and taking the acquired image as a first image set.
The embodiment of the application does not limit the image acquisition device, and optionally, the image acquisition device is a device installed and operated with an application program capable of realizing interest point labeling. Alternatively, the image capturing device is a general device capable of image capturing.
In step 202, points of interest in the first image set are labeled.
The points of interest in the first image set can be labeled in any suitable way. For example, if the image acquisition device is one that runs an application program capable of point-of-interest labeling, that application is invoked to complete the labeling during acquisition. If the image acquisition device is an ordinary device that can only capture images, the acquired first image set is input into a first network model, and the points of interest in the first image set are labeled based on that model. The first network model is a network model capable of recognizing text information in images.
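As a concrete illustration of this step, the sketch below models labeled images as plain data structures and treats the "first network model" as an injected text-recognition function. All names here (`PoiAnnotation`, `AnnotatedImage`, `annotate_images`, `recognize`) are hypothetical; the patent does not prescribe any particular interface.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class PoiAnnotation:
    name: str                       # recognized sign/shop text
    bbox: Tuple[int, int, int, int]  # (x, y, w, h) in image pixels


@dataclass
class AnnotatedImage:
    path: str
    pois: List[PoiAnnotation] = field(default_factory=list)


def annotate_images(paths: List[str], recognize: Callable) -> List[AnnotatedImage]:
    """Label points of interest in an image set.

    `recognize` stands in for the text-recognition network model: it maps
    an image path to a list of (text, bbox) detections.
    """
    out = []
    for p in paths:
        img = AnnotatedImage(path=p)
        img.pois = [PoiAnnotation(text, bbox) for text, bbox in recognize(p)]
        out.append(img)
    return out
```

A labeling application on the acquisition device could populate the same structures manually instead of calling a model.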
In step 203, three-dimensional reconstruction is performed based on the first image set after the interest points are labeled, and a first point cloud structure is obtained according to a reconstruction result.
The process of performing three-dimensional reconstruction on the first image set after the interest points are labeled and obtaining a first point cloud structure according to a reconstruction result is shown in fig. 3: extracting image features of all images in the first image set after the interest points are marked; performing image feature pairing based on the image features of the images to obtain an image pair; and performing three-dimensional reconstruction based on the image pair, and acquiring a first point cloud structure according to a reconstruction result.
The type of extracted image feature is not limited. Optionally, feature points and their corresponding descriptors are extracted from each image in the labeled first image set, where a descriptor is a feature vector corresponding to a feature point.
After the image features of each image in the first image set are extracted, image feature pairing can be performed based on those features. In one possible implementation, candidate feature pairs are ranked by a matching degree computed from their descriptors, and the features with the highest matching degree are paired according to the ranking, completing the feature pairing and yielding an image pair.
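A minimal sketch of the descriptor-based pairing described above, assuming Euclidean nearest-neighbour distance with Lowe's ratio test as the "matching degree" ranking (the patent does not name a specific metric):

```python
import numpy as np


def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with a ratio test.

    desc_a, desc_b: (N, D) and (M, D) arrays of feature descriptors.
    Returns a list of (i, j) index pairs: feature i in image A matched
    to feature j in image B. A match is kept only if its distance is
    clearly smaller than the second-best candidate's (ratio test).
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The ratio test already rejects many ambiguous pairings before the geometric verification stage described next.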
In one possible implementation, as shown in fig. 3, after image feature pairing is completed, the image pairs can be geometrically verified. Regarding the manner of geometric verification, optionally the distance between paired features is calculated, and pairs whose distance exceeds a preset value are discarded, thereby optimizing the set of image pairs.
After the image pairs are acquired, three-dimensional reconstruction can be performed based on them, and the first point cloud structure is acquired from the reconstruction result. In one possible implementation, taking the three-dimensional reconstruction scheme of fig. 3 as an example and as shown in fig. 4, the process of obtaining the first point cloud structure by incremental SfM (Structure from Motion) reconstruction includes the following steps 2031-2035.
Step 2031: and acquiring the position relation between a single image in the first image set and the image pair after the interest point is marked, wherein the position relation is embodied in a coordinate point form under the same coordinate system.
As shown in fig. 3, in a possible implementation manner, before the positional relationship between a single image in the labeled first image set and the image pair is obtained, the method further includes initializing an image pair. Regarding how the pair to initialize with is selected, optionally the image pair with the most matched image features is chosen; alternatively, the pair with the highest feature matching quality is chosen. Initializing the image pair determines the coordinate origin and scale of the world coordinate system required for three-dimensional reconstruction.
After the image pair is initialized, it can be input into the incremental reconstruction submodule for preprocessing; the incremental reconstruction submodule is a submodule of the three-dimensional reconstruction model. Illustratively: a three-dimensional point cloud structure is acquired based on the initialized image pair; the structure is iteratively optimized; point clouds reconstructed from wrong image feature pairings are filtered out of the optimized structure; and the remaining images are incrementally registered against the preprocessed image pair based on the image pairs in the labeled first image set, acquiring the positional relationships between the images in the labeled first image set and the filtered, optimized three-dimensional point cloud structure.
The embodiment of the application does not limit the manner of acquiring the three-dimensional point cloud structure from the initialized image pair; in one possible implementation, it is acquired through triangulation.
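Triangulation from two registered views can be sketched with the standard linear (DLT) method; this is one common way to realize the step, not necessarily the patent's exact formulation:

```python
import numpy as np


def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: (3, 4) camera projection matrices.
    x1, x2: 2D observations (u, v) of the same point in each view.
    Each observation contributes two linear constraints on the
    homogeneous point X; the solution is the null vector of A.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector, homogeneous coordinates
    return X[:3] / X[3]        # dehomogenize
```

For example, with one camera at the origin and a second translated one unit along x, a point at depth 5 is recovered from its two projections.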
After the three-dimensional point cloud structure is obtained, it can be iteratively optimized. Optionally, a least-squares optimization problem is constructed and the three-dimensional point cloud error is reduced by iterative computation until the error range of the structure is controllable, yielding a higher-quality structure. After iterative optimization, the structure can be filtered to remove three-dimensional points triangulated from wrong image feature pairings. During filtering, the wrong feature pairing relationships themselves are also discarded, which improves the accuracy of the point cloud structure reconstructed from feature pairings in subsequent incremental reconstruction.
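The filtering step can be illustrated by discarding points whose reprojection error exceeds a threshold, a standard proxy for points triangulated from wrong feature pairings (the 2-pixel default below is an assumed value; the patent does not specify one):

```python
import numpy as np


def reprojection_error(P, X, x):
    """Pixel distance between the projection of 3D point X under P and observation x."""
    h = P @ np.append(X, 1.0)
    return np.linalg.norm(h[:2] / h[2] - np.asarray(x, float))


def filter_points(points, observations, cameras, max_err=2.0):
    """Keep indices of points whose reprojection error is below max_err in every view.

    points:       (N, 3) array of 3D points.
    observations: observations[i] is a list of (camera_index, (u, v)) pairs.
    cameras:      list of (3, 4) projection matrices.
    """
    keep = []
    for i, X in enumerate(points):
        errs = [reprojection_error(cameras[c], X, uv) for c, uv in observations[i]]
        if max(errs) < max_err:
            keep.append(i)
    return keep
```

In a full pipeline the feature pairings behind the discarded points would be dropped as well, as the text describes.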
After the image pair is preprocessed, the images in the first image set can be input into the incremental reconstruction submodule for incremental registration. In one possible implementation, incremental registration is implemented with a PnP (Perspective-n-Point) solver based on the positions of the feature points in the image pair and the feature matching relationships between each input image and the image pair, exploiting the motion relationship between two and three dimensions. The positions of the feature points in the image pair are obtained while preprocessing the image pair; the feature matching relationships between the input images and the image pair are obtained from image feature pairing; and the two-to-three-dimensional motion relationship is obtained while the three-dimensional point cloud structure is built from the preprocessed image pair through pose estimation and triangulation. During incremental registration, the pose information of the image acquisition device is also computed.
Through incremental registration, the positional relationship between the input image and the image pair is acquired, expressed as coordinate points in the same coordinate system. The coordinate system used for unification is the world coordinate system determined when initializing the image pair.
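The 2D–3D registration step above can be illustrated with a small sketch. The snippet below is a minimal DLT (Direct Linear Transform) camera resection — the linear core of solving the two-dimensional-to-three-dimensional motion relationship — and is not the patent's actual implementation; production pipelines typically run a calibrated PnP solver inside a RANSAC loop. All function names here are illustrative:

```python
import numpy as np

def resection_dlt(points3d, points2d):
    """DLT camera resection: estimate a 3x4 projection matrix P from at
    least six 3D-2D correspondences in general position. Each
    correspondence contributes two rows of a homogeneous system A p = 0."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        Xh = [X, Y, Z, 1.0]
        rows.append(Xh + [0.0] * 4 + [-u * w for w in Xh])  # u constraint
        rows.append([0.0] * 4 + Xh + [-v * w for w in Xh])  # v constraint
    # Solution = right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Project a 3D point with a 3x4 projection matrix."""
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]
```

For noiseless correspondences the recovered matrix reprojects every input point exactly (up to numerical precision), which is why a handful of reliable 2D–3D matches suffices to register a new image against the existing point cloud.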
Step 2032: and acquiring a second point cloud structure based on the position relation.
The embodiment of the application does not limit the manner of obtaining the second point cloud structure. In a possible implementation manner, the second point cloud structure is obtained from the positional relationship by triangulation.
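As a hedged illustration of the triangulation mentioned above, the following is a textbook linear (DLT) triangulation of one point from two views. The patent does not specify its exact method; the projection matrices are assumed known from registration:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover one 3D point from its 2D
    observations x1, x2 in two views with projection matrices P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],   # two rows from the first view
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],   # two rows from the second view
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution = right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]       # dehomogenize
```

With more than two observing views, the same construction simply stacks two rows per view into A.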
Step 2033: and iteratively optimizing the second point cloud structure to obtain a third point cloud structure with a controllable error range.
Regarding the manner of iteratively optimizing the second point cloud structure: optionally, a least-squares optimization problem is constructed, and the second point cloud structure is optimized by iterative computation. In the iterative optimization process, the second point cloud structure is refined into a third point cloud structure with a controllable error range and higher quality, and the pose sequence of the image acquisition device calculated during incremental registration is also optimized.
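The least-squares iterative optimization can be sketched as follows. This is a deliberately simplified, point-only Gauss-Newton refinement with fixed camera poses; a full bundle adjustment, which the patent's description suggests (it optimizes the pose sequence too), solves for points and poses jointly. All names are illustrative:

```python
import numpy as np

def refine_point(Ps, observations, X0, iters=15):
    """Gauss-Newton refinement of one 3D point: minimize the sum of
    squared reprojection errors over all views observing it."""
    X = np.asarray(X0, dtype=float)

    def proj(P, Xp):
        h = P @ np.append(Xp, 1.0)
        return h[:2] / h[2]

    for _ in range(iters):
        residuals, jac = [], []
        for P, x_obs in zip(Ps, observations):
            residuals.append(proj(P, X) - x_obs)
            eps = 1e-6  # forward-difference numeric Jacobian wrt the point
            jac.append(np.column_stack([
                (proj(P, X + eps * np.eye(3)[k]) - proj(P, X)) / eps
                for k in range(3)
            ]))
        r = np.concatenate(residuals)
        J = np.vstack(jac)
        step = np.linalg.lstsq(J, -r, rcond=None)[0]  # Gauss-Newton step
        X = X + step
        if np.linalg.norm(step) < 1e-12:
            break
    return X
```

Real systems use analytic Jacobians and a sparse solver (e.g. Schur-complement bundle adjustment) so that thousands of points and poses can be optimized together.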
Step 2034: and filtering the point cloud reconstructed based on the wrong image feature pairing in the third point cloud structure.
Besides filtering out the point cloud reconstructed from wrong image feature pairings in the third point cloud structure, the wrong image feature pairing information can also be filtered out, which improves the image feature matching quality in the subsequent three-dimensional reconstruction process and the accuracy of the point cloud structure obtained from those pairings, and reduces the workload of subsequent filtering.
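A minimal version of this reprojection-error filter might look like the following sketch; the threshold value and the track data layout are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def filter_points(points3d, tracks, Ps, max_err=0.01):
    """Keep only 3D points whose reprojection error stays below a
    threshold in every view observing them. Points that violate the
    threshold (and the feature pairings behind them) typically stem
    from wrong image feature matches."""
    kept, bad_tracks = [], []
    for X, track in zip(points3d, tracks):
        errs = []
        for cam_idx, x_obs in track:      # track = [(view index, 2D obs), ...]
            h = Ps[cam_idx] @ np.append(X, 1.0)
            errs.append(np.linalg.norm(h[:2] / h[2] - np.asarray(x_obs)))
        if max(errs) <= max_err:
            kept.append(X)
        else:
            bad_tracks.append(track)      # the wrong pairings are filtered too
    return kept, bad_tracks
```

Returning the offending tracks alongside the surviving points is what lets later reconstruction rounds skip the bad pairings, matching the behavior the text describes.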
Step 2035: and in response to an end condition being met, taking the filtered third point cloud structure as the first point cloud structure, where the end condition is that the positional relationships between the image pair and all images in the first image set after interest-point labeling have been acquired.
In step 204, a first map of the area is obtained based on the first point cloud structure.
The embodiment of the application does not limit the manner of obtaining the first map based on the first point cloud structure, and includes but is not limited to: acquiring a first plane where the image acquisition device is located based on a pose sequence of the image acquisition device, wherein the pose sequence is acquired based on a process of performing three-dimensional reconstruction on a first image set after the interest points are marked; and projecting the pose sequence of the first point cloud structure and the image acquisition device to a first plane, and acquiring a first map according to a projection result.
In a possible implementation manner, the first plane where the image capturing device is located is recovered by fitting a plane to the pose sequence using RANSAC (Random Sample Consensus). After the first plane is obtained, the first point cloud structure and the pose sequence of the image acquisition device can be projected onto the first plane to obtain a two-dimensional point cloud map, which is used as the first map.
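The RANSAC plane fit named above can be sketched as follows; the iteration count and inlier threshold are illustrative choices, not values from the patent:

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.05, rng=None):
    """RANSAC plane fit: repeatedly fit a plane through 3 random sample
    points (e.g. camera positions from the pose sequence) and keep the
    model with the most inliers. Returns (normal, d) for the plane
    n.x + d = 0 and the boolean inlier mask."""
    rng = np.random.default_rng(0) if rng is None else rng
    pts = np.asarray(points, dtype=float)
    best, best_count = (None, None, None), -1
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        nn = np.linalg.norm(n)
        if nn < 1e-12:                 # degenerate (collinear) sample
            continue
        n = n / nn
        d = -n @ sample[0]
        inliers = np.abs(pts @ n + d) < thresh   # point-to-plane distances
        if inliers.sum() > best_count:
            best_count = inliers.sum()
            best = (n, d, inliers)
    return best
```

Because the camera was carried at roughly constant height, most poses lie near one plane, so the consensus plane is robust to the occasional outlier pose.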
During the projection process, the basic topology needs to be determined based on the key frames in the first point cloud structure. The embodiment of the application does not limit how the key frames are determined. Optionally, the image frames registered during three-dimensional reconstruction are the key frames. Optionally, based on the frequency requirement of the three-dimensional reconstruction on image frames, the image frames meeting the frequency requirement are selected as key frames.
By restoring the first plane and projecting the first point cloud structure to the first plane, the error range can be effectively controlled, the accuracy of the map is improved, and meanwhile, the pose sequence is projected to the first plane to determine the pose information of the key frame, such as the direction of the key frame.
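Projecting the point cloud and pose sequence onto the recovered plane amounts to the following orthogonal projection into an in-plane 2D basis (a sketch; the choice of basis is arbitrary and assumed here):

```python
import numpy as np

def project_to_plane(points, n, d):
    """Orthogonally project 3D points onto the plane n.x + d = 0 and
    express them in 2D coordinates of an in-plane basis, yielding the
    two-dimensional point-cloud map."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    # Build an orthonormal basis (u, v) spanning the plane.
    a = np.array([1.0, 0, 0]) if abs(n[0]) < 0.9 else np.array([0, 1.0, 0])
    u = np.cross(n, a); u /= np.linalg.norm(u)
    v = np.cross(n, u)
    pts = np.asarray(points, float)
    foot = pts - np.outer(pts @ n + d, n)   # foot of the perpendicular on the plane
    return np.column_stack([foot @ u, foot @ v])
```

In-plane distances are preserved by the projection, which is what keeps the error range of the resulting 2D map controllable; applying the same projection to the pose sequence places the key frames (and their directions) on the map.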
In step 205, a second map with the interest point labeling completed is obtained based on the first map.
The embodiment of the application does not limit the manner of obtaining, based on the first map, the second map with the interest points marked. Exemplarily, a fifth map is obtained; a map with higher definition is acquired based on the fifth map and the first map; and the interest points are marked on that map to obtain the second map. Optionally, higher definition means that the layout of the map is clearer relative to the first map.
Optionally, the fifth map is a Computer Aided Drafting (CAD) map of the shopping mall. Alternatively, when the coverage of the first map is limited, the fifth map is a map that supplements the areas not covered by the images in the first image set. Taking the example that the first image set is limited to the main lanes of the shopping mall: the obtained first map cannot show the shop areas, so the fifth map is a map capable of displaying those areas.
After the map with higher definition is obtained, the interest points are marked on it to obtain the second map. The embodiment of the application does not limit the manner of marking the interest points. Optionally, the positions of the interest points in the first point cloud structure are determined based on the mapping relationship between two-dimensional image feature points and the three-dimensional point cloud, and the interest points are projected onto the map with higher definition to obtain the second map.
Wherein the mapping relation between the two-dimensional image characteristic points and the three-dimensional point cloud is obtained in image registration.
Besides acquiring a map with higher definition based on the fifth map and marking the interest points on it, when the coverage of the images acquired in the first image set is complete, the first map itself has higher definition, and the interest points can be marked on the first map directly.
In a possible implementation manner, when a point of interest (POI) on the map changes, a user may initiate a map update request, so that the map is updated.
The embodiment of the present application does not limit the manner in which the map is updated. Optionally, when the regional structure changes, the map is reconstructed in the manner of step 201 to step 205.
Optionally, in addition to reconstructing the map, when the interest point is changed and the structure of the area is not changed, acquiring a second image set, and labeling the interest point in the second image set, where the second image set includes an image of the interest point changed area and an image of the interest point unchanged area, and the interest point unchanged area is connected to the interest point changed area; performing three-dimensional reconstruction based on the second image set marked with the interest points and the first point cloud structure, and acquiring a fourth point cloud structure according to a reconstruction result; and updating the map based on the fourth point cloud structure.
The embodiment of the present application does not limit the area range of the images in the second image set. Exemplarily, if the changed interest-point areas are concentrated together, nearby unchanged interest-point areas are selected so as to obtain the second image set with the minimum acquisition amount. For example, when updating an indoor mall map, the images of the shop whose interest point changed and of three unchanged shops near it are acquired. The way in which the points of interest of the second image set are labeled is shown in step 202.
After the second image set is obtained, three-dimensional reconstruction may be performed based on the second image set and the first point cloud structure, and a fourth point cloud structure is obtained according to the reconstruction result. In a possible implementation manner, the second image set on which image feature matching has been completed is input to the incremental reconstruction sub-module, and new point cloud is registered into the first point cloud structure to obtain the fourth point cloud structure, implemented as in steps 2031 to 2035.
After the fourth point cloud structure is obtained, the map may be updated based on the fourth point cloud structure. Regarding the manner of updating the map based on the fourth point cloud structure, including but not limited to: pruning the replacement candidate frames in the fourth point cloud structure, wherein the replacement candidate frames are determined based on a threshold value; projecting the fourth point cloud structure with the deleted replacement candidate frames to the first plane, and acquiring a third map based on the projection result; and acquiring a fourth map based on the third map, wherein the fourth map is the map with the interest point labeling completed.
The replacement candidate frames are the key frames in the first point cloud structure that lie within the threshold distance of the acquisition frames determined during acquisition of the second image set. Since the fourth point cloud structure is obtained by incremental reconstruction based on the first point cloud structure, the replacement candidate frames of the first point cloud structure are also located at the same positions in the fourth point cloud structure. By pruning the replacement candidate frames, the old key frames that are about to change are removed, leaving only the latest key frames. The embodiment of the application does not limit how the threshold is set. Optionally, based on the aisle distance or aisle width of the points of interest in the map to be updated, the aisle distance is used as the threshold; optionally, half of the aisle distance is used as the threshold.
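Selecting replacement candidate frames by distance threshold can be sketched as below; the half-aisle-width threshold is one of the options the text mentions, and the data layout is an assumption for illustration:

```python
import numpy as np

def replacement_candidates(keyframe_positions, new_frame_positions, threshold):
    """Mark old keyframes lying within `threshold` of any newly acquired
    frame as replacement candidates to be pruned (e.g. threshold = half
    the aisle distance in the map being updated)."""
    kf = np.asarray(keyframe_positions, float)
    nf = np.asarray(new_frame_positions, float)
    # Pairwise distances between old keyframes and new acquisition frames.
    dists = np.linalg.norm(kf[:, None, :] - nf[None, :, :], axis=2)
    return dists.min(axis=1) < threshold
```

Tying the threshold to the aisle width keeps pruning local: only old keyframes that a new acquisition frame physically revisits are replaced, while keyframes of distant unchanged areas survive untouched.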
After the replacement candidate frame is deleted, the obtained new three-dimensional point cloud structure can be projected to the first plane to obtain a new two-dimensional point cloud map, and the obtained two-dimensional point cloud map is used as a third map.
After the third map is obtained, a fourth map can be obtained based on the third map, and the fourth map is a map with the interest point labeling completed. A fourth map is obtained based on the third map as in step 204.
In summary, the method for constructing the map provided by the embodiment of the application completes the construction of the map based on the acquired image, does not depend on a base map supplier, and can accurately construct the map even if the latest base map is lacked. Meanwhile, because the light-weight interest point labeling work is completed in the image acquisition process, the construction of the map can be completed only by projecting the interest point on the map when labeling the interest point on the map subsequently, and the map construction has low cost and high efficiency.
When the map is updated, new images can be incrementally registered on the basis of the first point cloud structure. Because unchanged interest-point areas near the changed interest-point areas are selected during acquisition, fast matching can be achieved based on the image features of the unchanged areas, the point cloud structure with changed interest points that needs replacing can be accurately located, and the map can be updated quickly and efficiently. Even if the regional structure changes and incremental registration of new images is no longer applicable, reconstructing the map with the method provided by the embodiment of the application remains low in cost.
In one possible implementation, fig. 5 is a flowchart of constructing a map according to an embodiment of the present application. Taking application of the method to a server as an example, as shown in fig. 5, when the method is used to construct an indoor map of a shopping mall, it includes the following steps.
Step 501: and collecting pictures and marking interest points.
Illustratively, a device with an application program capable of realizing point of interest labeling is used for collecting a plurality of groups of images, the collected images can cover the indoor environment of a shopping mall, and the point of interest labeling is completed through the device in the collecting process.
Step 502: and carrying out three-dimensional reconstruction on the indoor environment of the market by using the picture.
Step 503: and converting the three-dimensional reconstructed point cloud and key frame information into a map.
Step 504: and marking the collected interest points to a map through the key frame poses.
After the construction of the indoor map of the mall is completed, the indoor map can be updated when individual shops in the mall change, as shown in detail in fig. 6.
Step 601: continuously acquiring pictures of the unchanged interest point part and pictures of the changed interest point part.
Illustratively, the collected areas where no store changes have occurred and the areas where store changes have occurred are connected.
Step 602: and registering new acquisition frames in the generated point cloud based on the acquired images, and selecting the key frames within a certain distance threshold as replacement candidate frames.
Exemplarily, the aisle distance of the interest points in the map to be updated is selected as the threshold, and the key frames within this distance threshold of the acquisition frames are used as replacement candidate frames.
Step 603: and updating the point cloud and the key frame information based on the replacement candidate frame, and projecting the updated point cloud and key frame information into the map.
Illustratively, deleting the replacement candidate frame to complete the updating of the point cloud structure and the key frame information, and projecting the updated point cloud structure onto a first plane where the original market map is located to obtain a new two-dimensional point cloud map.
Step 604: and correspondingly processing and updating the interest points according to the marked interest point change forms and the key frame information.
And the updating of the indoor map of the shopping mall is completed by updating the information of the interest points, namely the shops.
Referring to fig. 7, an embodiment of the present application provides an apparatus for constructing a map, including: a first obtaining module 701, an annotating module 702, a first reconstructing module 703, a second obtaining module 704, and a third obtaining module 705.
A first obtaining module 701, configured to obtain a first image set, where the first image set includes a plurality of images of an area of a map to be constructed;
an annotation module 702, configured to annotate the interest points in the first image set;
a first reconstruction module 703, configured to perform three-dimensional reconstruction on the basis of the first image set after the interest point is labeled, and obtain a first point cloud structure according to a reconstruction result;
a second obtaining module 704, configured to obtain a first map of the area based on the first point cloud structure;
the third obtaining module 705 is configured to obtain a second map with the interest point annotation completed based on the first map.
Optionally, the first reconstruction module 703 includes:
the extraction unit is used for extracting the image characteristics of each image in the first image set after the interest point is marked;
the characteristic matching unit is used for matching image characteristics based on the image characteristics of each image to obtain an image pair;
and the three-dimensional reconstruction unit is used for performing three-dimensional reconstruction based on the image pair and acquiring a first point cloud structure according to a reconstruction result.
Optionally, a three-dimensional reconstruction unit, comprising:
the position relation acquiring subunit is used for acquiring the position relation between a single image in the first image set and the image pair after the interest point is marked, wherein the position relation is embodied in a coordinate point form under the same coordinate system;
a point cloud structure obtaining subunit, configured to obtain a second point cloud structure based on the position relationship;
the optimization subunit is used for iteratively optimizing the second point cloud structure and acquiring a third point cloud structure with a controllable error range;
the filtering subunit is used for filtering point clouds reconstructed based on the wrong image feature pairs in the third point cloud structure;
and the determining subunit is configured to, in response to an end condition being met, take the filtered third point cloud structure as the first point cloud structure, where the end condition is that the positional relationships between the image pair and all images in the first image set after interest-point labeling have been acquired.
Optionally, the position relation obtaining subunit is configured to: obtain a three-dimensional point cloud structure based on the initialized image pair; iteratively optimize the three-dimensional point cloud structure; filter out, from the optimized three-dimensional point cloud structure, the point cloud reconstructed from wrong image feature pairings; and, based on the filtered and optimized three-dimensional point cloud structure, perform incremental registration of the images in the first image set after interest-point labeling to acquire the positional relationship between those images and the image pair.
Optionally, the second obtaining module 704 is configured to obtain a first plane where the image capturing device is located based on a pose sequence of the image capturing device, where the pose sequence is obtained in a process of performing three-dimensional reconstruction based on a first image set after the interest point is labeled; and projecting the pose sequence of the first point cloud structure and the image acquisition device to a first plane, and acquiring a first map according to a projection result.
Optionally, the apparatus further comprises:
and the building module is used for reconstructing the map when the regional structure is changed.
Optionally, the apparatus further comprises:
the acquisition module is used for acquiring a second image set and marking the interest points in the second image set when the interest points are changed and the area structure is not changed, wherein the second image set comprises images of the interest point changed area and images of the interest point unchanged area, and the interest point unchanged area is connected with the interest point changed area;
the second reconstruction module is used for performing three-dimensional reconstruction on the basis of the second image set marked with the interest points and the first point cloud structure, and acquiring a fourth point cloud structure according to a reconstruction result;
and the updating module is used for updating the map based on the fourth point cloud structure.
Optionally, the updating module is configured to prune the replacement candidate frames in the fourth point cloud structure, where the replacement candidate frames are determined based on a threshold; project the fourth point cloud structure with the replacement candidate frames pruned onto the first plane, and acquire a third map based on the projection result; and acquire a fourth map based on the third map, where the fourth map is the map with interest-point labeling completed.
The device only needs to collect the images of the area of the map to be constructed in the process of constructing the map, the map can be obtained by three-dimensional reconstruction based on the collected images, a base map supplier is not relied on, and even if the latest base map is lacked, the map with high timeliness can be accurately constructed. Meanwhile, because the work of marking the interest points of the image is finished before the three-dimensional reconstruction, when the interest points of the map are marked subsequently, the construction of the map can be finished only based on the previously marked interest points, and the construction of the map is low in cost and high in efficiency.
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application, where the server may generate relatively large differences due to different configurations or performances, and may include one or more processors (CPUs) 801 and one or more memories 802, where the one or more memories 802 store at least one computer program, and the at least one computer program is loaded and executed by the one or more processors 801, so as to enable the server to implement the method for constructing a map according to the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to perform input/output, and the server may also include other components for implementing the functions of the device, which are not described herein again.
Fig. 9 is a schematic structural diagram of an apparatus for constructing a map according to an embodiment of the present application. The device may be a terminal, for example: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer or a desktop computer. A terminal may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, etc.
Generally, a terminal includes: a processor 901 and a memory 902.
Processor 901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 901 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 902 may include one or more computer-readable storage media, which may be non-transitory. The memory 902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 902 is used for storing at least one instruction, which is used for being executed by the processor 901 to enable the terminal to implement the method for constructing a map provided by the method embodiments in the present application.
In some embodiments, the terminal may further include: a peripheral interface 903 and at least one peripheral. The processor 901, memory 902, and peripheral interface 903 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 904, a display screen 905, a camera assembly 906, an audio circuit 907, a positioning assembly 908, and a power supply 909.
The peripheral interface 903 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 901 and the memory 902. In some embodiments, the processor 901, memory 902, and peripheral interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902 and the peripheral interface 903 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The radio frequency circuit 904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 904 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal to transmit, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 904 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 904 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 905 is a touch display screen, it also has the ability to capture touch signals on or over its surface; the touch signal may be input to the processor 901 as a control signal for processing. At this point, the display screen 905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 905, disposed on the front panel of the terminal; in other embodiments, there may be at least two display screens 905, respectively disposed on different surfaces of the terminal or in a folding design; in still other embodiments, the display screen 905 may be a flexible display, disposed on a curved surface or on a folded surface of the terminal. The display screen 905 may even be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The display screen 905 can be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 906 is used to capture images or video. Optionally, camera assembly 906 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 906 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuit 907 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 901 for processing, or inputting the electric signals to the radio frequency circuit 904 for realizing voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones can be arranged at different parts of the terminal respectively. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 901 or the radio frequency circuit 904 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuit 907 may also include a headphone jack.
The positioning component 908 is used to locate the current geographic location of the terminal to implement navigation or LBS (Location Based Service). The positioning component 908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 909 is used to supply power to each component in the terminal. The power source 909 may be alternating current, direct current, disposable or rechargeable. When power source 909 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal also includes one or more sensors 910. The one or more sensors 910 include, but are not limited to: acceleration sensor 911, gyro sensor 912, pressure sensor 913, fingerprint sensor 914, optical sensor 915, and proximity sensor 916.
The acceleration sensor 911 may detect the magnitude of acceleration on three coordinate axes of a coordinate system established with the terminal. For example, the acceleration sensor 911 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 901 can control the display screen 905 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 911. The acceleration sensor 911 may also be used for acquisition of motion data of a game or a user.
The gyroscope sensor 912 can detect the body direction and the rotation angle of the terminal, and the gyroscope sensor 912 and the acceleration sensor 911 cooperate to acquire the 3D motion of the user on the terminal. The processor 901 can implement the following functions according to the data collected by the gyro sensor 912: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 913 may be disposed on a side frame of the terminal and/or under the display 905. When the pressure sensor 913 is disposed on the side frame of the terminal, the user's holding signal to the terminal may be detected, and the processor 901 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 913. When the pressure sensor 913 is disposed at a lower layer of the display screen 905, the processor 901 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 905. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 914 collects the user's fingerprint; either the processor 901 identifies the user from the fingerprint collected by the fingerprint sensor 914, or the fingerprint sensor 914 itself identifies the user from the collected fingerprint. Upon recognizing the user's identity as trusted, the processor 901 authorizes the user to perform sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 914 may be disposed on the front, back, or side of the terminal. When a physical key or vendor logo (trademark) is provided on the terminal, the fingerprint sensor 914 may be integrated with the physical key or vendor logo.
The optical sensor 915 is used to collect ambient light intensity. In one embodiment, the processor 901 may control the display brightness of the display screen 905 based on the ambient light intensity collected by the optical sensor 915. Specifically, when the ambient light intensity is high, the display brightness of the display screen 905 is increased; when the ambient light intensity is low, the display brightness of the display screen 905 is reduced. In another embodiment, the processor 901 can also dynamically adjust the shooting parameters of the camera assembly 906 according to the ambient light intensity collected by the optical sensor 915.
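The brightness adjustment described above can be sketched as a simple monotone mapping from ambient light to display brightness; all thresholds and the linear form here are invented for illustration and are not from the patent:

```python
# Illustrative linear mapping from ambient light (lux) to a normalized
# display brightness in [min_b, max_b]; the constants are made up.

def display_brightness(lux: float, min_b: float = 0.05,
                       max_b: float = 1.0, max_lux: float = 10000.0) -> float:
    frac = min(max(lux, 0.0), max_lux) / max_lux   # clamp to [0, 1]
    return min_b + (max_b - min_b) * frac

print(display_brightness(0))       # dark room -> dimmest setting
print(display_brightness(20000))   # bright daylight -> clamped to brightest
```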
A proximity sensor 916, also known as a distance sensor, is typically provided on the front panel of the terminal. The proximity sensor 916 collects the distance between the user and the front face of the terminal. In one embodiment, when the proximity sensor 916 detects that this distance gradually decreases, the processor 901 controls the display 905 to switch from the screen-on state to the screen-off state; when the proximity sensor 916 detects that the distance gradually increases, the processor 901 controls the display 905 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 9 does not constitute a limitation of the apparatus and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
In an exemplary embodiment, a computer device is also provided, the computer device comprising a processor and a memory, the memory having at least one computer program stored therein. The at least one computer program is loaded and executed by one or more processors to cause the computer apparatus to perform any of the above-described methods of constructing a map.
In an exemplary embodiment, there is also provided a computer-readable storage medium having at least one computer program stored therein, the at least one computer program being loaded and executed by a processor of a computer device to cause a computer to implement any one of the above-mentioned methods of constructing a map.
In one possible implementation, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform any one of the above-described methods of constructing a map.
It should be understood that "a plurality" herein means two or more. "And/or" describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. A method of constructing a map, the method comprising:
acquiring a first image set, wherein the first image set comprises a plurality of images of an area of a map to be constructed;
marking interest points in the first image set;
performing three-dimensional reconstruction on the basis of the first image set marked with the interest points, and obtaining a first point cloud structure according to a reconstruction result;
obtaining a first map of the area based on the first point cloud structure;
and acquiring a second map with the interest point labeling completed based on the first map.
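The five steps of claim 1 can be read as a pipeline: acquire images, label interest points, reconstruct a point cloud, derive a first map, then produce a labeled second map. A toy end-to-end sketch (every function, data layout, and "reconstruction" here is a hypothetical simplification, not the patent's algorithm):

```python
# Toy pipeline for claim 1: label -> reconstruct -> project -> annotate.

def build_map(images, poi_labels):
    # Step 2: attach interest-point labels to each image.
    labeled = [{"pixels": img, "pois": poi_labels.get(i, [])}
               for i, img in enumerate(images)]
    # Step 3: stand-in "reconstruction" producing one 3D point per image.
    cloud = [(float(i), 0.0, 1.0) for i in range(len(labeled))]
    # Step 4: project the cloud onto a plane to obtain the first map.
    first_map = [(x, y) for x, y, _z in cloud]
    # Step 5: carry the labels onto the map to obtain the second map.
    pois = [p for img in labeled for p in img["pois"]]
    return {"points": first_map, "pois": pois}

m = build_map(["img0", "img1"], {0: ["shop A"], 1: ["cafe B"]})
print(m["pois"])   # the labels survive onto the final map
```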
2. The method of claim 1, wherein the performing three-dimensional reconstruction based on the first image set after the interest point is labeled, and obtaining a first point cloud structure according to a reconstruction result comprises:
extracting the image characteristics of each image in the first image set after the interest points are marked;
performing image feature pairing based on the image features of the images to obtain an image pair;
and performing three-dimensional reconstruction based on the image pair, and acquiring a first point cloud structure according to a reconstruction result.
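The pairing step of claim 2 groups images whose extracted features match. A minimal sketch in which "features" are plain strings and two images become a pair when they share enough of them; a real pipeline would match descriptor vectors (this simplification is an assumption, not the patent's method):

```python
# Minimal sketch of claim 2's feature pairing: images whose feature sets
# overlap by at least min_shared features form an "image pair".

from itertools import combinations

def make_image_pairs(features_per_image, min_shared=2):
    pairs = []
    for (i, fi), (j, fj) in combinations(enumerate(features_per_image), 2):
        if len(set(fi) & set(fj)) >= min_shared:
            pairs.append((i, j))
    return pairs

feats = [["corner1", "edge3", "blob7"],
         ["corner1", "edge3", "blob9"],
         ["blob2"]]
print(make_image_pairs(feats))   # only images 0 and 1 share enough features
```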
3. The method of claim 2, wherein the three-dimensional reconstruction based on the image pair, and obtaining a first point cloud structure from the reconstruction result comprises:
obtaining a positional relationship between a single image in the first image set after interest-point labeling and the image pair, the positional relationship being expressed as coordinate points in a common coordinate system;
acquiring a second point cloud structure based on the position relation;
iteratively optimizing the second point cloud structure to obtain a third point cloud structure with a controllable error range;
filtering point clouds reconstructed based on wrong image feature matching in the third point cloud structure;
and in response to an end condition being met, taking the filtered third point cloud structure as the first point cloud structure, the end condition being that the positional relationships between all images in the first image set after interest-point labeling and the image pairs have been obtained.
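Claim 3 describes a loop: register images, optimize iteratively, filter points that came from wrong feature matches, and stop once every image is placed. The loop shape can be sketched with toy numbers (the residual scaling, tolerance, and point format are invented for illustration):

```python
# Toy version of claim 3's loop: each image contributes (x, y, z, residual)
# points; "optimization" shrinks residuals, and points whose residual stays
# large (suggesting a wrong feature match) are filtered out.

def incremental_reconstruct(observations, error_tol=0.5):
    cloud = []
    for obs in observations:            # register one image at a time
        cloud.extend(obs)
        # iterative optimization, condensed to one residual-shrinking pass
        cloud = [(x, y, z, r * 0.1) for x, y, z, r in cloud]
        # drop points reconstructed from wrong feature matches
        cloud = [p for p in cloud if p[3] <= error_tol]
    return cloud                        # end condition: all images registered

obs = [[(0, 0, 0, 0.1), (1, 1, 1, 10.0)],   # second point is a mismatch
       [(2, 2, 2, 0.2)]]
print(len(incremental_reconstruct(obs)))    # the mismatched point is gone
```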
4. The method of claim 3, wherein obtaining the positional relationship between a single image in the first image set after the point of interest is labeled and the image pair comprises:
acquiring a three-dimensional point cloud structure based on the initialized image pair;
iteratively optimizing the three-dimensional point cloud structure;
filtering the point cloud reconstructed based on the wrong image feature pairing in the optimized three-dimensional point cloud structure;
and performing incremental registration on the remaining image pairs in the first image set after interest-point labeling, and acquiring, based on the filtered and optimized three-dimensional point cloud structure, the positional relationship between each image in the first image set after interest-point labeling and the filtered and optimized three-dimensional point cloud structure.
5. The method of claim 3, wherein obtaining the first map of the area based on the first point cloud structure comprises:
acquiring a first plane where an image acquisition device is located based on a pose sequence of the image acquisition device, wherein the pose sequence is acquired in the process of performing three-dimensional reconstruction based on a first image set after the interest points are marked;
and projecting the pose sequence of the first point cloud structure and the image acquisition device to the first plane, and acquiring the first map according to a projection result.
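Claim 5 derives the map by finding the plane the image-acquisition device moved in and projecting both the point cloud and the pose sequence onto it. A toy stand-in in which the "plane" is simply the mean height of the pose sequence and projection is orthographic (both simplifications are assumptions, not the patent's fitting method):

```python
# Sketch of claim 5: "fit" a plane from the device's pose sequence and
# drop the point cloud and the poses onto it orthographically.

def project_to_pose_plane(cloud, poses):
    plane_z = sum(p[2] for p in poses) / len(poses)   # toy plane height
    drop_z = lambda pt: (pt[0], pt[1])                # orthographic projection
    return {"plane_z": plane_z,
            "map_points": [drop_z(p) for p in cloud],
            "track": [drop_z(p) for p in poses]}

result = project_to_pose_plane([(1, 2, 5), (3, 4, 6)],
                               [(0, 0, 1.5), (1, 0, 1.7)])
print(result["map_points"])   # the 2D points of the first map
```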
6. The method according to any one of claims 1-5, wherein after the second map with the interest point label is obtained based on the first map, the method further comprises:
reconstructing the map when the structure of the area changes.
7. The method according to any one of claims 1-5, wherein after the second map with the interest point label is obtained based on the first map, the method further comprises:
when the interest points change and the structure of the area does not, acquiring a second image set and labeling interest points in the second image set, wherein the second image set comprises images of the area where the interest points changed and images of an area where the interest points are unchanged, the unchanged area adjoining the changed area;
performing three-dimensional reconstruction based on the second image set marked with the interest points and the first point cloud structure, and acquiring a fourth point cloud structure according to a reconstruction result;
and updating the map based on the fourth point cloud structure.
8. The method of claim 7, wherein updating the map based on the fourth point cloud structure comprises:
deleting replacement candidate frames from the fourth point cloud structure, the replacement candidate frames being determined based on a threshold;
projecting the fourth point cloud structure with the deleted replacement candidate frame to the first plane, and acquiring a third map based on a projection result;
and acquiring a fourth map based on the third map, wherein the fourth map is a map with interest point labeling completed.
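The first step of claim 8 selects replacement candidate frames by threshold and removes them before re-projection. A sketch with an invented per-frame score field (the scoring criterion is not specified in the patent, so this is purely illustrative):

```python
# Sketch of claim 8's pruning step: frames whose score falls below a
# threshold are treated as replacement candidates and deleted.

def prune_replacement_candidates(frames, threshold=0.5):
    return [f for f in frames if f["score"] >= threshold]

frames = [{"id": 0, "score": 0.9}, {"id": 1, "score": 0.2}]
print(prune_replacement_candidates(frames))   # frame 1 is dropped
```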
9. An apparatus for constructing a map, the apparatus comprising:
a first acquisition module, configured to acquire a first image set, wherein the first image set comprises a plurality of images of an area for which a map is to be constructed;
the marking module is used for marking the interest points in the first image set;
the first reconstruction module is used for performing three-dimensional reconstruction on the basis of the first image set marked with the interest points and obtaining a first point cloud structure according to a reconstruction result;
a second obtaining module, configured to obtain a first map of the area based on the first point cloud structure;
and a third acquisition module, configured to acquire, based on the first map, a second map on which interest-point labeling is completed.
10. A computer device comprising a processor and a memory, the memory having stored therein at least one computer program, the at least one computer program being loaded and executed by the processor to cause the computer device to carry out a method of constructing a map as claimed in any one of claims 1 to 8.
11. A computer-readable storage medium, in which at least one computer program is stored, which is loaded and executed by a processor, to cause a computer to implement a method of constructing a map as claimed in any one of claims 1 to 8.
12. A computer program product comprising a computer program or instructions for execution by a processor to cause a computer to implement a method of constructing a map as claimed in any one of claims 1 to 8.
CN202111316167.5A 2021-11-08 2021-11-08 Map construction method, device, equipment and storage medium Pending CN114092655A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111316167.5A CN114092655A (en) 2021-11-08 2021-11-08 Map construction method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114092655A true CN114092655A (en) 2022-02-25

Family

ID=80299283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111316167.5A Pending CN114092655A (en) 2021-11-08 2021-11-08 Map construction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114092655A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116541574A (en) * 2023-07-07 2023-08-04 湖北珞珈实验室 Intelligent extraction method, device, storage medium and equipment for map sensitive information
CN116541574B (en) * 2023-07-07 2023-10-03 湖北珞珈实验室 Intelligent extraction method, device, storage medium and equipment for map sensitive information

Similar Documents

Publication Publication Date Title
CN108682036B (en) Pose determination method, pose determination device and storage medium
CN108537845B (en) Pose determination method, pose determination device and storage medium
CN108682038B (en) Pose determination method, pose determination device and storage medium
WO2019233229A1 (en) Image fusion method, apparatus, and storage medium
CN110986930B (en) Equipment positioning method and device, electronic equipment and storage medium
CN111768454B (en) Pose determination method, pose determination device, pose determination equipment and storage medium
CN110134744B (en) Method, device and system for updating geomagnetic information
CN109166150B (en) Pose acquisition method and device storage medium
CN109862412B (en) Method and device for video co-shooting and storage medium
WO2022042425A1 (en) Video data processing method and apparatus, and computer device and storage medium
CN111897429A (en) Image display method, image display device, computer equipment and storage medium
CN112052354A (en) Video recommendation method, video display method and device and computer equipment
WO2022199102A1 (en) Image processing method and device
CN111928861B (en) Map construction method and device
CN114092655A (en) Map construction method, device, equipment and storage medium
CN111754564B (en) Video display method, device, equipment and storage medium
CN110990728A (en) Method, device and equipment for managing point of interest information and storage medium
CN113535039B (en) Method and device for updating page, electronic equipment and computer readable storage medium
CN111369684B (en) Target tracking method, device, equipment and storage medium
CN111583339A (en) Method, device, electronic equipment and medium for acquiring target position
CN112163062A (en) Data processing method and device, computer equipment and storage medium
CN111984755A (en) Method and device for determining target parking point, electronic equipment and storage medium
CN111539794A (en) Voucher information acquisition method and device, electronic equipment and storage medium
CN113592874A (en) Image display method and device and computer equipment
CN111158791A (en) Configuration file updating method, device and system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination