CN114241039A - Map data processing method and device, storage medium and electronic equipment

Info

Publication number
CN114241039A
Authority
CN
China
Prior art keywords: dimensional, map, data, image, matching
Legal status: Pending
Application number
CN202111520245.3A
Other languages
Chinese (zh)
Inventor
高爽
李姬俊男
郭彦东
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111520245.3A
Publication of CN114241039A

Classifications

    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general)
    • G06F 16/29: Information retrieval; geographical information databases (G06F 16/20: structured data, e.g. relational data)
    • G06F 18/22: Pattern recognition; analysing; matching criteria, e.g. proximity measures
    • G06F 18/25: Pattern recognition; analysing; fusion techniques


Abstract

The disclosure provides a map data processing method and apparatus, a storage medium, and an electronic device, relating to the technical field of computer vision. The method comprises the following steps: acquiring first map data and second map data, where the first map data comprises a set of first three-dimensional points, a plurality of first images, and first pose data corresponding to the first images, and the second map data comprises a set of second three-dimensional points, a plurality of second images, and second pose data corresponding to the second images; generating a first map; generating a second map; placing the first map and the second map in the same coordinate system, and determining a third matching relationship between the first pose data and the second pose data according to the positional relationship between the first map and the second map; and optimizing at least one of the first pose data and the second pose data based on the third matching relationship, and fusing the first map data and the second map data according to the optimized pose data to obtain third map data. The method enables a three-dimensional point cloud map to be updated and adjusted quickly and accurately.

Description

Map data processing method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a map data processing method, a map data processing apparatus, a computer-readable storage medium, and an electronic device.
Background
With the rapid development of computer vision, three-dimensional maps are widely used in scenes such as visual navigation, mobile robots, and automatic driving. A three-dimensional map is generally obtained by acquiring images of a scene and performing three-dimensional reconstruction based on the acquired images. In practical applications, visual localization is often performed long after the images were acquired and the map was built. As time passes, however, the scene may change, for example through construction, posters, seasonal changes, or weather changes, so that the three-dimensional map information becomes outdated and positioning accuracy drops or positioning fails. Therefore, the three-dimensional map needs to be updated continuously to reduce the impact on visual positioning.
In the prior art, when a three-dimensional map is fused and updated, a scene image is usually re-acquired at a fixed position, new point cloud information is determined, and the old point cloud information is replaced with the new point cloud information. However, this method places high requirements on the newly acquired images and the new point cloud information: the two acquisition positions must be exactly the same, and the area in which the point cloud information is updated cannot be too large. The efficiency and accuracy of map data processing are therefore low, and the method is difficult to apply to more general scenarios in which three-dimensional maps are fused and updated.
Disclosure of Invention
The present disclosure provides a map data processing method, a map data processing apparatus, a computer-readable storage medium, and an electronic device, so as to mitigate, at least to some extent, the low efficiency and accuracy of map data processing in the prior art when a three-dimensional map is fused and updated.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a map data processing method, including: acquiring first map data of a first scene and second map data of a second scene associated with the first scene, where the first map data comprises a set of first three-dimensional points, a plurality of first images, and first pose data corresponding to the first images, and the second map data comprises a set of second three-dimensional points, a plurality of second images, and second pose data corresponding to the second images; generating a first map according to the first pose data and a first matching relationship between the first images; generating a second map according to the second pose data and a second matching relationship between the second images; placing the first map and the second map in the same coordinate system, and determining a third matching relationship between the first pose data and the second pose data according to the positional relationship between the first map and the second map; and optimizing at least one of the first pose data and the second pose data based on the third matching relationship, and fusing the first map data and the second map data according to the optimized pose data to obtain third map data.
According to a second aspect of the present disclosure, there is provided a map data processing apparatus, including: a map acquisition module, configured to acquire first map data of a first scene and second map data of a second scene associated with the first scene, where the first map data comprises a set of first three-dimensional points, a plurality of first images, and first pose data corresponding to the first images, and the second map data comprises a set of second three-dimensional points, a plurality of second images, and second pose data corresponding to the second images; a map generation module, configured to generate a first map according to the first pose data and a first matching relationship between the first images, and generate a second map according to the second pose data and a second matching relationship between the second images; a relationship determination module, configured to place the first map and the second map in the same coordinate system and determine a third matching relationship between the first pose data and the second pose data according to the positional relationship between the first map and the second map; and a pose optimization module, configured to optimize at least one of the first pose data and the second pose data based on the third matching relationship, and fuse the first map data and the second map data according to the optimized pose data to obtain third map data.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the map data processing method of the first aspect described above and possible implementations thereof.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; a memory for storing executable instructions of the processor. Wherein the processor is configured to execute the map data processing method of the first aspect and possible implementations thereof via execution of the executable instructions.
The technical scheme of the disclosure has the following beneficial effects:
acquiring first map data of a first scene and second map data of a second scene associated with the first scene, where the first map data comprises a set of first three-dimensional points, a plurality of first images, and first pose data corresponding to the first images, and the second map data comprises a set of second three-dimensional points, a plurality of second images, and second pose data corresponding to the second images; generating a first map according to the first pose data and a first matching relationship between the first images; generating a second map according to the second pose data and a second matching relationship between the second images; placing the first map and the second map in the same coordinate system, and determining a third matching relationship between the first pose data and the second pose data according to the positional relationship between the first map and the second map; and optimizing at least one of the first pose data and the second pose data based on the third matching relationship, and fusing the first map data and the second map data according to the optimized pose data to obtain third map data. On one hand, this exemplary embodiment provides a new map data processing method: a first map is generated from the first pose data and the first matching relationship in the first map data, a second map is generated from the second pose data and the second matching relationship in the second map data, a third matching relationship between the first pose data and the second pose data is determined from the viewpoint of topological structure based on the first map and the second map, and the pose data is then optimized according to the third matching relationship. This ensures the validity and reliability of the pose data before the map data is fused, eliminates interference and errors to a certain extent, and thereby ensures the accuracy of the third map data generated by fusing the first map data and the second map data. On the other hand, whenever the second scene is associated with the first scene, this exemplary embodiment can fuse the map data of the first scene with the map data of the second scene. Compared with the prior art, which requires images acquired at identical positions and limits the size of the fusion area, this exemplary embodiment imposes no strict requirements on the acquisition positions of the first images and the second images, can be applied to the fusion of first and second map data in a variety of scenes, and has a wider application range.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
FIG. 1 is a block diagram of an electronic device in the present exemplary embodiment;
FIG. 2 is a flowchart of a map data processing method in the present exemplary embodiment;
FIG. 3 is a schematic diagram of the triangulation principle;
FIG. 4 is a schematic diagram of a first map and a second map in the present exemplary embodiment;
FIG. 5 is a sub-flowchart of a map data processing method in the present exemplary embodiment;
FIG. 6 is another sub-flowchart of a map data processing method in the present exemplary embodiment;
FIG. 7 is yet another sub-flowchart of a map data processing method in the present exemplary embodiment;
FIG. 8 is yet another sub-flowchart of a map data processing method in the present exemplary embodiment;
FIGS. 9-12 are schematic diagrams of traversal of the second map in the present exemplary embodiment;
FIGS. 13-14 are schematic diagrams of unmatched second nodes in the present exemplary embodiment;
FIG. 15 is a still further sub-flowchart of a map data processing method in the present exemplary embodiment;
FIG. 16 is a schematic diagram of a third matching relationship in the present exemplary embodiment;
FIG. 17 is a still further sub-flowchart of a map data processing method in the present exemplary embodiment;
FIG. 18 is a schematic diagram of an application of map data processing in the present exemplary embodiment;
FIG. 19 is a flowchart of a map data processing method in the present exemplary embodiment;
FIG. 20 is a block diagram of a map data processing apparatus according to the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the steps. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Exemplary embodiments of the present disclosure provide a map data processing method. Application scenarios include, but are not limited to, the following. In one scenario, scene images are acquired before a mall environment changes and an original scene map is built; after the environment changes, a segment of scene images that can be successfully localized and a segment that cannot are acquired and reconstructed, yielding a new scene map; the original mall scene map is then updated by fusing the original scene map with the new scene map, so as to adapt to the scene change. In another scenario, images of different sub-scenes of the same target scene are collected and reconstructed separately, and the results are fused to generate map data of the whole target scene, where the different sub-scenes intersect; for example, map data of different areas in a mall, whose scenes overlap, are fused to generate map data of the whole mall.
Exemplary embodiments of the present disclosure also provide an electronic device for implementing the map data processing method. The electronic device may be a terminal or a cloud server, including but not limited to a computer, a smartphone, a wearable device (such as augmented reality glasses), a robot, or an unmanned aerial vehicle. Generally, the electronic device includes a processor and a memory, where the memory is used to store executable instructions of the processor as well as application data such as image data and video data, and the processor is configured to perform the map data processing method via execution of the executable instructions.
The structure of the electronic device is exemplarily described below, taking the mobile terminal 100 in FIG. 1 as an example. It will be appreciated by those skilled in the art that, apart from components specifically intended for mobile purposes, the configuration of FIG. 1 can also be applied to fixed devices.
As shown in fig. 1, the mobile terminal 100 may specifically include: a processor 110, an internal memory 121, an external memory interface 122, a USB (Universal Serial Bus) interface 130, a charging management Module 140, a power management Module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication Module 150, a wireless communication Module 160, an audio Module 170, a speaker 171, a receiver 172, a microphone 173, an earphone interface 174, a sensor Module 180, a display 190, a camera Module 191, an indicator 192, a motor 193, a key 194, and a SIM (Subscriber identity Module) card interface 195.
Processor 110 may include one or more processing units, such as: the Processor 110 may include an AP (Application Processor), a modem Processor, a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband Processor, and/or an NPU (Neural-Network Processing Unit), etc.
The encoder may encode (i.e., compress) image or video data, for example encoding a captured scene image into corresponding code stream data so as to reduce the bandwidth occupied by data transmission; the decoder may decode (i.e., decompress) the code stream data of an image or video to restore the image or video data, for example decoding the code stream data of a scene image to obtain complete image data for the map data processing method of the present exemplary embodiment. The mobile terminal 100 may support one or more encoders and decoders, and can thus process images or video in a variety of encoding formats, such as the image formats JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics), and BMP (Bitmap), and the video formats MPEG-1 and MPEG-2 (Moving Picture Experts Group), H.263, H.264, and HEVC (High Efficiency Video Coding).
In one embodiment, processor 110 may include one or more interfaces through which connections are made to other components of mobile terminal 100.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include volatile memory and nonvolatile memory. The processor 110 executes various functional applications of the mobile terminal 100 and data processing by executing instructions stored in the internal memory 121.
The external memory interface 122 may be used to connect an external memory, such as a Micro SD card, for expanding the storage capability of the mobile terminal 100. The external memory communicates with the processor 110 through an external memory interface 122 to implement data storage functions, such as storing files of images, videos, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may be used to connect a charger to charge the mobile terminal 100, or connect an earphone or other electronic devices.
The charging management module 140 is configured to receive charging input from a charger. While the charging management module 140 charges the battery 142, the power management module 141 may also supply power to the device; the power management module 141 may also monitor the status of the battery.
The wireless communication function of the mobile terminal 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 150 may provide solutions for 2G/3G/4G/5G wireless communication applied on the mobile terminal 100. The wireless communication module 160 may provide wireless communication solutions applied to the mobile terminal 100, including WLAN (Wireless Local Area Network, e.g., Wi-Fi), BT (Bluetooth), GNSS (Global Navigation Satellite System), FM (Frequency Modulation), NFC (Near Field Communication), and IR (Infrared) technology.
The mobile terminal 100 may implement a display function through the GPU, the display screen 190, the AP, and the like, and display a user interface. For example, when the user turns on a photographing function, the mobile terminal 100 may display a photographing interface, a preview image, and the like in the display screen 190.
The mobile terminal 100 may implement a photographing function through the ISP, the camera module 191, the encoder, the decoder, the GPU, the display screen 190, the AP, and the like. For example, the user may start a related service of mapping or visual positioning, trigger the start of the shooting function, and capture a scene image through the camera module 191, and perform positioning.
The mobile terminal 100 may implement an audio function through the audio module 170, the speaker 171, the receiver 172, the microphone 173, the earphone interface 174, the AP, and the like.
The sensor module 180 may include a depth sensor 1801, a pressure sensor 1802, a gyroscope sensor 1803, an air pressure sensor 1804, etc. to implement corresponding sensing detection functions.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc. The motor 193 may generate a vibration cue, may also be used for touch vibration feedback, and the like. The keys 194 include a power-on key, a volume key, and the like.
The mobile terminal 100 may support one or more SIM card interfaces 195 for connecting SIM cards to implement functions such as telephony and mobile communications.
Fig. 2 shows an exemplary flow of a map data processing method, comprising the following steps S210 to S240:
step S210, acquiring first map data of a first scene and second map data of a second scene related to the first scene; the first map data comprises a set of first three-dimensional points, a plurality of first images and first posture data corresponding to the first images, and the second map data comprises a set of second three-dimensional points, a plurality of second images and second posture data corresponding to the second images.
The first scene and the second scene may be any scenes, such as a mall, a street, and the like. The second scene has an association relationship with the first scene, and the association relationship between the second scene and the first scene may mean that the second scene and the first scene have a scene which is partially the same, that is, an intersection of the first scene and the second scene is not empty, for example, the first scene is a mall before decoration of a certain shop, and the second scene is a scene after decoration of the shop, wherein the shops before and after decoration have a scene which is partially the same, or the scenes of other shops except for the decorated shop are the same; or the first scene is a market which is not subjected to scene arrangement before the event is held, the second scene is a market which contains scene arrangement when the event is held or after the event is held, and the scene arrangement areas are partially the same; or a certain mall includes a shop a, a shop B and a shop C, the first scene is a scene including the shop a and the shop B, and the second scene is a scene including the shop B and the shop C.
The first map data comprises the set of first three-dimensional points, the first images, and the first pose data corresponding to the first images. The set of first three-dimensional points refers to three-dimensional point cloud data of the first scene, such as a three-dimensional point cloud map; the first pose data refers to the pose of the camera, such as a rotation matrix or a translation matrix, when each first image was acquired, with different first images corresponding to different first pose data. The second map data comprises the set of second three-dimensional points, the second images, and the second pose data corresponding to the second images, where the set of second three-dimensional points refers to three-dimensional point cloud data of the second scene, the second pose data refers to the pose of the camera when each second image was acquired, and different second images correspond to different second pose data; the second images may include images that were successfully located in the first map data and images that were not. In addition, the first map data may further include information about the feature points in the first images, for example their locations, or description information such as local or global descriptors; the second map data is similar. In this exemplary embodiment, a plurality of first images may be collected for the first scene and a plurality of second images for the second scene, the first scene may be three-dimensionally reconstructed from the first images to obtain the first map data, and the second scene may be three-dimensionally reconstructed from the second images to obtain the second map data. Alternatively, the first map data of the first scene and the second map data of the second scene may be acquired directly from another terminal or from a map data source.
In the present exemplary embodiment, the first map data may be constructed by using an SFM (Structure-From-Motion) algorithm or the like, and by collecting and extracting feature points in an image, performing image pose estimation, point cloud triangulation, global optimization, and other processing procedures, the two-dimensional first image is converted into three-dimensional information in a world coordinate system, so as to perform three-dimensional reconstruction processing on the first scene, and obtain a three-dimensional map of the first scene, which is typically three-dimensional point cloud data. The construction of the second map data is similar to that of the first map data, and the construction process of the first map data will be specifically described below as an example.
The three-dimensional reconstruction of the first scene into first map data may comprise the following processes:
firstly, feature point extraction and feature point matching are required to be carried out on a first image, and an initial first image matching pair is determined.
The feature points are representative, highly recognizable points or regions in the image, such as corners and boundaries. In the first image, gradients at different positions can be detected, and feature points are extracted at positions with larger gradients. Generally, after the feature points are extracted, they need to be described, for example by an array describing the pixel distribution around the feature point, which is called the description information (or descriptor) of the feature point. The description information of the feature points may be regarded as local description information of the first image. The present exemplary embodiment may extract and describe feature points using algorithms such as FAST (Features from Accelerated Segment Test), BRIEF (Binary Robust Independent Elementary Features), ORB (Oriented FAST and Rotated BRIEF), SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), SuperPoint (self-supervised feature point detection and descriptor extraction), and R2D2 (Repeatable and Reliable Detector and Descriptor).
In the present exemplary embodiment, the total number of matching point pairs for each first image may be determined from its matching relationships with the other first images. For example, if first image A has a matching relationship with m other first images, A forms m first image matching pairs with them, each pair having its own matching point pairs of corresponding feature points; if the i-th pair has $N_i$ matching point pairs, the total number of matching point pairs between first image A and the m first images is

$$N = \sum_{i=1}^{m} N_i$$
Here, a matching point pair refers to two feature points, one in each of two first images, that are sufficiently similar and are therefore regarded as projections of the same object point in the three-dimensional space of the first scene onto the two first images. Whether two feature points form a matching point pair can be judged by computing the similarity of their description information. Generally, the description information of a feature point can be represented as a vector, and the similarity between the description vectors of two feature points is computed, for example via Euclidean distance or cosine similarity; if the similarity is higher than a preset similarity threshold, the two feature points are determined to match and form a matching point pair. The similarity threshold is the criterion for judging whether two feature points are similar enough, and can be set empirically or according to the actual situation. Then, all first images may be sorted in descending order of their total number of matching point pairs to obtain a candidate image sequence; this sequence is traversed, and for each current first image, its matched first images are sorted in descending order of the number of matching feature points between them. Finally, this second sequence is traversed, and if the current first image and the current matched first image satisfy the geometric constraint relationship, they are determined to be an initial first image matching pair.
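For illustration, the following is a minimal sketch of feature extraction and descriptor matching between two first images, using OpenCV's ORB (one of the algorithms named above); the ratio-based acceptance test stands in for the similarity threshold described in the text, and all parameter values are illustrative assumptions rather than values prescribed by the disclosure.

```python
# Hedged sketch: extract ORB features from two first images and keep
# descriptor matches that pass a ratio test (a stand-in for the
# similarity threshold described above). Parameters are illustrative.
import cv2

def match_features(img_a, img_b, ratio=0.75):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    good = []
    for pair in knn:
        # Accept a match only if it is clearly better than the runner-up.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    # Return the matched 2D point coordinates, one pair per match.
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]
```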
After the initial first image matching pair is obtained, image pose estimation can be performed on it to determine the relative pose between its two first images; the matching point pairs of the first images are then triangulated to obtain three-dimensional point cloud data, and finally the three-dimensional point cloud is optimized and filtered through conditions such as the epipolar constraint and viewing angle constraint to find the next first image to be reconstructed.
It should be noted that the next first image to be reconstructed needs to satisfy the following requirements: it has not been reconstructed before; it has enough visible points in the existing three-dimensional point cloud data; and the number of previous attempts to reconstruct it does not exceed some preset threshold, for example 3. First images that meet these requirements are pushed into the sequence of first images to be reconstructed next. Finally, two-dimensional to three-dimensional matching pairs between the first image to be reconstructed and the existing three-dimensional point cloud data are found, and a PnP (Perspective-n-Point) algorithm (a method for solving camera motion from 3D-2D point pairs) is used to match n three-dimensional points in the current point cloud with n feature points in the first image to be reconstructed, thereby solving the initial pose of the first image to be reconstructed.
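As one possible realization of this step, the sketch below estimates the initial pose of a first image to be reconstructed from 3D-2D matches with OpenCV's RANSAC-based PnP solver; the intrinsic matrix K, the threshold values, and the absence of lens distortion are assumptions, not requirements of the disclosure.

```python
# Hedged sketch: solve the initial pose of the image to be reconstructed
# from 3D-2D matching pairs via PnP with RANSAC. K is the 3x3 camera
# intrinsic matrix; zero lens distortion is assumed here.
import cv2
import numpy as np

def estimate_initial_pose(points_3d, points_2d, K):
    pts3 = np.asarray(points_3d, dtype=np.float64).reshape(-1, 1, 3)
    pts2 = np.asarray(points_2d, dtype=np.float64).reshape(-1, 1, 2)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3, pts2, K, None, reprojectionError=4.0, iterationsCount=100)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # axis-angle -> rotation matrix
    return R, tvec, inliers
```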
Further, triangulation may be performed to determine coordinates of the three-dimensional feature points.
After the preliminary pose of the first image is determined, the newly added two-dimensional to two-dimensional matching point pairs can be triangulated to generate new three-dimensional points. Taking FIG. 3 (which shows three first images) as an example of the triangulation principle, suppose the homogeneous coordinate of a three-dimensional point P in the world coordinate system is $X = [x, y, z, 1]^T$; its projection points in two of the first images are $p_1$ and $p_2$, with homogeneous coordinates in the respective camera coordinate systems

$$\tilde{p}_1 = [u_1, v_1, 1]^T, \qquad \tilde{p}_2 = [u_2, v_2, 1]^T$$

The projection matrices of the cameras corresponding to the two first images are $P_1$ and $P_2$ respectively, where $P_1 = [P_{11}, P_{12}, P_{13}]^T$ and $P_2 = [P_{21}, P_{22}, P_{23}]^T$, with $P_{11}, P_{12}, P_{13}$ corresponding to rows 1-3 of $P_1$ and $P_{21}, P_{22}, P_{23}$ corresponding to rows 1-3 of $P_2$. In the ideal case,

$$\lambda_1 \tilde{p}_1 = P_1 X, \qquad \lambda_2 \tilde{p}_2 = P_2 X$$

where $\lambda_1, \lambda_2$ are the projection depths. For $\lambda_1 \tilde{p}_1 = P_1 X$, cross-multiplying both sides by the projection point itself eliminates the depth, giving $\tilde{p}_1 \times (P_1 X) = 0$, namely:

$$u_1 (P_{13}^T X) - P_{11}^T X = 0 \tag{1}$$

$$v_1 (P_{13}^T X) - P_{12}^T X = 0 \tag{2}$$

$$u_1 (P_{12}^T X) - v_1 (P_{11}^T X) = 0 \tag{3}$$

Formula (3) can be obtained by a linear transformation of formulas (1) and (2), so each camera view corresponding to a first image yields two independent constraint conditions. Combining them with the two constraints from the camera view of the other first image gives $AX = 0$, where

$$A = \begin{bmatrix} u_1 P_{13}^T - P_{11}^T \\ v_1 P_{13}^T - P_{12}^T \\ u_2 P_{23}^T - P_{21}^T \\ v_2 P_{23}^T - P_{22}^T \end{bmatrix} \tag{4}$$
a is the linear constraint matrix of the three-dimensional point P. For the above equation, when the number of view points is small and there is no external point, the matrix may be directly decomposed to obtain the coordinates of the three-dimensional point P, for example, SVD (Singular Value Decomposition) Decomposition may be adopted, and when there is an external point, another method, for example, RANSAC (Random sample consensus) method may be adopted for estimation.
Finally, BA (Bundle Adjustment) optimization is performed on the three-dimensional point cloud.
After the first image of each frame is reconstructed, local BA optimization may be performed. When the number of the reconstructed pictures is increased by a fixed number or the number of the reconstructed three-dimensional points is increased by a fixed number, global BA optimization can be performed on all the previous reconstructions to obtain complete three-dimensional point cloud data, namely first map data.
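A full BA implementation is beyond the scope here, but the sketch below shows the objective being minimized: reprojection error over all camera poses (axis-angle plus translation) and 3D points, using SciPy's least-squares solver with a robust loss. A shared intrinsic matrix K and dense Jacobians are simplifying assumptions; production SfM systems exploit the sparsity of the problem.

```python
# Minimal bundle-adjustment sketch: jointly refine camera poses and 3D
# points by minimizing reprojection error. Illustrative only.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reproject(pose, pts3d, K):
    rvec, tvec = pose[:3], pose[3:6]
    cam = Rotation.from_rotvec(rvec).apply(pts3d) + tvec
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def ba_residuals(params, n_cams, n_pts, K, cam_idx, pt_idx, observed_uv):
    poses = params[:n_cams * 6].reshape(n_cams, 6)
    pts3d = params[n_cams * 6:].reshape(n_pts, 3)
    proj = np.empty_like(observed_uv)
    for c in range(n_cams):
        mask = cam_idx == c
        proj[mask] = reproject(poses[c], pts3d[pt_idx[mask]], K)
    return (proj - observed_uv).ravel()

def bundle_adjust(poses, pts3d, K, cam_idx, pt_idx, observed_uv):
    x0 = np.hstack([poses.ravel(), pts3d.ravel()])
    res = least_squares(
        ba_residuals, x0, method="trf", loss="huber",
        args=(len(poses), len(pts3d), K, cam_idx, pt_idx, observed_uv))
    n = len(poses) * 6
    return res.x[:n].reshape(-1, 6), res.x[n:].reshape(-1, 3)
```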
Step S220, generating a first map according to the first pose data and a first matching relation between the first images; and generating a second map according to the second posture data and a second matching relation between the second images.
In this exemplary embodiment, after the first map data is obtained, the first map may be generated according to the first matching relationship between the first images and the first pose data corresponding to the first images. The first matching relationship refers to the matching relationship between first images, for example a matching relationship determined based on global description information or on the number of matching points, and the first map is a data structure in which different first pose data reflect the association relationships between the corresponding first images. The second map data may include a plurality of second images, each corresponding to second pose data; similarly, the second map may be generated according to the second matching relationship between the second images and the second pose data corresponding to the second images. The second map is a data structure in which different second pose data reflect the association relationships between the corresponding second images.
In an exemplary embodiment, the step S220 may include:
generating the first map by taking the first pose data as first nodes and the first matching relationships between the first images as edges between the first nodes; and generating the second map by taking the second pose data as second nodes and the second matching relationships between the second images as edges between the second nodes.
The graph refers to a mesh-shaped data structure, which is composed of a non-empty vertex set V and a set E describing the relationship between vertices, wherein data in the vertex set V is used for forming different entities of the graph, i.e., nodes of the graph, and data in the relationship set E is used for describing the relationship between different nodes, i.e., an arc or an edge from a node to a node in the graph. In the present exemplary embodiment, the terminal may generate the first map using the first pose data as the first node and the first matching relationship between the first images as the edge between the first nodes, and generate the second map using the second pose data as the second node and the second matching relationship between the second images as the edge between the second nodes, after acquiring the first map data and the second map data. The first node is a node constituting the first graph, and is different from the second node, and the "first" and "second" do not refer to numbers or sequences.
Next, referring to FIG. 4, the generation of the first map and the second map is illustrated. As shown in FIG. 4, the white circles are first nodes, each representing the first pose data, that is, the camera pose, corresponding to one first image; the black circles are second nodes, each representing the second pose data corresponding to one second image. First images having a matching relationship are connected by a dotted line; for example, the first nodes represented by the first pose data of two first images whose similarity exceeds a preset similarity threshold can be connected by a dotted line. Second images having a matching relationship are connected by a solid line. On this basis, the first map M and the second map N shown in FIG. 4 are generated.
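The following sketch makes the construction just described concrete: a small pose-graph structure whose nodes hold pose data and whose edges record matching relationships; the same structure can represent either the first map or the second map. The names and the dict-based layout are illustrative assumptions, not a structure mandated by the disclosure.

```python
# Sketch of building a pose graph: each node holds the pose data of one
# image, and an edge connects two nodes whose images have a matching
# relationship (e.g., similarity above a threshold).
from dataclasses import dataclass, field

@dataclass
class PoseGraph:
    nodes: dict = field(default_factory=dict)  # image_id -> pose data
    edges: dict = field(default_factory=dict)  # image_id -> set of ids

    def add_node(self, image_id, pose):
        self.nodes[image_id] = pose
        self.edges.setdefault(image_id, set())

    def add_edge(self, id_a, id_b):
        self.edges[id_a].add(id_b)
        self.edges[id_b].add(id_a)

def build_graph(poses, matching_pairs):
    g = PoseGraph()
    for image_id, pose in poses.items():
        g.add_node(image_id, pose)
    for id_a, id_b in matching_pairs:  # the matching relationship
        g.add_edge(id_a, id_b)
    return g
```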
To facilitate the subsequent optimization of the pose data, coordinate transformation parameters between the first map data and the second map data can be determined, and the first pose data in the first map data and the second pose data in the second map data can be converted into the same coordinate system based on these parameters; the first map is then generated from the converted first pose data and the first matching relationship, and the second map from the converted second pose data and the second matching relationship, so that the first map and the second map lie in the same coordinate system.
In addition, after the first map and the second map are generated, the first map and the second map may be placed in the same coordinate system. Specifically, in an exemplary embodiment, as shown in fig. 5, the map data processing method may further include:
step S510, a first three-dimensional point subset is obtained from first map data, a second three-dimensional point subset is obtained from second map data, and the first three-dimensional point subset and the second three-dimensional point subset have a matching relation;
step S520, determining coordinate transformation parameters between the first map data and the second map data according to the three-dimensional distribution characteristics of the first three-dimensional point subset and the second three-dimensional point subset; the coordinate transformation parameters are used for placing the first map and the second map in the same coordinate system.
The first map data includes a first three-dimensional point subset, denoted $p' = \{p_1', \dots, p_i', \dots, p_n'\}$, and the second map data includes a second three-dimensional point subset, denoted $q' = \{q_1', \dots, q_i', \dots, q_n'\}$. The first three-dimensional point subset has a matching relationship with the second three-dimensional point subset; for example, $p_i'$ and $q_i'$ form a matching point pair.
Since the three-dimensional points in the first map data and the second map data are in different coordinate systems, their position representations differ; for example, the coordinates of the two three-dimensional points in a matching point pair are different. However, the overall distribution characteristics of the three-dimensional points in the two subsets are the same or similar, such as the density distribution, shape distribution, or the distribution of distances between three-dimensional points. Therefore, the present exemplary embodiment may determine the coordinate transformation parameters of the first map data and the second map data according to the three-dimensional distribution characteristics of the first and second three-dimensional point subsets, so as to place the first map and the second map in the same coordinate system through the coordinate transformation parameters.
In an exemplary embodiment, as shown in fig. 6, the step S510 may include:
step S610, acquiring a two-dimensional-two-dimensional matching point pair between the first image and the second image;
step S620, determining a three-dimensional-three-dimensional matching point pair between the first three-dimensional point and the second three-dimensional point according to the two-dimensional-two-dimensional matching point pair;
step S630, a first three-dimensional point subset is formed by a first three-dimensional point in the three-dimensional-three-dimensional matching point pair, and a second three-dimensional point subset is formed by a second three-dimensional point in the three-dimensional-three-dimensional matching point pair.
In the present exemplary embodiment, since the first scene intersects with the second scene, an image having a matching relationship with the second image, for example, a more similar image, exists in the first image. The exemplary embodiment may determine the first image matching the second image by means of image retrieval. Further, a two-dimensional matching point pair between the first image and the second image is obtained.
In an exemplary embodiment, step S610 includes:
for at least one second image to be matched among the second images, searching the first images for at least one first image similar to the second image to be matched, and performing feature point matching between the at least one first image and the second image to be matched to obtain the two-dimensional to two-dimensional matching point pairs.
That is, one or more of the second images may be selected as second images to be matched, and a retrieval is performed over all first images to determine the first images similar to each second image to be matched. The retrieval may be based on global description information, which refers to information formed by extracting features from the whole image. For example, a CNN containing a VLAD (Vector of Locally Aggregated Descriptors) layer, such as NetVLAD, may be used to extract global description information, e.g., a 4096-dimensional global description vector, from the first images and the second image to be matched; the similarity between the global description vector of the second image to be matched and that of a first image is then computed, for example via Euclidean distance or cosine similarity, and if the similarity is higher than a preset similarity threshold, the second image to be matched and the first image are determined to match, i.e., to satisfy the similarity condition. Alternatively, the L2 norm may be used to measure the similarity between the global description information of the second image to be matched and that of the first image: the smaller the L2 norm of the difference of the two global descriptors, the more similar the corresponding images.
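As an illustration of this retrieval step, the sketch below ranks first images against a second image to be matched by the similarity of precomputed global description vectors (e.g., 4096-dimensional NetVLAD descriptors, assumed given); cosine similarity is used here, with the L2-norm alternative noted in a comment. The threshold and top-k values are assumptions.

```python
# Sketch of global-descriptor image retrieval: rank all first images by
# cosine similarity to the query (second image to be matched).
import numpy as np

def retrieve_similar(query_desc, first_descs, top_k=5, sim_threshold=0.7):
    q = query_desc / np.linalg.norm(query_desc)
    db = first_descs / np.linalg.norm(first_descs, axis=1, keepdims=True)
    sims = db @ q                       # cosine similarity per first image
    # Alternative: rank by np.linalg.norm(first_descs - query_desc, axis=1)
    # (smaller L2 norm of the difference = more similar).
    order = np.argsort(-sims)[:top_k]
    return [(int(i), float(sims[i])) for i in order
            if sims[i] >= sim_threshold]
```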
Further, feature point matching may be performed between the at least one first image and the second image to be matched; specifically, the description information of the feature points to be matched may be expressed as vectors and the similarity between the description vectors of two feature points computed, so that the two-dimensional to two-dimensional matching point pairs between the first image and the second image can be determined.
In the present exemplary embodiment, the first map data includes the first three-dimensional points and the first images, and the second map data includes the second three-dimensional points and the second images; therefore, each two-dimensional point in a two-dimensional image may have a corresponding three-dimensional point in the three-dimensional point set. Based on this, the two-dimensional to two-dimensional matching point pairs may be converted into three-dimensional to two-dimensional matching point pairs, or the three-dimensional to two-dimensional matching point pairs may be determined by computing the pose data of the second image; the three-dimensional to three-dimensional matching point pairs between the first and second three-dimensional points are then determined from the three-dimensional to two-dimensional matching point pairs. Finally, according to the three-dimensional to three-dimensional matching point pairs, the first three-dimensional point subset can be determined in the first three-dimensional point set, and the second three-dimensional point subset in the second three-dimensional point set.
Specifically, in an exemplary embodiment, as shown in fig. 7, the step S620 may include:
step S710, determining pose data of the second image in a first world coordinate system according to the two-dimensional and two-dimensional matching point pairs; the first world coordinate system is a world coordinate system of the first map data;
step S720, utilizing the pose data of the second image in the first world coordinate system to perform back projection on the plurality of first three-dimensional points to obtain three-dimensional-two-dimensional matching point pairs between the plurality of first three-dimensional points and the corresponding two-dimensional points in the second image;
step S730, mapping the two-dimensional point in the three-dimensional-two-dimensional matching point pair to a second three-dimensional point, so as to obtain a three-dimensional-three-dimensional matching point pair.
After the two-dimensional-two-dimensional matching point pairs are determined, the present exemplary embodiment may calculate pose data of the second image in a first world coordinate system according to the two-dimensional-two-dimensional matching points, for example, may calculate pose data of the second image by using a PnP algorithm, where the first world coordinate system is a coordinate system in which the set of the first three-dimensional points in the first map data is located. And then, by using the pose data of the second image in the first world coordinate system, the plurality of first three-dimensional points can be subjected to back projection to obtain three-dimensional-two-dimensional matching point pairs between the plurality of first three-dimensional points and the corresponding two-dimensional points in the second image. And finally, according to the corresponding relation between each two-dimensional point in the three-dimensional-two-dimensional matching point pair and the three-dimensional point in the set of the second three-dimensional point, mapping the two-dimensional point in the three-dimensional-two-dimensional matching point pair to the second three-dimensional point to obtain the three-dimensional-three-dimensional matching point pair.
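A possible realization of the back-projection in step S720 is sketched below: the first three-dimensional points are transformed into the second image's camera frame using its pose in the first world coordinate system and projected with an intrinsic matrix K; points behind the camera or outside the image are discarded. K, the image size, and the filtering details are assumptions for illustration.

```python
# Hedged sketch of step S720: back-project first 3D points into the
# second image using its pose (R, t) in the first world coordinate
# system, producing candidate 3D-2D matching point pairs.
import numpy as np

def backproject(points_3d, R, t, K, width, height):
    cam = (R @ points_3d.T).T + t          # world -> camera coordinates
    front = cam[:, 2] > 1e-6               # keep points in front of camera
    pts = points_3d[front]
    uv = (K @ cam[front].T).T
    uv = uv[:, :2] / uv[:, 2:3]            # perspective division
    inside = ((uv[:, 0] >= 0) & (uv[:, 0] < width) &
              (uv[:, 1] >= 0) & (uv[:, 1] < height))
    # Each surviving (3D point, 2D pixel) pair is a 3D-2D match candidate;
    # in practice it would be verified against detected feature points.
    return pts[inside], uv[inside]
```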
In order to obtain more accurate pose data of the second image, the calculated pose data may be clustered by using a clustering method, and in an exemplary embodiment, as shown in fig. 8, in step S710, determining the pose data of the second image in the first world coordinate system according to the two-dimensional to two-dimensional matching point pairs may include the following steps:
mapping two-dimensional points of a first image in the two-dimensional-two-dimensional matching point pair into first three-dimensional points, and determining a plurality of candidate poses of a second image in a first world coordinate system based on the matching relation between the two-dimensional points and the first three-dimensional points of the second image;
clustering the candidate poses, and determining pose data of the second image in the first world coordinate system according to the optimal class.
In practical applications, N pose data of the second image in the first world coordinate system can be computed; these N pose data are the candidate poses. The candidate poses are then clustered, and the pose data of the second image in the first world coordinate system is determined from the optimal class, where the optimal class is the class containing the most pose data in the clustering result. For example, the N computed pose data are clustered, the class containing the most candidate poses is taken as the optimal class, a weighted average is computed over all candidate poses in the optimal class, and the result is taken as the pose data of the second image in the first world coordinate system.
The present exemplary embodiment can acquire the most likely camera pose by clustering pose data. The Clustering method in the present exemplary embodiment is not specifically limited, and for example, a DBSCAN (Density-Based Clustering of Applications with Noise) Clustering method or the like may be used.
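The sketch below illustrates this clustering step with scikit-learn's DBSCAN, clustering candidate poses by their translation components and returning the weighted average of the largest cluster; eps, min_samples, and the reduction of poses to translations are illustrative assumptions.

```python
# Hedged sketch: cluster candidate poses (here, their translations) with
# DBSCAN, pick the largest cluster (the optimal class), and return its
# weighted average as the fused pose estimate.
import numpy as np
from sklearn.cluster import DBSCAN

def fuse_candidate_poses(translations, weights=None, eps=0.5, min_samples=2):
    translations = np.asarray(translations, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(translations)
    valid = labels[labels >= 0]
    if valid.size == 0:
        return None                      # no dense cluster found
    best = np.bincount(valid).argmax()   # class with the most candidates
    mask = labels == best
    w = (np.ones(mask.sum()) if weights is None
         else np.asarray(weights, dtype=float)[mask])
    w = w / w.sum()
    return (translations[mask] * w[:, None]).sum(axis=0)  # weighted average
```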
In an exemplary embodiment, step S620 may further include:
and acquiring a first three-dimensional point corresponding to the two-dimensional point in the first image associated with the optimal class to obtain a plurality of first three-dimensional points for back projection.
In order to obtain a more accurate three-dimensional-two-dimensional matching point pair, the exemplary embodiment may obtain a first three-dimensional point corresponding to a two-dimensional point in the first image associated with the optimal class, so as to obtain a three-dimensional-two-dimensional matching point pair between the plurality of first three-dimensional points and a corresponding two-dimensional point in the second image when the plurality of first three-dimensional points are back-projected.
In an exemplary embodiment, the coordinate transformation parameters may include rotation parameters and translation parameters; as shown in fig. 8, the step S520 of determining the coordinate transformation parameter between the first map data and the second map data according to the three-dimensional distribution characteristics of the first three-dimensional point subset and the second three-dimensional point subset may include the following steps:
step S810, determining a first central point of the first three-dimensional point subset and a second central point of the second three-dimensional point subset;
step S820, based on the first central point, the first three-dimensional point subset is decentralized, and based on the second central point, the second three-dimensional point subset is decentralized;
step S830, determining rotation parameters according to the covariance between the first three-dimensional point subset and the second three-dimensional point subset after the decentralization;
and step 840, determining a scale relation according to the first three-dimensional point subset and the second three-dimensional point subset after the decentralization, and determining a translation parameter according to the scale relation, the rotation parameter, the first central point and the second central point.
After the first three-dimensional point subset $p' = \{p_1', \dots, p_i', \dots, p_n'\}$ and the second three-dimensional point subset $q' = \{q_1', \dots, q_i', \dots, q_n'\}$ are obtained, a first center point of the first subset and a second center point of the second subset may be determined, where a center point refers to the centroid of a three-dimensional point subset; let $p_c$ denote the first center point and $q_c$ the second center point. Then, the first subset is decentered based on the first center point: the coordinates of the first center point are subtracted from the coordinates of each three-dimensional point in the first subset, translating all of its points toward the origin and yielding the new decentered first subset $p = \{p_1, \dots, p_i, \dots, p_n\}$. Similarly, the second subset may be decentered based on the second center point, yielding the new decentered second subset $q = \{q_1, \dots, q_i, \dots, q_n\}$.
Then, the rotation parameter may be determined according to the covariance between the de-centered first three-dimensional point subset and second three-dimensional point subset. Specifically, the relative scale of the de-centered first and second three-dimensional point subsets is determined as:
s = \sqrt{\sum_{i=1}^{n} \lVert p_i \rVert^2 \,/\, \sum_{i=1}^{n} \lVert q_i \rVert^2}
The following covariance matrix is then constructed:

H = \sum_{i=1}^{n} q_i p_i^T
By decomposing the above covariance matrix, for example by SVD, one obtains H = UΣV^T. Setting R = VU^T gives RH = VΣV^T; letting A = VΣ^{1/2}, then RH = AA^T is symmetric positive semi-definite. It can therefore be confirmed that R = VU^T is a rotation matrix, and this rotation matrix is the rotation parameter.
Finally, on the basis of determining the scale relationship, the rotation parameter, and the first and second center points, a translation parameter, such as a translation vector t, may be determined by the following formula:
t = sRq_c − p_c    (7)
where s denotes the scale relationship, e.g. the relative scale, R denotes the rotation matrix, and p_c and q_c denote the first center point and the second center point, respectively.
Further, the first atlas and the second atlas can be placed in the same coordinate system through the determined rotation parameter and translation parameter, and rigid registration of the first map data and the second map data is achieved.
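Steps S810-S840 may be condensed into the following illustrative sketch: de-center both subsets, build the covariance matrix H, recover the rotation R by SVD, estimate the relative scale s from the de-centered norms, and form the translation t per formula (7). The exact scale expression and the reflection guard are assumptions where the original equation images are not reproduced.

```python
# A compact sketch of steps S810-S840 using the notation above.
import numpy as np

def estimate_transform(p_subset, q_subset):
    """p_subset, q_subset: (N, 3) matched three-dimensional point subsets."""
    p_c, q_c = p_subset.mean(axis=0), q_subset.mean(axis=0)   # S810: center points
    p, q = p_subset - p_c, q_subset - q_c                     # S820: de-centering
    H = q.T @ p                                               # covariance matrix H = sum q_i p_i^T
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                                            # S830: rotation R = V U^T
    if np.linalg.det(R) < 0:                                  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    s = np.sqrt((p ** 2).sum() / (q ** 2).sum())              # S840: relative scale
    t = s * R @ q_c - p_c                                     # translation per formula (7)
    return s, R, t
```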
Step S230, the first map and the second map are placed in the same coordinate system, and a third matching relation between the first position and posture data and the second position and posture data is determined according to the position relation between the first map and the second map.
In the present exemplary embodiment, the first matching relationship refers to the matching relationship between the plurality of first images; when the first pose data serve as the first nodes of the first map, it may be embodied in the edge connections between the first nodes representing the first pose data. The second matching relationship is the matching relationship between the plurality of second images; when the second pose data serve as the second nodes of the second map, it may be embodied in the edge connections between the second nodes representing the second pose data. The third matching relationship differs from the first and second matching relationships: it is the matching relationship between the first images and the second images, that is, between the first nodes of the first map and the second nodes of the second map, and can be determined from the position relationship between the first map and the second map. Specifically, after the first map and the second map are placed in the same coordinate system, the third matching relationship may be determined by calculating the degree of matching between the first nodes and the second nodes.
In an exemplary embodiment, in step S230, the determining a third matching relationship between the first posture data and the second posture data according to the position relationship between the first map and the second map may include:
and traversing the second map to obtain a third matching relationship according to the matching relationship between each second node and the first nodes in the preset range around the second node.
The present exemplary embodiment may traverse each second node in the second graph to determine the third matching relationship according to the matching relationship between each second node and the first nodes within the preset range around it. Specifically, a second node may be initialized in the second graph, for example chosen randomly or according to a preset ordering rule; starting from that second node, the first node that best matches it is sought in the first graph, and the other second nodes are then traversed in turn.
In an exemplary embodiment, traversing the second graph may include:
acquiring an initial matching node pair between a first map and a second map; the initial matching node pair comprises a first node and a second node;
and traversing the second graph by taking the second node in the initial matching node pair as a root node.
Matching starts from the initial matching node pair, i.e., the first pair of a first node and a second node that have a matching relationship. As shown in fig. 9, the first node M1 in the first graph M and the second node N1 in the second graph N have a matching relationship, so the first node M1 and the second node N1 form the initial matching node pair. Further, the second node in the initial matching node pair may be used as a root node for traversing the second graph, that is, for searching the first graph for other first nodes having matching relationships. The root node is the starting node of the second graph; that is, when searching upward from this second node, it has no parent node with a matching relationship to a first node. As shown in figs. 10-12, traversal starts with the second node N1 as the root node, and the following matched pairs are obtained: first node M2 and second node N2; first node M3 and second node N3; first node M4 and second node N4; and first node M5 and second node N5.
In an exemplary embodiment, the obtaining an initial matching node pair between the first atlas and the second atlas may include:
determining a second reference image in the second image, and searching a first reference image with the highest matching degree with the second reference image in the first image;
and forming an initial matching node pair by the first node corresponding to the first reference image and the second node corresponding to the second reference image.
The second reference image may refer to the second image corresponding to the second node for which matching is computed first when traversal of the second graph begins. In this exemplary embodiment, a global search may be performed over the first images to determine the first reference image with the highest matching degree to the second reference image. Specifically, the similarity between the second reference image and each first image is calculated; when the similarity exceeds a preset similarity threshold, that first image is considered to match the second reference image and is determined to be the first reference image. Further, the first node corresponding to the first reference image and the second node corresponding to the second reference image may form the initial matching node pair, such as the pair formed by the first node M1 and the second node N1 shown in fig. 9.
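A minimal sketch of selecting the initial matching node pair by global search might look as follows; the cosine similarity over global descriptors and the 0.8 threshold are illustrative assumptions.

```python
# A hedged sketch of picking the initial matching node pair: score every first
# image against a second reference image and accept the best match only above
# a preset threshold.
import numpy as np

def find_initial_pair(second_desc, first_descs, threshold=0.8):
    """second_desc: (D,) global descriptor of the second reference image;
    first_descs: (M, D) descriptors of all first images."""
    sims = first_descs @ second_desc / (
        np.linalg.norm(first_descs, axis=1) * np.linalg.norm(second_desc) + 1e-12)
    best = int(np.argmax(sims))
    if sims[best] < threshold:
        return None                      # no reliable initial match; reselect
    return best                          # index of the first reference image
```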
In the present exemplary embodiment, the search range is indicated by the oval dashed areas in figs. 9-12. If matched nodes can be found within this area, the initial node matching is also accurate at the topological-structure level. If no mutually matched node pair can be found, the confidence of the initial match is low: noise may have occurred, or similar-looking scenes may have produced an incorrect match; in that case, the initial first node or initial second node can be reselected.
When traversing the second graph with the second node of the initial matching node pair as the root node, a child node of the root node may fail to find a matching first node within its search range. As shown by the second node N6 in the left rectangular dashed area of fig. 13, no matching first node is found in that area; the second node N6 is still traversed and may be marked as visited (v), but no match is generated. When a first node lies within the search area but no matching relationship is produced, as shown by the second node N7 and the first node M7 in the right rectangular dashed area of fig. 14, this likewise yields no matching information and is marked as visited (v). When all second nodes in the second graph have been traversed, a new edge is generated between the images that produced a matching relationship, i.e., between each first node and second node having a matching relationship, e.g., between the first node M1 and the second node N1.
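The traversal described above may be sketched as a breadth-first walk over the second graph; here match_in_range is a hypothetical helper standing in for the search for a best-matching first node within the preset range, and unmatched nodes are simply visited without producing an edge.

```python
# A minimal breadth-first traversal of the second graph from the root node of
# the initial matching node pair.
from collections import deque

def traverse_second_graph(adjacency, root, match_in_range):
    """adjacency: dict second-node -> list of neighbouring second nodes;
    match_in_range(n2): best-matching first node around n2, or None."""
    visited, matches = {root}, {}
    queue = deque([root])
    while queue:
        n2 = queue.popleft()
        n1 = match_in_range(n2)          # search the preset surrounding range
        if n1 is not None:
            matches[n2] = n1             # a new edge of the third matching relationship
        for nb in adjacency[n2]:
            if nb not in visited:        # nodes without a match stay visited-only
                visited.add(nb)
                queue.append(nb)
    return matches
```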
It should be noted that, in the present exemplary embodiment, in addition to traversing the second graph, the first graph may also be traversed, and the third matching relationship may be determined according to the matching relationship between each first node and the second node within the preset range around the first node, where the method flow is similar to the traversal process, and is not described herein again.
Step S240, based on the third matching relationship, at least one of the first pose data and the second pose data is optimized, and the first map data and the second map data are fused according to the optimized first pose data and the optimized second pose data, so as to obtain third map data.
Considering that the first map data may be obtained by three-dimensional reconstruction from the first images, and the second map data likewise by three-dimensional reconstruction from the second images, both involve converting two-dimensional information into three-dimensional information; if the first map and the second map are then placed directly in the same coordinate system, this two-dimensional-to-three-dimensional conversion can introduce certain errors. Therefore, the present exemplary embodiment may optimize at least one of the first pose data and the second pose data based on the third matching relationship, and fuse the first map data and the second map data based on the optimized first and second pose data to obtain the third map data. The third map data refer to the map data obtained by fusing the first map data and the second map data, and may be three-dimensional point cloud data.
In an exemplary embodiment, as shown in fig. 15, the optimizing at least one of the first and second pose data based on the third matching relationship in step S240 may include the following steps:
step S1510, determining a first pose transformation parameter corresponding to the third matching relationship according to the first pose data corresponding to the first node and the second pose data corresponding to the second node in the third matching relationship;
step S1520, determining a second pose transformation parameter corresponding to the third matching relationship according to an edge between the first node and the second node in the third matching relationship;
in step S1530, at least one of the first and second pose data is optimized based on the difference between the first and second pose transformation parameters.
In the present exemplary embodiment, the third matching relationship may be obtained by traversing the second graph, for example, after the traversing process of the second graph in fig. 9-12, a new edge is generated between the first node and the second node generating the matching relationship, and the edge can represent the third matching relationship.
As shown in fig. 16, the nodes contained in the regions 1610 and 1620 are the first nodes and second nodes that have matching relationships after traversal: the region 1610 contains first nodes and the region 1620 contains second nodes. The connections among the first nodes having matching relationships embody the first matching relationship, the connections among the second nodes having matching relationships embody the second matching relationship, and the new connections generated between matched first nodes and second nodes embody the third matching relationship.
Then, the first pose transformation parameter corresponding to the third matching relationship may be determined from the first pose data corresponding to the first node and the second pose data corresponding to the second node in the third matching relationship. Specifically, with X_i denoting the first pose data corresponding to the first node and X_j denoting the second pose data corresponding to the second node, X_i^{-1} X_j characterizes the first pose transformation parameter, which may be the accumulated pose transformation over all such relationships. The second pose transformation parameter corresponding to the third matching relationship is determined from the edge between the first node and the second node in the third matching relationship, and is characterized by T_ij. Finally, at least one of the first and second pose data may be optimized based on the difference between the first and second pose transformation parameters, for example by driving the difference toward 0, so as to optimize the first or second pose data.
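With poses represented as 4x4 homogeneous matrices (an assumption for illustration), the two pose transformation parameters and their discrepancy may be sketched as:

```python
# Small helpers for the two pose transformation parameters described above.
import numpy as np

def first_pose_transform(X_i, X_j):
    """First pose transformation parameter X_i^{-1} X_j (both 4x4)."""
    return np.linalg.inv(X_i) @ X_j

def pose_discrepancy(X_i, X_j, T_ij):
    """Discrepancy between the two parameters; the identity means no error."""
    return np.linalg.inv(T_ij) @ first_pose_transform(X_i, X_j)
```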
In an exemplary embodiment, as shown in fig. 17, the map data processing method may further include:
step S1710, determining a confidence of the third matching relationship according to the number of matching interior points between the first image corresponding to the first node in the third matching relationship and the second image corresponding to the second node;
step S1530 may include:
step S1720, optimizing at least one of the first and second pose data based on a difference between the first and second pose transformation parameters and a confidence of the third matching relationship.
The exemplary embodiment can further optimize at least one of the first and second pose data in combination with the confidence of the third matching relationship. Specifically, the optimization equation can be constructed by the following formula:
\min_{X} \sum_{(i,j)} e_{ij}^T \, \Omega_{ij} \, e_{ij}, \quad e_{ij} = \log\!\left(T_{ij}^{-1} X_i^{-1} X_j\right)    (8)
where X_i denotes the first pose data corresponding to a first node, X_j denotes the second pose data corresponding to a second node, T_ij denotes the second pose transformation parameter computed from the third matching relationship, i.e., the newly generated edge between the first node and the second node in fig. 16, and Ω_ij denotes the confidence of the second pose transformation parameter T_ij. In this exemplary embodiment, the confidence may be determined according to the number of matching inner points between the first image corresponding to the first node and the second image corresponding to the second node in the third matching relationship: the more inner points, the higher the confidence; the fewer inner points, the lower the confidence. Formula (8) enables joint optimization of the first pose data, the second pose data, or both, eliminating the errors caused by directly placing the first map and the second map in the same coordinate system.
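A simplified sketch of evaluating the objective of formula (8) follows, using the matrix logarithm as a stand-in for the residual and a scalar weight omega_ij for the confidence Ω_ij; a real system would minimize this with a pose-graph solver such as g2o or GTSAM, so this is illustrative only.

```python
# Evaluate the confidence-weighted pose-graph cost over the new edges of the
# third matching relationship.
import numpy as np
from scipy.linalg import logm

def pose_graph_cost(poses, edges):
    """poses: dict node -> 4x4 pose; edges: list of (i, j, T_ij, omega_ij),
    where omega_ij is the inlier-count-based confidence weight."""
    cost = 0.0
    for i, j, T_ij, omega_ij in edges:
        E = np.linalg.inv(T_ij) @ np.linalg.inv(poses[i]) @ poses[j]
        r = logm(E).real                      # residual in the tangent space
        cost += omega_ij * np.sum(r ** 2)     # confidence-weighted squared error
    return cost
```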
In summary, the present exemplary embodiment acquires first map data of a first scene and second map data of a second scene associated with the first scene, where the first map data include a set of first three-dimensional points, a plurality of first images, and first pose data corresponding to the first images, and the second map data include a set of second three-dimensional points, a plurality of second images, and second pose data corresponding to the second images; generates a first map according to the first pose data and a first matching relationship between the first images, and a second map according to the second pose data and a second matching relationship between the second images; places the first map and the second map in the same coordinate system and determines a third matching relationship between the first pose data and the second pose data according to the position relationship between the first map and the second map; and optimizes at least one of the first pose data and the second pose data based on the third matching relationship, fusing the first map data and the second map data according to the optimized pose data to obtain third map data. On one hand, the exemplary embodiment provides a new map data processing method: a first map is generated from the first pose data and the first matching relationship in the first map data, a second map is generated from the second pose data and the second matching relationship in the second map data, a third matching relationship between the first map and the second map is determined from the viewpoint of the topological structure, and the pose data are then optimized according to the third matching relationship. This ensures the validity and reliability of the pose data before the map data are fused, eliminates interference and errors to a certain extent, and thus ensures the accuracy of the third map data generated by fusing the first map data and the second map data. On the other hand, whenever the second scene is associated with the first scene, the exemplary embodiment can fuse the first map data of the first scene with the second map data of the second scene. Compared with the prior art, in which map fusion requires images acquired at the same positions and the fusion area cannot be too large, the exemplary embodiment imposes no strict requirements on the acquisition positions of the first and second images, can be applied to the fusion of first and second map data in a variety of scenes, and thus has a wider application range.
In an exemplary embodiment, the map data processing method may further include:
acquiring first position data and second position data which have similar relations, and forming an image pair to be compared by a first image corresponding to the first position data and a second image corresponding to the second position data;
and in response to the similarity of the two images in the image pair to be compared being smaller than a preset threshold, deleting the image with the earlier shooting time in the image pair, together with the first or second three-dimensional points corresponding to it, from the third map data.
Through the above steps S210 to S240, the first map data and the second map data can be fused to obtain the third map data. However, the first nodes and part of the three-dimensional points of the first map data are still retained in the third map data; if they are not removed, the third map data will grow larger and larger as the number of fusions increases, and the excessive interference information contained therein will affect the efficiency and accuracy of the positioning algorithm. Therefore, the present exemplary embodiment can generate more accurate third map data by deleting the interfering first or second three-dimensional points.
Specifically, the confidence of the first pose data or the second pose data may be determined according to the acquisition time of the corresponding first or second image: pose data with a newer acquisition time in the map data are assigned a higher confidence, and pose data with an older acquisition time a lower confidence, where the first pose data serve as the first nodes and the second pose data as the second nodes.
First pose data and second pose data having a similar relation are acquired, and the first image corresponding to the first pose data and the second image corresponding to the second pose data form an image pair to be compared. The similarity of the two images in the pair is then compared; when the similarity is smaller than a preset threshold, the image with the earlier shooting time in the pair, together with the first or second three-dimensional points corresponding to it, is deleted from the third map data.
Specifically, the similarity comparison can be performed directly by extracting the global description information of the first image corresponding to the first pose data and of the second image corresponding to the second pose data. If the similarity of the two images in the pair to be compared meets a certain threshold, the scene change between the two images is small and no deletion is needed. If the similarity of the two images is weak, the three-dimensional points of the image with the more recent shooting time can be retained according to the confidence of the pose data, and the pose data with the lowest confidence, together with their corresponding three-dimensional points, are deleted. After deletion is completed, one round of BA optimization can be performed to obtain the updated point cloud map.
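The update step may be sketched as follows; the helper structure, the cosine similarity, and the 0.6 threshold are illustrative assumptions, and deletion keeps the more recently captured image in line with the confidence rule above.

```python
# For pose pairs in a similar relation, compare the global descriptors of
# their images; when similarity falls below the threshold, drop the older
# image's three-dimensional points from the fused map.
import numpy as np

def prune_map(image_pairs, threshold=0.6):
    """image_pairs: list of (img_a, img_b) dicts with keys
    'desc' (global descriptor), 'time' (acquisition time), 'points' (3D ids)."""
    to_delete = set()
    for a, b in image_pairs:
        sim = float(a['desc'] @ b['desc'] /
                    (np.linalg.norm(a['desc']) * np.linalg.norm(b['desc']) + 1e-12))
        if sim < threshold:                       # the scene changed between captures
            older = a if a['time'] < b['time'] else b
            to_delete.update(older['points'])     # remove its 3D points as well
    return to_delete                              # feed into BA after deletion
```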
As shown in fig. 18, fig. 18(a) shows map data of the outdoor area and a small part of the indoor area of a store, which may be regarded as the first map data; the indoor map data collected again as needed may be regarded as the second map data, as shown in fig. 18(b). Using the map data processing method of this exemplary embodiment, the two sets of map data are successfully merged and updated into new map data, as shown in fig. 18(c). The updated map data retain both indoor and outdoor information, eliminate the areas where indoor posters changed, and are updated to the latest map.
Fig. 19 shows a flowchart of a map data processing method in the present exemplary embodiment, which specifically includes four parts, respectively: a map data reconstruction module 1910, a point cloud registration module 1920, a pose joint optimization module 1930, and a map update module 1940.
The map data reconstruction module 1910 is configured to perform three-dimensional reconstruction according to the acquired image to generate three-dimensional point cloud data, and may specifically include a feature point extraction and matching unit 1911, an image pose estimation unit 1912, a triangulation unit 1913, and a BA optimization unit 1914.
The point cloud registration module 1920 is configured to determine a coordinate transformation parameter, so as to place the first map data and the second map data in the same coordinate system, or place the first map and the second map in the same coordinate system, and specifically may include a visual positioning unit 1921, a three-dimensional-three-dimensional point pair matching unit 1922, and a rigid transformation unit 1923, where the rigid transformation unit 1923 is a unit that performs coordinate system transformation through the coordinate transformation parameter.
The pose joint optimization module 1930 is configured to optimize at least one of the first pose data and the second pose data, and may specifically include a topology searching unit 1931 configured to traverse the second graph to obtain a third matching relationship according to a matching relationship between each second node and the first nodes within a preset range around the second node, a pose transformation parameter determining unit 1932, and a joint optimization unit 1933.
The map updating module 1940 is configured to update the third map data generated by fusing to eliminate the interfering three-dimensional points, and specifically may include: a temporal confidence determination unit 1941, an image similarity calculation unit 1942, and a three-dimensional point cloud culling unit 1943.
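The cooperation of the four modules in fig. 19 might be composed as in the following skeleton; all class and method names are hypothetical placeholders for the units described above.

```python
# An illustrative composition of the four parts of the flow in fig. 19.
class MapFusionPipeline:
    def __init__(self, reconstruct, register, optimize, update):
        self.reconstruct, self.register = reconstruct, register
        self.optimize, self.update = optimize, update

    def run(self, first_images, second_images):
        map1 = self.reconstruct(first_images)       # module 1910: 3D reconstruction
        map2 = self.reconstruct(second_images)
        params = self.register(map1, map2)          # module 1920: point cloud registration
        fused = self.optimize(map1, map2, params)   # module 1930: pose joint optimization
        return self.update(fused)                   # module 1940: map update / culling
```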
Exemplary embodiments of the present disclosure also provide a map data processing apparatus. As shown in fig. 20, the map data processing apparatus 2000 may include: a map acquiring module 2010, configured to acquire first map data of a first scene and second map data of a second scene associated with the first scene; the first map data comprise a set of first three-dimensional points, a plurality of first images and first posture data corresponding to the first images, and the second map data comprise a set of second three-dimensional points, a plurality of second images and second posture data corresponding to the second images; the atlas generation module 2020 is used for generating a first atlas according to the first pose data and the first matching relationship between the first images; generating a second map according to the second posture data and a second matching relation between the second images; the relationship determining module 2030, configured to place the first map and the second map in the same coordinate system, and determine a third matching relationship between the first pose data and the second pose data according to a position relationship between the first map and the second map; the pose optimization module 2040 is configured to optimize at least one of the first pose data and the second pose data based on the third matching relationship, and fuse the first map data and the second map data according to the optimized first pose data and the optimized second pose data to obtain third map data.
In an exemplary embodiment, the atlas generation module includes: the map generation unit is used for generating a first map by taking the first pose data as first nodes and taking a first matching relationship between the first images as edges between the first nodes; and generating a second map by taking the second posture data as second nodes and taking a second matching relationship between the second images as edges between the second nodes.
In an exemplary embodiment, the relationship determination module includes: and the traversing unit is used for traversing the second map so as to obtain a third matching relationship according to the matching relationship between each second node and the first nodes in the preset range around the second node.
In an exemplary embodiment, the traversal unit includes: the node pair obtaining subunit is used for obtaining an initial matching node pair between the first map and the second map; the initial matching node pair comprises a first node and a second node; and the graph traversal subunit is used for traversing the second graph by taking the second node in the initial matching node pair as a root node.
In an exemplary embodiment, the node pair obtaining subunit includes: the reference image searching subunit is used for determining a second reference image in the second image and searching a first reference image which has the highest matching degree with the second reference image in the first image; and the node pair forming subunit is used for forming an initial matching node pair by the first node corresponding to the first reference image and the second node corresponding to the second reference image.
In an exemplary embodiment, the pose optimization module includes: the first parameter determining unit is used for determining a first posture transformation parameter corresponding to the third matching relationship according to the first posture data corresponding to the first node and the second posture data corresponding to the second node in the third matching relationship; the second parameter determining unit is used for determining a second posture transformation parameter corresponding to the third matching relationship according to the edge between the first node and the second node in the third matching relationship; and the pose optimization unit is used for optimizing at least one of the first pose data and the second pose data based on the difference between the first pose transformation parameter and the second pose transformation parameter.
In an exemplary embodiment, the map data processing apparatus may further include: the confidence coefficient determining module is used for determining the confidence coefficient of the third matching relationship according to the number of matched inner points between the first image corresponding to the first node in the third matching relationship and the second image corresponding to the second node; and the pose optimization module is used for optimizing at least one of the first pose data and the second pose data based on the difference between the first pose transformation parameter and the second pose transformation parameter and the confidence coefficient of the third matching relation.
In an exemplary embodiment, the map data processing apparatus may further include: the subset acquisition module is used for acquiring a first three-dimensional point subset from the first map data and acquiring a second three-dimensional point subset from the second map data, and the first three-dimensional point subset and the second three-dimensional point subset have a matching relation; the map coordinate transformation module is used for determining coordinate transformation parameters between the first map data and the second map data according to the three-dimensional distribution characteristics of the first three-dimensional point subset and the second three-dimensional point subset; the coordinate transformation parameters are used for placing the first map and the second map in the same coordinate system.
In an exemplary embodiment, the subset acquisition module includes: a two-dimensional-two-dimensional matching point pair obtaining unit for obtaining two-dimensional-two-dimensional matching point pairs between the first image and the second image; a three-dimensional-three-dimensional matching point pair determining unit for determining three-dimensional-three-dimensional matching point pairs between the first three-dimensional points and the second three-dimensional points according to the two-dimensional-two-dimensional matching point pairs; and a subset forming unit for forming the first three-dimensional point subset from the first three-dimensional points of the three-dimensional-three-dimensional matching point pairs and the second three-dimensional point subset from the second three-dimensional points of the three-dimensional-three-dimensional matching point pairs.
In an exemplary embodiment, the three-dimensional matching point pair determining unit includes: the pose data determining subunit is used for determining pose data of the second image in the first world coordinate system according to the two-dimensional-two-dimensional matching point pairs; the first world coordinate system is a world coordinate system of the first map data; the back projection subunit is used for back projecting the plurality of first three-dimensional points by using the pose data of the second image in the first world coordinate system to obtain three-dimensional-two-dimensional matching point pairs between the plurality of first three-dimensional points and the corresponding two-dimensional points in the second image; and the mapping subunit is used for mapping the two-dimensional point in the three-dimensional-two-dimensional matching point pair into a second three-dimensional point to obtain a three-dimensional-three-dimensional matching point pair.
In an exemplary embodiment, the pose data determination subunit includes: the candidate pose determining subunit is used for mapping the two-dimensional point of the first image in the two-dimensional-two-dimensional matching point pair into a first three-dimensional point and determining a plurality of candidate poses of the second image in a first world coordinate system based on the matching relation between the two-dimensional point of the second image and the first three-dimensional point; and the candidate pose clustering subunit is used for clustering a plurality of candidate poses and determining pose data of the second image in the first world coordinate system according to the optimal class.
In an exemplary embodiment, the three-dimensional matching point pair determining unit includes: and the first three-dimensional point obtaining subunit is used for obtaining a first three-dimensional point corresponding to the two-dimensional point in the first image associated with the optimal class so as to obtain a plurality of first three-dimensional points for back projection.
In an exemplary embodiment, the two-dimensional to two-dimensional matching point pair obtaining unit includes: and the image searching subunit is used for searching at least one first image similar to at least one second image to be matched in the first image aiming at the at least one second image to be matched in the second image, and performing characteristic point matching on the at least one first image and the second image to be matched to obtain a two-dimensional and two-dimensional matching point pair.
In an exemplary embodiment, the coordinate transformation parameters include rotation parameters and translation parameters; an atlas coordinate transformation module comprising: a central point determining unit, configured to determine a first central point of the first three-dimensional point subset and a second central point of the second three-dimensional point subset; the decentralization unit is used for decentralizing the first three-dimensional point subset based on the first central point and decentralizing the second three-dimensional point subset based on the second central point; the rotation parameter determining unit is used for determining a rotation parameter according to the covariance between the first three-dimensional point subset and the second three-dimensional point subset after the decentralization; and the translation parameter determining unit is used for determining a scale relation according to the first three-dimensional point subset and the second three-dimensional point subset after the decentralization, and determining a translation parameter according to the scale relation, the rotation parameter, the first central point and the second central point.
In an exemplary embodiment, the map data processing apparatus may further include: an image pair to be compared determining module, used for acquiring first pose data and second pose data that have a similar relation and forming an image pair to be compared from the first image corresponding to the first pose data and the second image corresponding to the second pose data; and a three-dimensional point deleting module, used for deleting, in response to the similarity of the two images in the image pair to be compared being smaller than a preset threshold, the image with the earlier shooting time in the image pair and the first or second three-dimensional points corresponding to it from the third map data.
The specific details of each part in the above device have been described in detail in the method part embodiments, and thus are not described again.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which may be implemented in the form of a program product, including program code, for causing a terminal device to perform the steps according to various exemplary embodiments of the present disclosure described in the above-mentioned "exemplary method" section of this specification, when the program product is run on the terminal device, for example, any one or more of the steps in fig. 2, fig. 5, fig. 6, fig. 7, fig. 8, fig. 15, or fig. 17 may be performed. The program product may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory, a Read Only Memory (ROM), an erasable programmable read only memory (EPROM or flash memory), an optical fiber, a portable compact disc read only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit", "module" or "system". Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the following claims.

Claims (18)

1. A map data processing method, comprising:
acquiring first map data of a first scene and second map data of a second scene associated with the first scene; the first map data comprises a set of first three-dimensional points, a plurality of first images and first posture data corresponding to the first images, and the second map data comprises a set of second three-dimensional points, a plurality of second images and second posture data corresponding to the second images;
generating a first map according to the first position data and a first matching relation between the first images; generating a second map according to the second position and posture data and a second matching relation between the second images;
placing the first map and the second map in the same coordinate system, and determining a third matching relationship between the first position and posture data and the second position and posture data according to the position relationship between the first map and the second map;
optimizing at least one of the first position and posture data and the second position and posture data based on the third matching relation, and fusing the first map data and the second map data according to the optimized first position and posture data and the optimized second position and posture data to obtain third map data.
2. The method according to claim 1, wherein the generating a first atlas according to the first pose data and a first matching relationship between the first images, and generating a second atlas according to the second pose data and a second matching relationship between the second images, comprises:
generating a first map by taking the first pose data as first nodes and taking a first matching relation among the first images as edges among the first nodes; and generating a second map by taking the second posture data as second nodes and taking a second matching relationship between the second images as edges between the second nodes.
3. The method of claim 2, wherein determining a third matching relationship between the first and second pose data according to the positional relationship of the first and second maps comprises:
and traversing the second map to obtain the third matching relationship according to the matching relationship between each second node and the first nodes in the preset range around the second node.
4. The method of claim 3, wherein traversing the second graph comprises:
acquiring an initial matching node pair between the first atlas and the second atlas; the initial matching node pair comprises a first node and a second node;
and traversing the second graph by taking a second node in the initial matching node pair as a root node.
5. The method of claim 4, wherein obtaining initial matching node pairs between the first graph and the second graph comprises:
determining a second reference image in the second image, and searching a first reference image with the highest matching degree with the second reference image in the first image;
and forming the initial matching node pair by the first node corresponding to the first reference image and the second node corresponding to the second reference image.
6. The method of claim 2, wherein optimizing at least one of the first and second pose data based on the third matching relationship comprises:
determining a first attitude transformation parameter corresponding to the third matching relationship according to the first attitude data corresponding to the first node and the second attitude data corresponding to the second node in the third matching relationship;
determining a second pose transformation parameter corresponding to the third matching relationship according to an edge between the first node and the second node in the third matching relationship;
optimizing at least one of the first and second pose data based on a difference in the first and second pose transformation parameters.
7. The method of claim 6, further comprising:
determining the confidence of the third matching relationship according to the number of matching inner points between the first image corresponding to the first node and the second image corresponding to the second node in the third matching relationship;
the optimizing at least one of the first and second pose data based on the difference of the first and second pose transformation parameters comprises:
optimizing at least one of the first and second pose data based on a difference of the first and second pose transformation parameters and a confidence of the third matching relationship.
8. The method of claim 1, further comprising:
acquiring a first three-dimensional point subset from the first map data, and acquiring a second three-dimensional point subset from the second map data, wherein the first three-dimensional point subset and the second three-dimensional point subset have a matching relationship;
determining coordinate transformation parameters between the first map data and the second map data according to the three-dimensional distribution characteristics of the first three-dimensional point subset and the second three-dimensional point subset; the coordinate transformation parameters are used for placing the first map and the second map in the same coordinate system.
9. The method of claim 8, wherein obtaining a first subset of three-dimensional points from the first map data and a second subset of three-dimensional points from the second map data comprises:
acquiring a two-dimensional-two-dimensional matching point pair between the first image and the second image;
determining a three-dimensional-three-dimensional matching point pair between the first three-dimensional point and the second three-dimensional point according to the two-dimensional-two-dimensional matching point pair;
forming the first subset of three-dimensional points with a first three-dimensional point of the pair of three-dimensional-three-dimensional matching points and the second subset of three-dimensional points with a second three-dimensional point of the pair of three-dimensional-three-dimensional matching points.
10. The method of claim 9, wherein determining a three-dimensional-three-dimensional matching point pair between a first three-dimensional point and a second three-dimensional point according to the two-dimensional-two-dimensional matching point pair comprises:
determining pose data of the second image in a first world coordinate system according to the two-dimensional-two-dimensional matching point pairs; the first world coordinate system is a world coordinate system of the first map data;
performing back projection on the plurality of first three-dimensional points by using pose data of the second image in a first world coordinate system to obtain three-dimensional and two-dimensional matching point pairs between the plurality of first three-dimensional points and corresponding two-dimensional points in the second image;
and mapping the two-dimensional point in the three-dimensional-two-dimensional matching point pair to be a second three-dimensional point to obtain the three-dimensional-three-dimensional matching point pair.
11. The method according to claim 10, wherein the determining pose data of the second image in a first world coordinate system according to the two-dimensional-two-dimensional matching point pairs comprises:
mapping two-dimensional points of a first image in the two-dimensional-two-dimensional matching point pair into first three-dimensional points, and determining a plurality of candidate poses of the second image in the first world coordinate system based on the matching relation between the two-dimensional points of the second image and the first three-dimensional points;
clustering the candidate poses, and determining pose data of the second image in the first world coordinate system according to the optimal class.
12. The method of claim 11, wherein determining a three-dimensional-three-dimensional matching point pair between a first three-dimensional point and a second three-dimensional point from the two-dimensional-two-dimensional matching point pair further comprises:
and acquiring a first three-dimensional point corresponding to a two-dimensional point in the first image associated with the optimal class to obtain the plurality of first three-dimensional points for back projection.
13. The method of claim 9, wherein the obtaining a two-dimensional matching point pair between the first image and the second image comprises:
and for at least one second image to be matched among the second images, searching the first images for at least one first image similar to the second image to be matched, and performing feature point matching on the at least one first image and the second image to be matched to obtain the two-dimensional-two-dimensional matching point pair.
14. The method of claim 8, wherein the coordinate transformation parameters include rotation parameters and translation parameters; the determining the coordinate transformation parameter between the first map data and the second map data according to the three-dimensional distribution characteristics of the first three-dimensional point subset and the second three-dimensional point subset includes:
determining a first center point of the first subset of three-dimensional points and a second center point of the second subset of three-dimensional points;
performing decentralization on the first three-dimensional point subset based on the first central point, and performing decentralization on the second three-dimensional point subset based on the second central point;
determining the rotation parameters according to the covariance between the first three-dimensional point subset and the second three-dimensional point subset after the decentralization;
determining a scale relationship according to the first three-dimensional point subset and the second three-dimensional point subset after the decentralization, and determining the translation parameter according to the scale relationship, the rotation parameter, the first central point and the second central point.
15. The method of claim 1, further comprising:
acquiring first position data and second position data which have similar relations, and forming an image pair to be compared by a first image corresponding to the first position data and a second image corresponding to the second position data;
and in response to the similarity of the two images in the image pair to be compared being smaller than a preset threshold, deleting the image with the earlier shooting time in the image pair to be compared and the first three-dimensional point or the second three-dimensional point corresponding to the image from the third map data.
16. A map data processing apparatus, characterized by comprising:
the map acquisition module is used for acquiring first map data of a first scene and second map data of a second scene related to the first scene; the first map data comprises a set of first three-dimensional points, a plurality of first images and first posture data corresponding to the first images, and the second map data comprises a set of second three-dimensional points, a plurality of second images and second posture data corresponding to the second images;
the map generation module is used for generating a first map according to the first posture data and a first matching relation between the first images; generating a second map according to the second position and posture data and a second matching relation between the second images;
the relationship determination module is used for placing the first map and the second map in the same coordinate system and determining a third matching relationship between the first position and posture data and the second position and posture data according to the position relationship between the first map and the second map;
and the pose optimization module is used for optimizing at least one of the first pose data and the second pose data based on the third matching relationship, and fusing the first map data and the second map data according to the optimized first pose data and the optimized second pose data to obtain third map data.
17. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 15.
18. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1 to 15 via execution of the executable instructions.
CN202111520245.3A 2021-12-13 2021-12-13 Map data processing method and device, storage medium and electronic equipment Pending CN114241039A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111520245.3A CN114241039A (en) 2021-12-13 2021-12-13 Map data processing method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111520245.3A CN114241039A (en) 2021-12-13 2021-12-13 Map data processing method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN114241039A true CN114241039A (en) 2022-03-25

Family

ID=80755319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111520245.3A Pending CN114241039A (en) 2021-12-13 2021-12-13 Map data processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114241039A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439536A (en) * 2022-08-18 2022-12-06 北京百度网讯科技有限公司 Visual map updating method and device and electronic equipment
CN115439536B (en) * 2022-08-18 2023-09-26 北京百度网讯科技有限公司 Visual map updating method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN112269851B (en) Map data updating method and device, storage medium and electronic equipment
CN108335353B (en) Three-dimensional reconstruction method, device and system of dynamic scene, server and medium
CN112270710B (en) Pose determining method, pose determining device, storage medium and electronic equipment
CN112270755B (en) Three-dimensional scene construction method and device, storage medium and electronic equipment
US20240029297A1 (en) Visual positioning method, storage medium and electronic device
US20210274358A1 (en) Method, apparatus and computer program for performing three dimensional radio model construction
CN112927271B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN109063549B (en) High-resolution aerial video moving target detection method based on deep neural network
CN112288816B (en) Pose optimization method, pose optimization device, storage medium and electronic equipment
CN113936085B (en) Three-dimensional reconstruction method and device
CN112927362A (en) Map reconstruction method and device, computer readable medium and electronic device
CN113313832B (en) Semantic generation method and device of three-dimensional model, storage medium and electronic equipment
CN113436270A (en) Sensor calibration method and device, electronic equipment and storage medium
CN112927363A (en) Voxel map construction method and device, computer readable medium and electronic equipment
CN111784776A (en) Visual positioning method and device, computer readable medium and electronic equipment
CN112116655A (en) Method and device for determining position information of image of target object
CN111832579A (en) Map interest point data processing method and device, electronic equipment and readable medium
CN114241039A (en) Map data processing method and device, storage medium and electronic equipment
CN113269823A (en) Depth data acquisition method and device, storage medium and electronic equipment
KR102249381B1 (en) System for generating spatial information of mobile device using 3D image information and method therefor
CN114556425A (en) Positioning method, positioning device, unmanned aerial vehicle and storage medium
CN112598732A (en) Target equipment positioning method, map construction method and device, medium and equipment
CN114944015A (en) Image processing method and device, electronic equipment and storage medium
CN113537194A (en) Illumination estimation method, illumination estimation device, storage medium, and electronic apparatus
CN112381828B (en) Positioning method, device, medium and equipment based on semantic and depth information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination