CN112288817B - Three-dimensional reconstruction processing method and device based on image - Google Patents


Info

Publication number
CN112288817B
Authority
CN
China
Prior art keywords
image, matching, images, subsequent, sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011293208.9A
Other languages
Chinese (zh)
Other versions
CN112288817A
Inventor
宁海宽
李姬俊男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202011293208.9A priority Critical patent/CN112288817B/en
Publication of CN112288817A publication Critical patent/CN112288817A/en
Application granted granted Critical
Publication of CN112288817B publication Critical patent/CN112288817B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to an image-based three-dimensional reconstruction method and device, a computer-readable medium, and an electronic device. The method comprises the following steps: acquiring an image sequence and positioning information; selecting, for a first image in the image sequence, a plurality of consecutive subsequent frames, performing feature-point matching between the first image and each subsequent image to screen the subsequent images that match the first image, generating a first image matching pair, and establishing a matching relationship between the first image and the subsequent images; screening, using the positioning information, a second image set whose positions match that of the first image, performing feature-point matching between the first image and each image in the second image set to screen a second image that matches the first image, generating a second image matching pair, and establishing a matching relationship between the first image and the second image; constructing a global matching relationship based on the matching relationships of the first and second image matching pairs corresponding to the first image; and performing three-dimensional reconstruction of the image sequence based on the global matching relationship.

Description

Three-dimensional reconstruction processing method and device based on image
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image-based three-dimensional reconstruction method, an image-based three-dimensional reconstruction apparatus, a computer-readable medium, and an electronic device.
Background
In the field of computer vision, three-dimensional reconstruction is an important research topic. Commonly used three-dimensional reconstruction algorithms include SFM (Structure from Motion) techniques, DynamicFusion, BundleFusion, and the like. The related art mostly matches and screens images based on image features, and therefore relies on the robustness of image feature descriptors. For locally similar textures, however, image features cannot effectively handle scene reconstruction. For example, when the same signboard or logo appears in different places, a false match may occur; that is, pictures that do not belong to the same place are associated with each other, which can seriously reduce the accuracy of three-dimensional reconstruction and even cause map construction to fail.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides an image-based three-dimensional reconstruction method, an image-based three-dimensional reconstruction apparatus, a computer-readable medium, and an electronic device, which can effectively avoid image mismatching and improve the accuracy of map reconstruction.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the present disclosure, there is provided an image-based three-dimensional reconstruction method, comprising:
Acquiring an image sequence and positioning information of each image;
Selecting a first image and a plurality of consecutive subsequent frames after the first image from the image sequence, performing feature-point matching between the first image and each subsequent image to screen the subsequent images that match the first image, generating a first image matching pair, and establishing a matching relationship between the first image and the subsequent images;
Screening, using the positioning information, a second image set in the image sequence whose positions match that of the first image, performing feature-point matching between the first image and each image in the second image set to screen a second image that matches the first image, generating a second image matching pair, and establishing a matching relationship between the first image and the second image;
Constructing a global matching relationship based on the matching relationships of the first image matching pair and the second image matching pair corresponding to the first image; and performing three-dimensional reconstruction of the image sequence based on the global matching relationship.
According to a second aspect of the present disclosure, there is provided an image-based three-dimensional reconstruction apparatus including:
the data acquisition module, used for acquiring an image sequence and the positioning information of each image;
The first image matching module, used for selecting a first image and a plurality of consecutive subsequent frames of the first image from the image sequence, performing feature-point matching between the first image and each subsequent image to screen the subsequent images that match the first image, generating a first image matching pair, and establishing a matching relationship between the first image and the subsequent images;
The second image matching module, used for screening, with the positioning information, a second image set in the image sequence whose positions match that of the first image, performing feature-point matching between the first image and each image in the second image set to screen a second image that matches the first image, generating a second image matching pair, and establishing a matching relationship between the first image and the second image;
The reconstruction module, used for constructing a global matching relationship based on the matching relationships of the first image matching pair and the second image matching pair corresponding to the first image, and performing three-dimensional reconstruction of the image sequence based on the global matching relationship.
According to a third aspect of the present disclosure, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the above-described image-based three-dimensional reconstruction method.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising:
One or more processors;
And a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image-based three-dimensional reconstruction method described above.
The image-based three-dimensional reconstruction method provided by the embodiments of the present disclosure first collects an image sequence and records the positioning information of each image at capture time; matching images are then screened using the consecutive subsequent frames after each image, and matching relationships are established. After those relationships are established, the positioning information corresponding to each image is used to screen position-matched images a second time, and the feature matching relationships between the position-matched images are computed, so that loop closure detection can be realized from the position information. Non-serialized matching is performed only for images near the same spatial position, which greatly reduces erroneous loop-closure matches and improves mapping accuracy and robustness.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
FIG. 1 schematically illustrates a flow diagram of an image-based three-dimensional reconstruction method in an exemplary embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow diagram of an image acquisition method in an exemplary embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow diagram of an image matching method in an exemplary embodiment of the present disclosure;
FIG. 4 schematically illustrates a composition diagram of an image-based three-dimensional reconstruction apparatus in an exemplary embodiment of the present disclosure;
fig. 5 schematically illustrates a structural diagram of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
In the related art of computer vision, SFM techniques are generally used to recover the spatial structure of a three-dimensional environment. A conventional SFM pipeline generally performs, in order: feature extraction and matching, computation of initial matching pairs, triangulation of point clouds, bundle adjustment, and the repeated addition of new image frames with further bundle adjustment under a chosen strategy. For serialized image input, two methods are common in the feature matching stage: serialized matching and global brute-force matching. Serialized matching cannot form loop closures in a large-scale environment because the accumulated error becomes too large. To keep the final map consistent with the real environment at large scale, brute-force search is often used to compute feature matches over all images, adding loop-closure image pairs and reducing accumulated error; but this is computationally expensive and time-consuming. Moreover, existing methods typically perform matching and screening based on image features and therefore rely on the robustness of image feature descriptors; for locally similar textures, image features cannot effectively handle scene reconstruction. When the same signboard or logo appears in different places, false matches can occur, i.e., pictures that do not belong to the same place are associated with each other, which seriously reduces the accuracy of three-dimensional reconstruction and can even cause map construction to fail.
In view of the foregoing drawbacks and deficiencies of the prior art, an image-based three-dimensional reconstruction method is provided in the present exemplary embodiment. Referring to fig. 1, the image-based three-dimensional reconstruction method described above may include the steps of:
s11, acquiring an image sequence and acquiring positioning information of each image;
s12, selecting a first image and continuous multi-frame subsequent images of the first image in the image sequence, performing feature point matching on the first image and each subsequent image to screen the subsequent images matched with the first image, generating a first image matching pair and establishing a matching relationship between the first image and the subsequent images; and
S13, screening a second image set matched with the first image position in the image sequence by using the positioning information, performing feature point matching on each image in the first image and the second image set to screen a second image matched with the first image, generating a second image matching pair, and establishing a matching relationship between the first image and the second image;
s14, constructing a global matching relationship based on the image matching relationship of the first image matching pair and the second image matching pair corresponding to the first image; and carrying out three-dimensional reconstruction on the image sequence based on the global matching relation.
In the three-dimensional reconstruction method provided by the present exemplary embodiment, on the one hand, sequential matching is performed: the consecutive subsequent frames after each image are used to screen matching images, and matching relationships are established. On the other hand, after those relationships are established, the positioning information corresponding to each image is used to screen position-matched images a second time, and the feature matching relationships between the position-matched images are computed, so that loop closure detection can be realized from the position information. This avoids loop closure detection based purely on image features in the traditional sense: non-serialized matching, i.e., loop closure detection, is performed only for images near the same spatial position. Erroneous loop-closure matches are thereby greatly reduced, and mapping accuracy and robustness are improved, especially for scenes where image features are unreliable (including but not limited to repeated textures, illumination changes, and weak textures).
Hereinafter, each step of the image-based three-dimensional reconstruction method in the present exemplary embodiment will be described in more detail with reference to the accompanying drawings and examples.
In step S11, a sequence of images is acquired, and positioning information for each image is acquired.
In this example embodiment, the method described above may be applied to a terminal device with a shooting function, for example, an intelligent terminal device such as a mobile phone configured with a rear camera, a tablet computer, and the like; for example, the method can be applied to the acquisition of images and indoor positioning information of indoor environments. Specifically, referring to fig. 2, the step S11 may include:
Step S111, in response to an image acquisition instruction, activating a monocular camera to acquire RGB images; and
Step S112, invoking an ultra-wideband (UWB) driver to acquire the position at the time each RGB image is captured, and configuring that position as the positioning information of the RGB image.
For example, the user may trigger an image acquisition instruction in the interactive interface of an application on the terminal device. After the application obtains the user's image acquisition instruction, it can invoke and activate the monocular camera through the instruction interface, so that the monocular camera captures RGB images of the current indoor environment or scene according to certain rules, yielding the corresponding image sequence. For example, the capture rules may cover the moving speed of the terminal device, the image capture frequency, the shooting angle, and specific shooting parameters.
In addition, when the monocular camera is invoked to collect RGB images, an ultra-wideband (UWB) driver may be invoked synchronously according to the image acquisition instruction, so that UWB positioning information is collected at the same time as the images. For example, a UWB chip may be mounted in the terminal device; the user may pre-configure the UWB driver to acquire positioning information at the same frequency as image acquisition, or the acquisition frequency may be configured as required in the UWB driver. Specifically, each collected positioning sample may be marked with its acquisition time.
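As an illustrative sketch only (not the patent's actual driver interface), associating each captured image with the UWB fix nearest in time could look like the following; the timestamp representation and helper name are assumptions:

```python
# Sketch: tag each image with the UWB position fix closest in capture time.
# Hypothetical data layout: sorted timestamps in seconds, fixes as
# (timestamp, (x, y, z)) tuples. Not the patent's actual driver API.
import bisect

def tag_images_with_positions(image_times, uwb_fixes):
    """Return one (x, y, z) position per image: the fix nearest in time."""
    fix_times = [t for t, _ in uwb_fixes]
    tagged = []
    for t in image_times:
        i = bisect.bisect_left(fix_times, t)
        # Compare the two neighbouring fixes and keep the closer one.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(fix_times)]
        best = min(candidates, key=lambda j: abs(fix_times[j] - t))
        tagged.append(uwb_fixes[best][1])
    return tagged
```

In practice the driver could equally be configured to sample at the image capture frequency, in which case fixes and frames align one-to-one and no nearest-time search is needed.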
In step S12, a first image and a plurality of consecutive frames of subsequent images subsequent to the first image are selected from the image sequence, and feature point matching is performed on the first image and each of the subsequent images to screen the subsequent images matched with the first image, so as to generate a first image matching pair and establish a matching relationship between the first image and the subsequent images.
In this example embodiment, after the images and positioning information have been acquired, the image sequence may be processed offline. Specifically, each frame of the sequence may be selected in order as the first image, and the k consecutive frames after it selected as its subsequent images. Here k is a positive integer and may be configured, for example, to any value from 5 to 11; the present disclosure does not limit k to a specific value. Feature matching may then be performed between the first image and the corresponding subsequent image set, so that the matching relationships among the images in the sequence are computed in a serialized manner.
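The serialized candidate selection described above, pairing each frame with its next k frames, can be sketched as follows (the function name and default window size are illustrative, not from the patent):

```python
# Sketch: enumerate serialized matching candidates, i.e. each frame index
# paired with the indices of its next k frames, truncated at sequence end.
def sequential_candidates(num_images, k=6):
    pairs = []
    for i in range(num_images):
        for j in range(i + 1, min(i + 1 + k, num_images)):
            pairs.append((i, j))
    return pairs
```

Each emitted pair is then passed to the feature-matching stage of steps S121 to S124 below.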
Specifically, in step S12, the feature point matching is performed on the first image and the subsequent image to screen the subsequent image matched with the first image, and a matching relationship between the first image and the subsequent image is established, which may specifically include, as shown in fig. 3:
step S121, extracting features of the first image and the subsequent image to obtain two-dimensional feature points and feature descriptors corresponding to the two-dimensional feature points;
Step S122, calculating the distances between feature points of the first image and the subsequent image using the feature descriptors, and establishing feature-point matching pairs between the two images from the feature points whose distance is smaller than a preset threshold;
Step S123, constructing a corresponding fundamental matrix from the first image and the subsequent image, and screening the feature-point matching pairs using a random sample consensus algorithm based on the fundamental matrix;
Step S124, if the number of feature-point matches after screening is greater than a preset threshold, determining that the first image matches the subsequent image and establishing a matching relationship between the first image and the subsequent image.
In particular, for a sequence of images, serialized matching may start from the first frame. For example, the first frame in the image sequence is initially taken as the first image, and the 6 consecutive frames after it are selected as subsequent images. For the selected first image and these subsequent frames, the matching relationship between the first image and each subsequent frame can be computed respectively.
Specifically, feature points may be extracted from the first image and each subsequent image to compute the corresponding two-dimensional feature points and their descriptor information. The distance between feature points of the two images is then computed from the descriptors; if the distance between two feature points is smaller than the preset threshold, the two feature points can be determined to form a feature-point matching pair. In this way, multiple two-dimensional feature-point matching pairs can be obtained between each pair of pictures.
After all feature-point matching pairs between the two images are obtained, they are screened with the RANSAC (random sample consensus) algorithm, and mismatched pairs are deleted. Specifically, a fundamental matrix may first be estimated from the feature-point information of the two images, and RANSAC filtering performed on the matching pairs between the images using that fundamental matrix. If the number of feature-point matching pairs remaining after screening is still greater than a preset threshold, the two images are determined to match and form an image matching pair; the matching relationship of the two images is stored in a database, and the screened feature-point matching pairs are recorded. The random sample consensus algorithm can screen the feature-point matches by conventional means, and the specific process is not repeated here.
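A minimal sketch of steps S121 to S124, with two simplifying assumptions: the RANSAC fundamental-matrix stage is omitted, and descriptors are toy float tuples rather than real SIFT vectors:

```python
# Sketch of steps S121-S124 (RANSAC stage omitted): match descriptors by
# nearest neighbour under a distance threshold, then accept the image pair
# only if enough matches survive. A real system would use SIFT descriptors
# and insert fundamental-matrix RANSAC filtering before the count test.
import math

def match_descriptors(desc_a, desc_b, dist_thresh):
    """Return (i, j) index pairs whose descriptor distance is below dist_thresh."""
    pairs = []
    for i, da in enumerate(desc_a):
        # Nearest neighbour of da among desc_b.
        j, d = min(
            ((j, math.dist(da, db)) for j, db in enumerate(desc_b)),
            key=lambda t: t[1],
        )
        if d < dist_thresh:
            pairs.append((i, j))
    return pairs

def images_match(desc_a, desc_b, dist_thresh=0.5, min_matches=2):
    """Step S124: the pair matches if enough feature matches survive."""
    return len(match_descriptors(desc_a, desc_b, dist_thresh)) >= min_matches
```

The threshold values are placeholders; in practice both the descriptor distance threshold and the minimum match count would be tuned to the feature type used.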
Traversing the image sequence in this way completes the first matching of each image in the sequence and yields the image matching results and matching relationships corresponding to each image.
Alternatively, in some exemplary embodiments of the present disclosure, feature extraction may be performed on all images in the sequence once image acquisition is complete, and the two-dimensional feature points and corresponding feature descriptors of each image computed and stored in a preset database. For example, feature extraction may use the SIFT (scale-invariant feature transform) algorithm.
In step S13, a second image set whose positions match that of the first image is screened from the image sequence using the positioning information; feature-point matching is performed between the first image and each image in the second image set to screen a second image that matches the first image, a second image matching pair is generated, and a matching relationship between the first image and the second image is established.
In this exemplary embodiment, after the image sequence has been matched once using feature information, it may be matched a second time using positional relationships. That is, the positioning information may be used to match images within the sequence. Specifically, for the first image, the corresponding second image set may be screened using the positioning information either while the first (feature-based) matching is performed or after it has completed.
In this example embodiment, specifically, screening the second image set matching the first image position in the image sequence using the positioning information may include:
Step S131, respectively calculating the distance between the positioning information of the first image and the positioning information of other images in the image sequence;
Step S132, if the distance is smaller than a preset distance threshold, adding the corresponding image to the second image set.
Specifically, pictures acquired at the same position have a matching relationship even if they were acquired at different times during the overall sampling period. For the selected first image, its positioning information P can be read from the database together with the positioning information Q of each other image in the sequence, and the Euclidean distance between P and Q computed respectively.
If the distance between two frames is smaller than the preset distance threshold, the two frames can be considered to have been collected at the same position; they may form an image matching pair, and the other image is added to the second image set corresponding to the first image. If the distance between the two frames is large, no matching relationship is considered to exist. Repeating these steps and traversing the image sequence yields the second image set corresponding to each image.
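Steps S131 and S132 above reduce to a simple distance filter; a sketch under the assumption of 2-D positions and a hypothetical helper name:

```python
# Sketch of steps S131-S132: collect every other image whose position lies
# within a distance threshold of the query image. Positions are 2-D tuples
# for brevity; the threshold value would be tuned to the UWB accuracy.
import math

def second_image_set(positions, query_idx, dist_thresh):
    """positions: one (x, y) per image. Returns indices of images
    (excluding the query itself) closer than dist_thresh to the query."""
    q = positions[query_idx]
    return [i for i, p in enumerate(positions)
            if i != query_idx and math.dist(p, q) < dist_thresh]
```

Each index returned here is then re-verified by feature matching (steps S121 to S124) before a loop-closure pair is accepted.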
In this exemplary embodiment, after the second image set corresponding to each image is obtained, the method of steps S121 to S124 may be used to perform feature matching again, using the feature information, between the first image and each image in the second image set, and to compute the corresponding matching relationship: first the feature-point matching pairs between the two images are computed, and then they are screened to obtain the image-pair matching result.
If a matching relationship is established by this re-matching, a loop closure is considered to have been found. The first and second matchings are thus completed.
Alternatively, in other exemplary embodiments of the present disclosure, the second matching may be performed using the images preceding the first image. Specifically, after generating the first image matching pair and establishing the matching relationship between the first image and the subsequent images, the method further includes:
Step S21, selecting each frame preceding the first image, and screening, using the positioning information, the preceding images whose distance to the first image is smaller than a preset distance threshold to construct a preceding image set; performing feature-point matching between the first image and each image in the preceding image set to screen the preceding images that match the first image, generating a third image matching pair, and establishing a matching relationship between the first image and the preceding images;
Step S22, traversing the image sequence to obtain the matching relationships of the preceding and/or subsequent images corresponding to each image so as to construct a global matching relationship, and performing three-dimensional reconstruction of the image sequence based on the global matching relationship.
Specifically, for a first image selected in the image sequence, each frame preceding it in the sequence may be selected, and the Euclidean distance between the first image and each such preceding frame computed from the position information. The first frame of the image sequence is excluded here, since it has no preceding frames.
Specifically, for each frame and all of its corresponding preceding images, the Euclidean distance between their positions can be computed from the positioning information, and a first screening performed by comparing that distance with the preset distance threshold, yielding third image matching pairs that may have a matching relationship. The feature information of each image is then used for a second screening of these preliminarily matched third image matching pairs: as in steps S121 to S124, feature matching is performed again between the images of each third image matching pair and the corresponding matching relationship is computed; first the feature-point matching pairs between the two images are computed, and then they are screened to obtain the image-pair matching result. If the matching succeeds and a matching relationship is established, a loop closure is considered to have been found. The first and second matchings are thus completed.
For each frame of image in the image sequence, a first matching is performed against the consecutive multi-frame images following it, and a second matching is performed against the images preceding it. This effectively reduces the resource consumption of the image matching process while preserving, as far as possible, the accuracy of the matching relationships and the effectiveness of loop-closure matching.
In step S14, a global matching relationship is constructed based on the image matching relationship of the first image matching pair and the second image matching pair corresponding to the first image; and carrying out three-dimensional reconstruction on the image sequence based on the global matching relation.
In this exemplary embodiment, the image sequence may be traversed using the methods of steps S12 and S13 to obtain the matching relationship corresponding to each frame of image. Based on these matching relationships, a global matching relationship of the image sequence can be constructed, and three-dimensional reconstruction can be performed using the global matching relationship together with the image sequence. For example, an SfM (Structure from Motion) algorithm may be used for three-dimensional reconstruction. Generally, the input of the SfM algorithm is a two-dimensional image sequence, and the various camera parameters can be deduced through the global matching relationship. The SfM pipeline may proceed as follows: first, focal length information is extracted from each picture (needed later to initialize bundle adjustment, BA); then image features are extracted with a feature extraction algorithm such as SIFT, and a kd-tree model is used to compute the Euclidean distance between feature points of two pictures for feature point matching, so as to find the image pairs whose number of matched feature points meets the requirement. For each image matching pair, the epipolar geometry is computed, the fundamental matrix (F matrix) is estimated, and the matching pairs are refined through RANSAC optimization. If a feature point can be passed along a chain of such matching pairs, being detected in each image of the chain, it forms a track. The structure-from-motion stage then begins; the key first step is to select a good image pair to initialize the whole BA process. A first BA is performed on the two pictures selected for initialization, then new pictures are added in a loop and BA is run again, until no suitable pictures remain to be added. This yields the estimated camera parameters and the scene geometry, i.e., a sparse 3D point cloud.
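The feature-matching front end of that pipeline can be illustrated with the following sketch. It is not the patent's implementation: a brute-force distance matrix stands in for the kd-tree search, arbitrary vectors stand in for SIFT descriptors, and the mutual-nearest check is one common screening choice, added here for illustration.

```python
import numpy as np

def match_descriptors(desc1, desc2, max_dist):
    """Euclidean nearest-neighbour matching of feature descriptors
    between two pictures. Returns (i, j) index pairs that are mutual
    nearest neighbours with descriptor distance below `max_dist`."""
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    nn12 = d.argmin(axis=1)  # best match in image 2 for each point of image 1
    nn21 = d.argmin(axis=0)  # best match in image 1 for each point of image 2
    return [(i, int(nn12[i])) for i in range(len(desc1))
            if nn21[nn12[i]] == i and d[i, nn12[i]] < max_dist]
```

An image pair whose list of surviving matches is long enough would then go on to fundamental-matrix estimation and RANSAC refinement.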
The bundle adjustment between two pictures uses the sba software package, a sparse bundle adjustment implementation; sparse bundle adjustment is a nonlinear least-squares algorithm for optimizing an objective function.
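The objective that sparse bundle adjustment minimises is the total squared reprojection error. A toy residual function, with our own naming, a plain pinhole model, and no lens distortion, might look like:

```python
import numpy as np

def reprojection_residuals(points3d, poses, observations, K):
    """Residual vector of the bundle-adjustment objective: for each
    observation (cam_idx, pt_idx, uv), project the 3-D point with the
    camera pose (R, t) and intrinsic matrix K, and return the pixel
    error; a BA solver such as sba minimises the squared norm of this
    vector over all camera poses and 3-D points."""
    res = []
    for cam, pt, uv in observations:
        R, t = poses[cam]
        p = K @ (R @ points3d[pt] + t)  # pinhole projection
        res.append(p[:2] / p[2] - uv)   # reprojection error in pixels
    return np.concatenate(res)
```

In a real solver this residual is fed to a sparse Levenberg-Marquardt routine, exploiting the fact that each observation touches only one camera and one point.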
Of course, in other exemplary embodiments of the present disclosure, after the global matching relationship is constructed, three-dimensional reconstruction may also be performed using other algorithms, for example, deep-learning-based depth estimation and structure reconstruction algorithms.
Based on the foregoing, in other exemplary embodiments of the present disclosure, when acquiring the image sequence, the method may further include:
Step S31, analyzing the matched characteristic points between the images to obtain attitude information and three-dimensional coordinates of the characteristic points;
and step S32, carrying out posture correction on the camera based on the matched posture information corresponding to the image and the three-dimensional coordinates of the feature points.
Specifically, after the global matching relationship is constructed, or during acquisition of the image sequence, feature point matching can be performed between any two adjacent images in the image sequence, and the matched feature points can be analyzed to calculate the position and posture information of the camera and to solve the coordinates, in three-dimensional space, of the two-dimensional feature points in the image. Based on this information, correction information for the camera posture can be generated, so that the camera posture is corrected in real time before subsequent images are captured, thereby ensuring the consistency of the posture information of subsequently captured images.
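Solving the three-dimensional coordinates of a matched two-dimensional feature point from two views can be sketched with linear (DLT) triangulation. This is one standard technique, shown here as an illustration; the function name is ours and the patent does not prescribe a particular triangulation method.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation: solve for the 3-D coordinates of a
    feature point from its pixel coordinates (uv1, uv2) in two views,
    given the 3x4 projection matrices P1 and P2 of the two cameras."""
    A = np.stack([uv1[0] * P1[2] - P1[0],
                  uv1[1] * P1[2] - P1[1],
                  uv2[0] * P2[2] - P2[0],
                  uv2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]          # null vector of A, in homogeneous coordinates
    return X[:3] / X[3]
```

The recovered 3-D points, together with the estimated camera pose, supply the information from which the posture correction of step S32 can be derived.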
Based on the foregoing, in other exemplary embodiments of the present disclosure, after building the global matching relationship, the method may further include: and verifying the global matching relationship according to the acquisition time of each image so as to delete the wrong matching relationship.
Specifically, when RGB images are acquired with a monocular camera, the acquisition time of each frame of image may be recorded. After the global matching relationship is obtained, each image matching pair can be re-checked according to the image acquisition times: if the acquisition times are close, whether the positioning information is also close can be checked again. In this way, the image matching pairs in the global matching result can be verified against both the acquisition time and the position information of the images.
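That verification rule can be sketched as a simple filter over the global matching result. The names and thresholds below are illustrative, not the patent's; the logic is that two frames captured close together in time should also be close in space, otherwise the pair is an erroneous matching relationship.

```python
import numpy as np

def verify_match_pairs(pairs, timestamps, positions, time_eps, dist_thresh):
    """Re-check the global matching result: if two matched images were
    acquired at close times, their positioning information must also be
    close; otherwise the pair is treated as a mismatch and deleted."""
    kept = []
    for i, j in pairs:
        close_in_time = abs(timestamps[i] - timestamps[j]) < time_eps
        far_in_space = np.linalg.norm(positions[i] - positions[j]) >= dist_thresh
        if close_in_time and far_in_space:
            continue  # erroneous matching relationship: delete it
        kept.append((i, j))
    return kept
```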
The image-based three-dimensional reconstruction method provided by the embodiments of the present disclosure can be applied to offline three-dimensional reconstruction processes, for example in indoor and outdoor positioning and navigation solutions such as AR navigation. For scenes with unreliable image features, such as scenes with many repeated textures, illumination changes, or weak textures, the map precision can be greatly improved. Moreover, since the present scheme fully combines the positioning information of the images in the second matching process, image mismatching is avoided and the success rate of map building can be greatly increased. Because the loop-closure matching images are determined in the data acquisition stage, global brute-force matching during image matching retrieval is avoided, and the computation time can be greatly reduced. Compared with traditional methods, the present method makes full use of other sensors to acquire coarse global position information to directly determine the position of the loop-closure frame, obtains a more accurate global image connectivity graph, and provides good input for the subsequent steps of three-dimensional reconstruction. The scheme is simple and convenient to operate, can serve as a supplement to existing three-dimensional reconstruction schemes, and can be combined with other loop-closure screening methods, giving it a considerable advantage over alternative approaches.
It is noted that the above-described figures are only schematic illustrations of processes involved in a method according to an exemplary embodiment of the invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Further, referring to fig. 4, in this exemplary embodiment, there is further provided an image-based three-dimensional reconstruction apparatus 40, including: a data acquisition module 401, a first image matching module 402, a second image matching module 403 and a reconstruction module 404. Wherein,
The data acquisition module 401 may be configured to acquire a sequence of images and to acquire positioning information for each image.
The first image matching module 402 may be configured to select, from the image sequence, a first image and a plurality of consecutive subsequent images following the first image, perform feature point matching between the first image and each subsequent image to screen out the subsequent images matched with the first image, generate a first image matching pair, and establish a matching relationship between the first image and the subsequent images.
The second image matching module 403 may be configured to screen, from the image sequence and using the positioning information, a second image set whose positions match the position of the first image, perform feature point matching between the first image and each image in the second image set to screen out a second image matched with the first image, generate a second image matching pair, and establish a matching relationship between the first image and the second image.
The reconstructing module 404 may be configured to construct a global matching relationship based on the image matching relationship of the first image matching pair and the second image matching pair corresponding to the first image; and carrying out three-dimensional reconstruction on the image sequence based on the global matching relation.
In one example of the present disclosure, the data acquisition module 401 may include: an image acquisition unit and a positioning information acquisition unit (not shown in the figure). Wherein,
The image acquisition unit can be used for responding to the image acquisition instruction and activating the monocular camera to acquire RGB images.
The positioning information acquisition unit may be used to call an ultra-wideband driver to acquire the position information at the time the RGB image is acquired, and to configure the position information as the positioning information of the RGB image.
In one example of the present disclosure, the first image matching module 402 may be further configured to: perform feature extraction on the first image and the subsequent image to obtain two-dimensional feature points and the feature descriptors corresponding to the two-dimensional feature points; calculate the distance between feature points of the first image and the subsequent image using the feature descriptors, and establish feature point matching pairs between the first image and the subsequent image from the feature points whose distance is smaller than a preset threshold; construct a corresponding fundamental matrix based on the first image and the subsequent image, and screen the feature point matching pairs using a random sample consensus (RANSAC) algorithm based on the fundamental matrix; and, if the number of feature point matching pairs after screening is larger than a preset threshold, determine that the first image matches the subsequent image and establish a matching relationship between the first image and the subsequent image.
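The random-sample-consensus screening this module relies on can be shown generically. The skeleton below is an illustration under stated assumptions: in the module above the model would be the fundamental matrix (which needs a minimal sample of seven or eight correspondences), but any model with a fit function and a residual function fits the same loop; all names are ours.

```python
import numpy as np

def ransac_filter(matches, fit_model, residual, n_min, thresh, iters=200, seed=0):
    """Generic RANSAC screening: repeatedly fit a model on a minimal
    random sample of the matches and keep the largest inlier set, i.e.
    the matches whose residual against the model is below `thresh`."""
    rng = np.random.default_rng(seed)
    best = []
    for _ in range(iters):
        sample = rng.choice(len(matches), size=n_min, replace=False)
        model = fit_model([matches[i] for i in sample])
        inliers = [m for m in matches if residual(model, m) < thresh]
        if len(inliers) > len(best):
            best = inliers
    return best
```

With the fundamental matrix as the model and the epipolar (e.g. Sampson) distance as the residual, the surviving inliers are the screened feature point matching pairs whose count is compared against the preset threshold.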
In one example of the present disclosure, the second image matching module 403 may be further configured to calculate a distance between the positioning information of the first image and the positioning information of other images in the image sequence, respectively; and if the distance is smaller than a preset distance threshold value, adding the corresponding image to the second image set.
In one example of the present disclosure, the apparatus 40 may further include: a third matching module (not shown).
The third matching module may be configured to, after a first image matching pair is generated and the matching relationship between the first image and the subsequent image is established, select each frame of image preceding the first image, and screen, using the positioning information, the preceding images whose distance from the first image is smaller than a preset distance threshold to construct a preceding image set; perform feature point matching between the first image and each preceding image in the preceding image set to screen out the preceding images matched with the first image, generate a third image matching pair, and establish a matching relationship between the first image and the preceding images; traverse the image sequence to obtain the matching relationships of the preceding images and/or subsequent images corresponding to each image so as to construct a global matching relationship; and perform three-dimensional reconstruction on the image sequence based on the global matching relationship.
In one example of the present disclosure, the apparatus 40 further includes: a first verification module (not shown).
The first verification module can be used for analyzing the matched characteristic points between the images to acquire the attitude information and the three-dimensional coordinates of the characteristic points when the images are acquired or after the global matching relationship is constructed; and correcting the posture of the camera based on the matched posture information corresponding to the image and the three-dimensional coordinates of the feature points.
In one example of the present disclosure, the apparatus 40 may further include: a second checking module (not shown in the figures).
The second verification module may be configured to verify the global matching relationship according to the acquisition time of each image after the global matching relationship is constructed, so as to delete the erroneous matching relationship.
The specific details of each module in the above-mentioned image-based three-dimensional reconstruction device are already described in detail in the corresponding image-based three-dimensional reconstruction method, so that they will not be described in detail herein.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Fig. 5 shows a schematic diagram of an electronic device suitable for use in implementing embodiments of the invention.
It should be noted that the electronic device 500 shown in fig. 5 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present disclosure.
As shown in fig. 5, the electronic apparatus 500 includes a central processing unit (Central Processing Unit, CPU) 501, which can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 502 or a program loaded from a storage portion 508 into a random access Memory (Random Access Memory, RAM) 503. In the RAM 503, various programs and data required for the system operation are also stored. The CPU 501, ROM502, and RAM 503 are connected to each other through a bus 504. An Input/Output (I/O) interface 505 is also connected to bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read therefrom is installed into the storage section 508 as needed.
In particular, according to embodiments of the present application, the processes described below with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 509, and/or installed from the removable media 511. When executed by a Central Processing Unit (CPU) 501, performs the various functions defined in the system of the present application.
Specifically, the electronic device may be an intelligent mobile terminal device such as a mobile phone, a tablet computer or a notebook computer. Or the electronic device may be an intelligent terminal device such as a desktop computer.
It should be noted that the computer readable medium shown in the embodiments of the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present invention may be implemented by software or by hardware, and the described units may also be provided in a processor, where in some cases the names of the units do not constitute a limitation on the units themselves.
It should be noted that, as another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by one such electronic device, cause the electronic device to implement the methods described in the above embodiments. For example, the electronic device may implement the steps shown in fig. 1.
Furthermore, the above-described drawings are only schematic illustrations of processes included in the method according to the exemplary embodiment of the present invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image-based three-dimensional reconstruction method, comprising:
Acquiring an image sequence and positioning information of each image;
Selecting a first image and a plurality of continuous frames of subsequent images behind the first image from the image sequence, performing feature point matching on the first image and each subsequent image to screen the subsequent images matched with the first image, generating a first image matching pair, and establishing a matching relationship between the first image and the subsequent images; and
Screening a second image set matched with the first image position in the image sequence by utilizing the positioning information, performing feature point matching on each image in the first image and the second image set to screen a second image matched with the first image, generating a second image matching pair, and establishing a matching relationship between the first image and the second image;
Constructing a global matching relationship based on the image matching relationship of the first image matching pair corresponding to the first image and the second image matching pair; and carrying out three-dimensional reconstruction on the image sequence based on the global matching relation.
2. The image-based three-dimensional reconstruction method according to claim 1, wherein the acquiring the image sequence and acquiring the positioning information of each image comprises:
in response to an image acquisition instruction, activating a monocular camera to acquire RGB images; and
And calling an ultra-wideband driving program to acquire the position information when the RGB image is acquired, and configuring the position information as the positioning information of the RGB image.
3. The method of claim 1, wherein the performing feature point matching on the first image and the subsequent image to filter the subsequent image matched with the first image and establish a matching relationship between the first image and the subsequent image comprises:
extracting features of the first image and the subsequent image to obtain two-dimensional feature points and feature descriptors corresponding to the two-dimensional feature points;
Calculating the distance between the feature points of the first image and the subsequent image by using the feature descriptors, and establishing feature point matching pairs between the first image and the subsequent image by using the feature points with the distance smaller than a preset threshold value;
constructing a corresponding basic matrix based on the first image and the subsequent image, and screening the feature point matching pairs by utilizing a random sampling consistency algorithm based on the basic matrix;
And if the number of the feature point matches after screening is larger than a preset threshold, judging that the first image is matched with the subsequent image and establishing a matching relationship between the first image and the subsequent image.
4. The image-based three-dimensional reconstruction method according to claim 1, wherein the screening the second image set matching the first image position in the image sequence using the positioning information comprises:
respectively calculating the distance between the positioning information of the first image and the positioning information of other images in the image sequence;
And if the distance is smaller than a preset distance threshold value, adding the corresponding image to the second image set.
5. The image-based three-dimensional reconstruction method according to claim 1 or 3, wherein after the generating a first image matching pair and establishing a matching relationship of the first image and the subsequent image, the method further comprises:
Selecting each frame of image preceding the first image, and screening, using the positioning information, the preceding images whose distance from the first image is smaller than a preset distance threshold to construct a preceding image set; performing feature point matching on the first image and each preceding image in the preceding image set to screen out the preceding images matched with the first image, generating a third image matching pair and establishing a matching relationship between the first image and the preceding images;
Traversing the image sequence to obtain the matching relation of the preceding images and/or the subsequent images corresponding to the images so as to construct a global matching relation; and carrying out three-dimensional reconstruction on the image sequence based on the global matching relation.
6. The image-based three-dimensional reconstruction method according to claim 1, wherein when the sequence of images is acquired, the method further comprises:
Analyzing the matched characteristic points between the images to obtain attitude information and three-dimensional coordinates of the characteristic points;
And correcting the posture of the camera based on the matched posture information corresponding to the image and the three-dimensional coordinates of the feature points.
7. The image-based three-dimensional reconstruction method according to claim 1, wherein after the constructing the global matching relationship, the method further comprises:
and verifying the global matching relationship according to the acquisition time of each image so as to delete the wrong matching relationship.
8. An image-based three-dimensional reconstruction apparatus, comprising:
the data acquisition module is used for acquiring an image sequence and acquiring positioning information of each image;
The first image matching module is used for selecting a first image and continuous multi-frame subsequent images of the first image in the image sequence, performing feature point matching on the first image and each subsequent image to screen the subsequent images matched with the first image, generating a first image matching pair and establishing a matching relationship between the first image and the subsequent images; and
The second image matching module is used for screening a second image set matched with the first image position in the image sequence by utilizing the positioning information, performing feature point matching on each image in the first image and the second image set to screen a second image matched with the first image, generating a second image matching pair and establishing a matching relationship between the first image and the second image;
The reconstruction module is used for constructing a global matching relationship based on the image matching relationship of the first image matching pair and the second image matching pair corresponding to the first image; and carrying out three-dimensional reconstruction on the image sequence based on the global matching relation.
9. A computer readable medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the image-based three-dimensional reconstruction method according to any one of claims 1 to 7.
10. An electronic device, comprising:
One or more processors;
storage means for storing one or more programs which when executed by the one or more processors cause the one or more processors to implement the image-based three-dimensional reconstruction method of any one of claims 1 to 7.
CN202011293208.9A 2020-11-18 2020-11-18 Three-dimensional reconstruction processing method and device based on image Active CN112288817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011293208.9A CN112288817B (en) 2020-11-18 2020-11-18 Three-dimensional reconstruction processing method and device based on image


Publications (2)

Publication Number Publication Date
CN112288817A CN112288817A (en) 2021-01-29
CN112288817B true CN112288817B (en) 2024-05-07

Family

ID=74399693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011293208.9A Active CN112288817B (en) 2020-11-18 2020-11-18 Three-dimensional reconstruction processing method and device based on image

Country Status (1)

Country Link
CN (1) CN112288817B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115333A (en) * 2023-02-27 2023-11-24 荣耀终端有限公司 Three-dimensional reconstruction method combined with IMU data

Citations (9)

Publication number Priority date Publication date Assignee Title
CN101650178A (en) * 2009-09-09 2010-02-17 中国人民解放军国防科学技术大学 Method for image matching guided by control feature point and optimal partial homography in three-dimensional reconstruction of sequence images
CN105825518A (en) * 2016-03-31 2016-08-03 西安电子科技大学 Sequence image rapid three-dimensional reconstruction method based on mobile platform shooting
CN110335319A (en) * 2019-06-26 2019-10-15 华中科技大学 Semantics-driven camera positioning and map reconstruction method and system
CN110361005A (en) * 2019-06-26 2019-10-22 深圳前海达闼云端智能科技有限公司 Positioning method, positioning device, readable storage medium and electronic equipment
CN111063021A (en) * 2019-11-21 2020-04-24 西北工业大学 Method and device for establishing three-dimensional reconstruction model of space moving target
CN111209978A (en) * 2020-04-20 2020-05-29 浙江欣奕华智能科技有限公司 Three-dimensional visual repositioning method and device, computing equipment and storage medium
CN111402413A (en) * 2020-06-04 2020-07-10 浙江欣奕华智能科技有限公司 Three-dimensional visual positioning method and device, computing equipment and storage medium
CN111433818A (en) * 2018-12-04 2020-07-17 深圳市大疆创新科技有限公司 Target scene three-dimensional reconstruction method and system and unmanned aerial vehicle
CN111815757A (en) * 2019-06-29 2020-10-23 浙江大学山东工业技术研究院 Three-dimensional reconstruction method for large component based on image sequence

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
KR101850027B1 (en) * 2011-12-08 2018-04-24 한국전자통신연구원 Real-time 3-dimension actual environment reconstruction apparatus and method
US20180315232A1 (en) * 2017-05-01 2018-11-01 Lockheed Martin Corporation Real-time incremental 3d reconstruction of sensor data
US10845818B2 (en) * 2018-07-30 2020-11-24 Toyota Research Institute, Inc. System and method for 3D scene reconstruction of agent operation sequences using low-level/high-level reasoning and parametric models


Also Published As

Publication number Publication date
CN112288817A (en) 2021-01-29

Similar Documents

Publication Publication Date Title
CN110070564B (en) Feature point matching method, device, equipment and storage medium
CN110766716A (en) Method and system for acquiring information of space unknown moving target
US20140198976A1 (en) Method and system for fast dense stereoscopic ranging
WO2021136386A1 (en) Data processing method, terminal, and server
CN111928842B (en) Monocular vision based SLAM positioning method and related device
CN115035235A (en) Three-dimensional reconstruction method and device
CN113674400A (en) Spectrum three-dimensional reconstruction method and system based on repositioning technology and storage medium
CN108492284B (en) Method and apparatus for determining perspective shape of image
EP3998582A1 (en) Three-dimensional model generation method and three-dimensional model generation device
WO2023015938A1 (en) Three-dimensional point detection method and apparatus, electronic device, and storage medium
CN112270748B (en) Three-dimensional reconstruction method and device based on image
CN114494383B Light field depth estimation method based on Richardson-Lucy iteration
CN111882655A (en) Method, apparatus, system, computer device and storage medium for three-dimensional reconstruction
CN111325828A Three-dimensional face acquisition method and device based on trinocular camera
CN112288817B (en) Three-dimensional reconstruction processing method and device based on image
CN117274605B (en) Method and device for extracting water area outline from photo shot by unmanned aerial vehicle
CN113793370B (en) Three-dimensional point cloud registration method and device, electronic equipment and readable medium
CN112258647B (en) Map reconstruction method and device, computer readable medium and electronic equipment
WO2024082602A1 (en) End-to-end visual odometry method and apparatus
CN117132737A (en) Three-dimensional building model construction method, system and equipment
CN112907657A (en) Robot repositioning method, device, equipment and storage medium
CN116843754A (en) Visual positioning method and system based on multi-feature fusion
WO2022174603A1 (en) Pose prediction method, pose prediction apparatus, and robot
CN116051736A (en) Three-dimensional reconstruction method, device, edge equipment and storage medium
CN115393423A (en) Target detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant