CN116858215A - AR navigation map generation method and device - Google Patents

AR navigation map generation method and device

Info

Publication number
CN116858215A
Authority
CN
China
Prior art keywords
target
reconstruction model
three-dimensional
point cloud
three-dimensional reconstruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311132212.0A
Other languages
Chinese (zh)
Other versions
CN116858215B (en)
Inventor
胡庆武
欧文武
王顺利
艾明耀
王少华
赵鹏程
李加元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU
Priority to CN202311132212.0A
Publication of CN116858215A
Application granted
Publication of CN116858215B
Active legal-status Current
Anticipated expiration legal-status

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3841 Data obtained from two or more sources, e.g. probe vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/36 Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging

Abstract

The application provides an AR navigation map generation method and device, including the following steps: acquiring a panoramic video of a target building and performing frame extraction on it to obtain an AR navigation image library; acquiring three-dimensional point cloud data of the target building according to a preset number of sites and the laser radars set at them; constructing a road network from the walking routes in the target building; performing feature matching on the fixed-view projection photos in the AR navigation image library and performing three-dimensional reconstruction on the matching result to obtain an initial three-dimensional reconstruction model; performing scale transformation on the initial three-dimensional reconstruction model according to the three-dimensional point cloud data to obtain a target three-dimensional reconstruction model; and generating a target building AR navigation map from the road network and the target three-dimensional reconstruction model. By positioning with the panoramic video and the three-dimensional point cloud data of the museum, the method improves positioning accuracy. Processing the three-dimensional reconstruction model, the three-dimensional point cloud data, and the walking routes to generate the target building AR navigation map makes it possible to display a three-dimensional map of the museum.

Description

AR navigation map generation method and device
Technical Field
The application relates to the technical field of AR navigation maps, in particular to an AR navigation map generation method and device.
Background
With the rapid development of computer technology, AR technology is being used ever more widely. AR is a technology that "seamlessly" integrates real-world and virtual-world information, overlaying perceivable digital imagery onto the real world.
In recent years, urban construction has developed rapidly, large indoor scenes have become increasingly common, and indoor navigation has grown correspondingly important. Most commonly used navigation technologies are based on GPS positioning, which cannot be used indoors because the receivable signal is too weak. Mainstream indoor navigation currently relies on WIFI positioning, Bluetooth positioning, and the like. Because these approaches depend on WIFI or Bluetooth signals, their positioning accuracy is poor; moreover, they involve no mapping process and therefore cannot display a three-dimensional map.
Therefore, an AR navigation map generation method and device are urgently needed to solve the technical problems of the prior art: commonly used navigation technologies rely on WIFI or Bluetooth signal positioning, which yields poor positioning accuracy, and they lack a mapping process, so a three-dimensional map cannot be displayed.
Disclosure of Invention
In view of the foregoing, it is necessary to provide an AR navigation map generation method and device to solve the technical problems that navigation technologies commonly used in the prior art rely on WIFI or Bluetooth signal positioning, resulting in poor positioning accuracy, and lack a mapping process, so that a three-dimensional map cannot be displayed.
In one aspect, the present application provides an AR navigation map generation method, including:
acquiring a panoramic video of a target building, and performing frame extraction on the panoramic video to obtain an AR navigation image library, where the panoramic video covers a preset number of sites and the AR navigation image library contains fixed-view projection photos;
acquiring three-dimensional point cloud data of the target building according to the preset number of sites and the set laser radars, and constructing a road network from the walking routes in the target building;
performing feature matching on the fixed-view projection photos in the AR navigation image library to obtain a matching result, and performing three-dimensional reconstruction on the matching result to obtain an initial three-dimensional reconstruction model;
performing scale transformation on the initial three-dimensional reconstruction model according to the three-dimensional point cloud data to obtain a target three-dimensional reconstruction model; and
generating a target building AR navigation map according to the road network and the target three-dimensional reconstruction model.
In some possible implementations, performing frame extraction on the panoramic video to obtain the AR navigation image library includes:
performing frame extraction on the panoramic video according to preset requirements to obtain all photos;
screening all the photos to obtain retained photos;
projecting each retained photo to obtain fixed-view projection photos; and
obtaining the AR navigation image library from the fixed-view projection photos.
In some possible implementations, obtaining the three-dimensional point cloud data of the target building according to the preset number of sites and the set laser radars includes:
acquiring a preset number of target sites according to the repeatability among the preset number of sites in the panoramic video;
setting the laser radar parameters for each target site according to its position in the target building; and
acquiring the three-dimensional point cloud data of the target building according to the position of each target site and its laser radar parameters.
In some possible implementations, performing feature matching on the fixed-view projection photos in the AR navigation image library to obtain the matching result includes:
performing first-feature extraction on the fixed-view projection photos in the AR navigation image library to form an image retrieval feature library;
matching the first feature of each fixed-view projection photo in the image retrieval feature library to obtain its similarity to every other fixed-view projection photo in the library;
determining first projection photos whose similarity exceeds a first similarity threshold as an image pair, thereby obtaining at least one image pair, where each image pair includes at least two fixed-view projection photos between which the similarity was computed; and
performing feature matching on each image pair to obtain the matching result.
In some possible implementations, the matching result includes at least three two-dimensional point pairs, and performing feature matching on each image pair to obtain the matching result includes:
performing second-feature extraction on the first projection photos of each image pair to obtain a feature descriptor for each first projection photo; and
performing feature matching on the feature descriptors of the first projection photos of each image pair to obtain at least three two-dimensional point pairs for each image pair.
In some possible implementations, the initial three-dimensional reconstruction model includes a sparse point cloud, and performing scale transformation on the initial three-dimensional reconstruction model according to the three-dimensional point cloud data to obtain the target three-dimensional reconstruction model includes:
obtaining the initial coordinates of the sparse point cloud in the first projection photos from the initial three-dimensional reconstruction model; and
transforming the initial three-dimensional reconstruction model according to the initial coordinates and the three-dimensional point cloud data to obtain the target three-dimensional reconstruction model.
In some possible implementations, transforming the initial three-dimensional reconstruction model according to the initial coordinates and the three-dimensional point cloud data to obtain the target three-dimensional reconstruction model includes:
determining at least three control points in the fixed-view projection photos of the AR navigation image library according to the three-dimensional point cloud data, where the three-dimensional point cloud data include the three-dimensional point cloud coordinates of each point;
computing, from the at least three control points, the real coordinates corresponding to each item of three-dimensional point cloud data; and
performing scale transformation on the initial coordinates of the initial three-dimensional reconstruction model according to the real coordinates to obtain the target three-dimensional reconstruction model.
In some possible implementations, generating the target building AR navigation map according to the road network and the target three-dimensional reconstruction model includes:
acquiring a preset number of walking route photos of the road network and a preset number of collection photos of the exhibits displayed in the target building; and
importing the preset number of walking route photos and the preset number of collection photos into the target three-dimensional reconstruction model for incremental reconstruction, thereby generating the target building AR navigation map.
In some possible implementations, importing the preset number of walking route photos and the preset number of collection photos into the target three-dimensional reconstruction model for incremental reconstruction to generate the target building AR navigation map includes:
importing the preset number of walking route photos into the target three-dimensional reconstruction model for incremental reconstruction to obtain the walking route coordinates corresponding to each walking route photo;
connecting the walking route coordinates on each route in the road network to obtain all real routes of the target building;
importing the preset number of collection photos into the target three-dimensional reconstruction model for incremental reconstruction to obtain the collection position information corresponding to each collection photo; and
generating the target building AR navigation map according to the real routes and the collection position information.
In another aspect, the application further provides an AR navigation map generation device, including:
an image library acquisition module, configured to acquire a panoramic video of a target building and perform frame extraction on it to obtain an AR navigation image library, where the panoramic video covers a preset number of sites and the AR navigation image library contains fixed-view projection photos;
a point cloud data acquisition module, configured to acquire three-dimensional point cloud data of the target building according to the preset number of sites and the set laser radars, and to construct a road network from the walking routes in the target building;
a feature matching module, configured to perform feature matching on the fixed-view projection photos in the AR navigation image library to obtain a matching result, and to perform three-dimensional reconstruction on the matching result to obtain an initial three-dimensional reconstruction model;
a scale transformation module, configured to perform scale transformation on the initial three-dimensional reconstruction model according to the three-dimensional point cloud data to obtain a target three-dimensional reconstruction model; and
a map determination module, configured to generate a target building AR navigation map according to the road network and the target three-dimensional reconstruction model.
The beneficial effects of adopting the above embodiments are as follows. In the AR navigation map generation method provided by the application, a panoramic video of a target building is acquired and frame extraction is performed on it to obtain an AR navigation image library; three-dimensional point cloud data of the target building are acquired according to the preset number of sites and the set laser radars, and a road network is constructed from the walking routes in the target building; feature matching is performed on the fixed-view projection photos in the AR navigation image library, and three-dimensional reconstruction is performed on the matching result to obtain an initial three-dimensional reconstruction model; scale transformation is performed on the initial model according to the three-dimensional point cloud data to obtain a target three-dimensional reconstruction model; and a target building AR navigation map is generated from the road network and the target three-dimensional reconstruction model. Positioning with the panoramic video and the three-dimensional point cloud data of the museum improves positioning accuracy. Furthermore, generating the target building AR navigation map by processing the three-dimensional reconstruction model, the three-dimensional point cloud data, and the walking routes makes the mapping process explicit and allows a three-dimensional map of the museum to be displayed, solving the technical problem that a three-dimensional map cannot be viewed.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the application, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flowchart illustrating an embodiment of a method for generating an AR navigation map according to the present application;
FIG. 2 is a schematic diagram of an embodiment of image pairs of fixed-view projection photos provided by the present application;
FIG. 3 is a schematic diagram of an embodiment of the two-dimensional point pairs of an image pair provided by the present application;
FIG. 4 is a schematic diagram illustrating an embodiment of an AR navigation map generating apparatus according to the present application;
fig. 5 is a schematic structural diagram of an embodiment of an electronic device according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor systems and/or microcontroller systems.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with that embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will appreciate, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The embodiment of the application provides an AR navigation map generation method and device, which are respectively described below.
Fig. 1 is a flowchart of an embodiment of the AR navigation map generation method provided by the present application. As shown in fig. 1, the method includes:
S101, acquiring a panoramic video of a target building, and performing frame extraction on the panoramic video to obtain an AR navigation image library, where the panoramic video covers a preset number of sites and the AR navigation image library contains fixed-view projection photos;
S102, acquiring three-dimensional point cloud data of the target building according to the preset number of sites and the set laser radars, and constructing a road network from the walking routes in the target building;
S103, performing feature matching on the fixed-view projection photos in the AR navigation image library to obtain a matching result, and performing three-dimensional reconstruction on the matching result to obtain an initial three-dimensional reconstruction model;
S104, performing scale transformation on the initial three-dimensional reconstruction model according to the three-dimensional point cloud data to obtain a target three-dimensional reconstruction model;
S105, generating a target building AR navigation map according to the road network and the target three-dimensional reconstruction model.
Compared with the prior art, the AR navigation map generation method provided by the embodiments of the application performs positioning with the panoramic video and the three-dimensional point cloud data of the museum, improving positioning accuracy. Furthermore, the target building AR navigation map is generated by processing the three-dimensional reconstruction model, the three-dimensional point cloud data, and the walking routes, so the mapping process is explicit, a three-dimensional map of the museum can be displayed, and the technical problem that the three-dimensional map cannot be viewed is solved.
It should be understood that the panoramic video of the target building in step S101 may be captured with a panoramic camera or retrieved from a storage medium where it was previously stored, and the target building may be a museum.
In an embodiment of the application, all routes along which visitors can walk in the museum may be collected, and the road network is constructed from these routes.
In an embodiment of the application, the travel route for shooting the panoramic video can be determined according to actual requirements: the route should cover all scenes in the building while repeating paths as little as possible. The parameters of the panoramic camera can then be set, and the panoramic video is shot along the travel route.
In some embodiments of the present application, step S101 includes:
performing frame extraction on the panoramic video according to preset requirements to obtain all photos;
screening all the photos to obtain retained photos;
projecting each retained photo to obtain fixed-view projection photos; and
obtaining the AR navigation image library from the fixed-view projection photos.
It should be noted that the preset requirements may be that the extracted frames cover all scenes without being sampled too densely within any one scene. Screening all the photos removes those shot along repeated paths while ensuring that the retained photos still cover the entire target range. The target range can be set according to the actual situation and is not limited by the embodiments of the application; in this embodiment, it may be the whole museum.
In a specific embodiment of the present application, each fixed-view projection photo may be a perspective image rendered at a fixed viewing angle, visually simulating the actual image observed from the camera center at that angle. In this way an actual image is obtained for each retained photo, and an AR navigation image library containing all these images is built.
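As a minimal sketch of this frame extraction and fixed-view projection step (assuming OpenCV and an equirectangular panoramic video; the frame interval, field of view, and yaw angles are illustrative values, not parameters fixed by the application), the processing could look like this:

```python
import cv2
import numpy as np

def extract_frames(video_path, every_n=30):
    """Grab every n-th frame of the panoramic video (the preset requirement)."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

def equirect_to_perspective(pano, yaw_deg, pitch_deg=0.0, fov_deg=90.0, size=640):
    """Render a fixed-view perspective photo from an equirectangular panorama."""
    h, w = pano.shape[:2]
    f = 0.5 * size / np.tan(np.radians(fov_deg) / 2)   # pinhole focal length
    u, v = np.meshgrid(np.arange(size), np.arange(size))
    # Ray directions in camera coordinates (x right, y down, z forward)
    x, y = u - size / 2.0, v - size / 2.0
    z = np.full_like(x, f)
    pitch, yaw = np.radians(pitch_deg), np.radians(yaw_deg)
    # Rotate rays by pitch (about x), then yaw (about y)
    y2 = y * np.cos(pitch) - z * np.sin(pitch)
    z2 = y * np.sin(pitch) + z * np.cos(pitch)
    x3 = x * np.cos(yaw) + z2 * np.sin(yaw)
    z3 = -x * np.sin(yaw) + z2 * np.cos(yaw)
    lon = np.arctan2(x3, z3)                  # longitude in [-pi, pi]
    lat = np.arctan2(y2, np.hypot(x3, z3))    # latitude in [-pi/2, pi/2]
    map_x = ((lon / np.pi + 1) / 2 * w).astype(np.float32)
    map_y = ((lat / (np.pi / 2) + 1) / 2 * h).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_WRAP)

# Image library: several fixed viewing angles per retained frame
library = [equirect_to_perspective(frame, yaw)
           for frame in extract_frames("museum_pano.mp4")
           for yaw in (0, 90, 180, 270)]
```

In practice, the retained frames would additionally be screened against repeated paths before projection, as described above.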
In some embodiments of the present application, step S102 includes:
acquiring a preset number of target sites according to the repeatability among the preset number of sites in the panoramic video;
setting the laser radar parameters for each target site according to its position in the target building; and
acquiring the three-dimensional point cloud data of the target building according to the position of each target site and its laser radar parameters.
In a specific embodiment of the application, the museum covers a large area, so the point cloud of the whole range cannot be measured by the instrument in one pass. Instead, several sites are selected inside the museum, and at each site the point cloud obtainable around that point is measured; a site is usually placed every certain distance or at a corner, so that the point clouds measured at two adjacent sites overlap and can be fused together, until the point cloud of the entire museum is obtained. The whole museum can be analyzed from the panoramic video to determine the preset number of usable target sites. Because the angular range of the radar and other factors differ between sites, the laser radar parameters of each target site must be set according to its position in the museum; the three-dimensional point cloud data can then be acquired with the laser radar parameters of each target site.
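The fusion of overlapping station scans described above is, at its core, pairwise point cloud registration. Below is a minimal sketch assuming the Open3D library, per-station PLY files that are already roughly in a common frame, and illustrative thresholds; the application does not prescribe a specific registration algorithm, so point-to-point ICP is used here only as a common choice:

```python
import numpy as np
import open3d as o3d

def merge_stations(paths, voxel=0.05, max_dist=0.2):
    """Fuse per-station LiDAR scans into one cloud by chaining pairwise ICP."""
    merged = o3d.io.read_point_cloud(paths[0]).voxel_down_sample(voxel)
    for path in paths[1:]:
        scan = o3d.io.read_point_cloud(path).voxel_down_sample(voxel)
        # Align the new scan to the growing cloud through their overlap region;
        # plain ICP needs a rough initial alignment to converge.
        reg = o3d.pipelines.registration.registration_icp(
            scan, merged, max_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        merged += scan.transform(reg.transformation)
    return merged

cloud = merge_stations(["station_01.ply", "station_02.ply", "station_03.ply"])
o3d.io.write_point_cloud("museum_full.ply", cloud)
```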
In some embodiments of the present application, step S103 includes:
performing first-feature extraction on the fixed-view projection photos in the AR navigation image library to form an image retrieval feature library;
matching the first feature of each fixed-view projection photo in the image retrieval feature library to obtain its similarity to every other fixed-view projection photo in the library;
determining first projection photos whose similarity exceeds a first similarity threshold as an image pair, thereby obtaining at least one image pair, where each image pair includes at least two fixed-view projection photos between which the similarity was computed; and
performing feature matching on each image pair to obtain the matching result.
It should be noted that the first feature may be the NetVLAD feature. The NetVLAD features of all fixed-view projection photos in the AR navigation image library may be extracted to form the image retrieval feature library; the NetVLAD feature of each fixed-view projection photo is then matched against the NetVLAD features of the other photos to determine the image pairs of that photo. Because each fixed-view projection photo can be successfully matched with at least one other photo, at least one image pair is obtained.
In an embodiment of the present application, when the AR navigation image library contains fixed-view projection photos 1, 2, 3, ..., n, NetVLAD feature 1 of photo 1 can be matched against NetVLAD features 2 through n, yielding the image pairs of photo 1, for example (photo 1, photo 3), (photo 1, photo 9), and (photo 1, photo 14). As shown in fig. 2, the paired photos show the same content from different shooting angles, so their overlap is large. The specific matching method can be set according to the actual situation and is not limited by the embodiments of the application.
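As a minimal sketch of this retrieval step (assuming the NetVLAD global descriptors have already been extracted by a pretrained network and stored as one vector per fixed-view projection photo; the similarity threshold and file name are illustrative), image pairs can be selected by cosine similarity as follows:

```python
import numpy as np

def build_image_pairs(descs, sim_threshold=0.6):
    """descs: (n, d) array with one global descriptor per fixed-view photo."""
    # L2-normalize so that the dot product equals cosine similarity
    descs = descs / np.linalg.norm(descs, axis=1, keepdims=True)
    sim = descs @ descs.T              # (n, n) pairwise similarity matrix
    np.fill_diagonal(sim, -1.0)        # a photo never pairs with itself
    i, j = np.where(sim > sim_threshold)
    return sorted({(int(a), int(b)) for a, b in zip(i, j) if a < b})

# Indices of library photos whose views overlap enough to be matched
pairs = build_image_pairs(np.load("netvlad_descriptors.npy"))
```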
In some embodiments of the application, the matching result includes at least three two-dimensional point pairs, and performing feature matching on each image pair to obtain the matching result includes:
performing second-feature extraction on the first projection photos of each image pair to obtain a feature descriptor for each first projection photo; and
performing feature matching on the feature descriptors of the first projection photos of each image pair to obtain at least three two-dimensional point pairs for each image pair.
It should be noted that the second feature may be the SuperPoint feature; after the SuperPoint features are extracted, the feature descriptor corresponding to each feature is obtained. Once the feature descriptors of every fixed-view projection photo in the AR navigation image library are available, the descriptors of the two photos of each image pair can be matched, yielding the mutually matched two-dimensional point pairs of that image pair; depending on the matching, there may be many such point pairs.
In a specific embodiment of the present application, matching the feature descriptors yields at least three two-dimensional point pairs for each image pair, for example at least three point pairs between fixed-view projection photos 1 and 3, between photos 1 and 9, and between photos 1 and 14. As shown in fig. 3, at least three two-dimensional point pairs can be obtained for two photos, each point pair represented as the two endpoints of one connecting line.
It should be noted that, to obtain the initial three-dimensional reconstruction model, an incremental SFM method can be applied to the at least three two-dimensional point pairs of each image pair, recovering the scene structure and a sparse point cloud. To build an initial three-dimensional reconstruction model that better meets the requirements, the number of two-dimensional point pairs is not limited to three; the specific number can be determined according to actual requirements and is not limited by the embodiments of the application.
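A minimal sketch of the descriptor matching within one image pair follows (assuming SuperPoint-style keypoint descriptors have already been computed for both photos; the ratio threshold is illustrative). Mutual nearest neighbors with a ratio test is one common choice; the application does not fix the matcher, and a learned matcher could be substituted:

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.8):
    """d1: (n1, d), d2: (n2, d) keypoint descriptors of the two photos.
    Returns index pairs (i, j) of mutually nearest, ratio-tested matches."""
    dist = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)
    nn12 = dist.argmin(axis=1)     # best match in photo 2 for each keypoint i
    nn21 = dist.argmin(axis=0)     # best match in photo 1 for each keypoint j
    pairs = []
    for i, j in enumerate(nn12):
        if nn21[j] != i:           # keep mutual nearest neighbors only
            continue
        second = np.partition(dist[i], 1)[1]   # second-best distance
        if dist[i, j] < ratio * second:        # Lowe-style ratio test
            pairs.append((i, int(j)))
    return pairs
```

The resulting index pairs, mapped back to keypoint pixel coordinates, are the two-dimensional point pairs fed into the incremental SFM reconstruction.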
In some embodiments of the application, the initial three-dimensional reconstruction model includes a sparse point cloud, and step S104 includes:
obtaining the initial coordinates of the sparse point cloud in the first projection photos from the initial three-dimensional reconstruction model; and
transforming the initial three-dimensional reconstruction model according to the initial coordinates and the three-dimensional point cloud data to obtain the target three-dimensional reconstruction model.
It should be noted that, after the initial three-dimensional reconstruction model is obtained, the initial coordinates of all fixed-view projection photos in the AR navigation image library can be read from it.
In some embodiments of the present application, transforming the initial three-dimensional reconstruction model according to the initial coordinates and the three-dimensional point cloud data to obtain the target three-dimensional reconstruction model includes:
determining at least three control points in the fixed-view projection photos of the AR navigation image library according to the three-dimensional point cloud data, where the three-dimensional point cloud data include the three-dimensional point cloud coordinates of each point;
computing, from the at least three control points, the real coordinates corresponding to each item of three-dimensional point cloud data; and
performing scale transformation on the initial coordinates of the initial three-dimensional reconstruction model according to the real coordinates to obtain the target three-dimensional reconstruction model.
It should be noted that the control points can be selected from the fixed-view projection photos of the AR navigation image library according to the three-dimensional point cloud data; a good control point is easy to pick out in the point cloud and clearly identifiable in the projection photos. The real coordinates of each fixed-view projection photo at true scale can then be computed from the control points and the three-dimensional point cloud coordinates carried in the point cloud data. Because the scale of the sparse point cloud in the initial three-dimensional reconstruction model is arbitrary (a real distance of 1 m might measure only 0.5 m or 2 m in that model), the initial model is scale-transformed using these real coordinates and its own initial coordinates, converting the scale and coordinates of the sparse point cloud into those of the three-dimensional point cloud data and yielding the target three-dimensional reconstruction model.
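This scale transformation amounts to estimating a similarity transform (scale, rotation, translation) that maps control-point coordinates in the free-scale sparse model onto their real coordinates from the LiDAR point cloud. A minimal sketch using the classical Umeyama closed-form solution is shown below; this concrete solver is an assumption, since the application only specifies that the scale is corrected with at least three control points:

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Least-squares s, R, t such that dst ~ s * R @ src + t; inputs are (n, 3)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # keep R a proper rotation
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)   # variance of model coordinates
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# Control points: coordinates in the free-scale sparse model vs. real
# coordinates read from the LiDAR point cloud (illustrative values)
model_pts = np.array([[0.1, 0.2, 0.0], [1.3, 0.2, 0.1], [0.2, 1.1, 0.9]])
real_pts = np.array([[0.5, 1.0, 0.0], [6.5, 1.0, 0.5], [1.0, 5.5, 4.5]])
s, R, t = umeyama_similarity(model_pts, real_pts)
rescale = lambda pts: (s * (R @ pts.T)).T + t   # apply to the whole sparse cloud
```

With the example points above, real_pts is exactly five times model_pts, so the solver recovers s = 5, R = I, t = 0.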
In some embodiments of the present application, step S105 includes:
acquiring a preset number of walking route photos of the road network and a preset number of collection photos of the exhibits displayed in the target building; and
importing the preset number of walking route photos and the preset number of collection photos into the target three-dimensional reconstruction model for incremental reconstruction, thereby generating the target building AR navigation map.
In a specific embodiment of the present application, after the road network is obtained, photos may be taken at multiple positions along the road network to obtain the preset number of walking route photos. The collection photos may likewise be taken with a camera, or the preset number of collection photos may be retrieved from a storage medium.
In some embodiments of the present application, importing the preset number of walking route photos and the preset number of collection photos into the target three-dimensional reconstruction model for incremental reconstruction to generate the target building AR navigation map includes:
importing the preset number of walking route photos into the target three-dimensional reconstruction model for incremental reconstruction to obtain the walking route coordinates corresponding to each walking route photo;
connecting the walking route coordinates on each route in the road network to obtain all real routes of the target building;
importing the preset number of collection photos into the target three-dimensional reconstruction model for incremental reconstruction to obtain the collection position information corresponding to each collection photo; and
generating the target building AR navigation map according to the real routes and the collection position information.
In a specific embodiment of the application, the preset number of walking route photos can be imported into the target three-dimensional reconstruction model for incremental reconstruction, yielding the walking route coordinates of the newly shot photos; connecting the walking route coordinates of the photos on each route in the road network then gives all real routes of the museum. Likewise, the preset number of collection photos of the exhibits displayed in the museum can be collected and imported into the target three-dimensional reconstruction model for incremental reconstruction, yielding the collection position information corresponding to each collection photo, so that the target building AR navigation map can be generated from all real routes and the collection position information of the museum.
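A minimal sketch of assembling the final map data is given below, assuming each walking route photo and collection photo has already been localized to a 3D coordinate by incremental reconstruction; the networkx graph and the exhibit names are illustrative choices, not structures prescribed by the application:

```python
import networkx as nx
import numpy as np

def build_navigation_map(route_points, exhibit_points):
    """route_points: {route_id: [(x, y, z), ...] in walking order};
    exhibit_points: {exhibit_name: (x, y, z)} from the collection photos."""
    g = nx.Graph()
    for route_id, pts in route_points.items():
        # Connect consecutive localized photos into a real-route polyline
        for a, b in zip(pts, pts[1:]):
            g.add_edge(a, b, route=route_id,
                       weight=float(np.linalg.norm(np.subtract(a, b))))
    # Attach each exhibit to its nearest route node as a navigation target
    pois = {name: min(g.nodes, key=lambda n: np.linalg.norm(np.subtract(n, p)))
            for name, p in exhibit_points.items()}
    return g, pois

# Example query: shortest walking path from an entrance node to an exhibit
# g, pois = build_navigation_map(routes, exhibits)
# path = nx.shortest_path(g, entrance_node, pois["bronze_ding"], weight="weight")
```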
According to the embodiments of the application, the mapping process is made explicit by following the actual routes of the museum, and the target building AR navigation map built in this way allows a three-dimensional map of the museum to be displayed.
To better implement the AR navigation map generation method of the embodiments of the present application, an embodiment of the application correspondingly further provides an AR navigation map generation device. As shown in fig. 4, the device includes:
an image library acquisition module 401, configured to acquire a panoramic video of a target building and perform frame extraction on it to obtain an AR navigation image library, where the panoramic video covers a preset number of sites and the AR navigation image library contains fixed-view projection photos;
a point cloud data acquisition module 402, configured to acquire three-dimensional point cloud data of the target building according to the preset number of sites and the set laser radars, and to construct a road network from the walking routes in the target building;
a feature matching module 403, configured to perform feature matching on the fixed-view projection photos in the AR navigation image library to obtain a matching result, and to perform three-dimensional reconstruction on the matching result to obtain an initial three-dimensional reconstruction model;
a scale transformation module 404, configured to perform scale transformation on the initial three-dimensional reconstruction model according to the three-dimensional point cloud data to obtain a target three-dimensional reconstruction model; and
a map determination module 405, configured to generate a target building AR navigation map according to the road network and the target three-dimensional reconstruction model.
The AR navigation map generation device provided by the above embodiment can implement the technical solutions described in the foregoing method embodiments; for the specific implementation principle of each module or unit, reference may be made to the corresponding content of the method embodiments, which is not repeated here.
As shown in fig. 5, the present application correspondingly further provides an electronic device 500. The electronic device 500 includes a processor 501, a memory 502, and a display 503. Fig. 5 shows only some of the components of the electronic device 500; it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
The memory 502 may, in some embodiments, be an internal storage unit of the electronic device 500, such as a hard disk or memory of the electronic device 500. In other embodiments, the memory 502 may also be an external storage device of the electronic device 500, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device 500.
Further, the memory 502 may also include both an internal storage unit and an external storage device of the electronic device 500. The memory 502 is used to store application software installed on the electronic device 500 and various types of data.
The processor 501 may, in some embodiments, be a central processing unit (CPU), microprocessor, or other data processing chip, and is used to run program code or process data stored in the memory 502, for example to execute the AR navigation map generation method of the present application.
The display 503 may, in some embodiments, be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch screen, or the like. The display 503 is used to display information on the electronic device 500 and to present a visual user interface. The components 501-503 of the electronic device 500 communicate with each other via a system bus.
In some embodiments of the present application, when the processor 501 executes the AR navigation map generation program in the memory 502, the following steps may be implemented:
acquiring a panoramic video of a target building, and performing frame extraction on the panoramic video to obtain an AR navigation image library, where the panoramic video covers a preset number of sites and the AR navigation image library contains fixed-view projection photos;
acquiring three-dimensional point cloud data of the target building according to the preset number of sites and the set laser radars, and constructing a road network from the walking routes in the target building;
performing feature matching on the fixed-view projection photos in the AR navigation image library to obtain a matching result, and performing three-dimensional reconstruction on the matching result to obtain an initial three-dimensional reconstruction model;
performing scale transformation on the initial three-dimensional reconstruction model according to the three-dimensional point cloud data to obtain a target three-dimensional reconstruction model; and
generating a target building AR navigation map according to the road network and the target three-dimensional reconstruction model.
It should be understood that the processor 501 may perform functions other than the above when executing the AR navigation map generation program in the memory 502; for details, reference may be made to the description of the corresponding method embodiments above.
Further, the type of the electronic device 500 is not particularly limited. The electronic device 500 may be a portable electronic device such as a mobile phone, a tablet computer, a personal digital assistant (PDA), a wearable device, or a laptop. Exemplary embodiments of portable electronic devices include, but are not limited to, those running iOS, Android, Microsoft, or other operating systems. The portable electronic device may also be another portable electronic device with a touch-sensitive surface (e.g., a touch panel), such as a laptop computer. It should also be understood that, in other embodiments of the application, the electronic device 500 may not be a portable electronic device but a desktop computer with a touch-sensitive surface (e.g., a touch panel).
Correspondingly, an embodiment of the application further provides a computer-readable storage medium for storing a computer-readable program or instructions which, when executed by a processor, implement the steps or functions of the AR navigation map generation method provided by the above method embodiments.
Those skilled in the art will appreciate that all or part of the flow of the methods of the above embodiments may be accomplished by a computer program that instructs related hardware (e.g., a processor or a controller) and is stored in a computer-readable storage medium. The computer-readable storage medium may be a magnetic disk, an optical disk, a read-only memory, or a random access memory.
The AR navigation map generation method and device provided by the present application have been described in detail above. Specific examples are used herein to illustrate the principles and embodiments of the application; the above description of the embodiments is only meant to aid understanding of the method and its core idea. Meanwhile, those skilled in the art may vary the specific embodiments and application scope in light of the ideas of the application. In summary, the content of this description should not be construed as limiting the application.

Claims (10)

1. An AR navigation map generation method, comprising:
acquiring a panoramic video of a target building, and performing frame extraction on the panoramic video to obtain an AR navigation image library, wherein the panoramic video covers a preset number of sites and the AR navigation image library comprises fixed-view projection photos;
acquiring three-dimensional point cloud data of the target building according to the preset number of sites and the set laser radars, and constructing a road network according to the walking routes in the target building;
performing feature matching on the fixed-view projection photos in the AR navigation image library to obtain a matching result, and performing three-dimensional reconstruction on the matching result to obtain an initial three-dimensional reconstruction model;
performing scale transformation on the initial three-dimensional reconstruction model according to the three-dimensional point cloud data to obtain a target three-dimensional reconstruction model; and
generating a target building AR navigation map according to the road network and the target three-dimensional reconstruction model.
2. The AR navigation map generation method according to claim 1, wherein performing frame extraction on the panoramic video to obtain the AR navigation image library comprises:
performing frame extraction on the panoramic video according to preset requirements to obtain all photos;
screening all the photos to obtain retained photos;
projecting each retained photo to obtain fixed-view projection photos; and
obtaining the AR navigation image library from the fixed-view projection photos.
3. The AR navigation map generation method according to claim 1, wherein obtaining the three-dimensional point cloud data of the target building according to the preset number of sites and the set laser radars comprises:
acquiring a preset number of target sites according to the repeatability among the preset number of sites in the panoramic video;
setting the laser radar parameters for each target site according to its position in the target building; and
acquiring the three-dimensional point cloud data of the target building according to the position of each target site and its laser radar parameters.
4. The AR navigation map generation method according to claim 1, wherein performing feature matching on the fixed-view projection photos in the AR navigation image library to obtain the matching result comprises:
performing first-feature extraction on the fixed-view projection photos in the AR navigation image library to form an image retrieval feature library;
matching the first feature of each fixed-view projection photo in the image retrieval feature library to obtain its similarity to every other fixed-view projection photo in the library;
determining first projection photos whose similarity exceeds a first similarity threshold as an image pair, thereby obtaining at least one image pair, wherein each image pair comprises at least two fixed-view projection photos between which the similarity was computed; and
performing feature matching on each image pair to obtain the matching result.
5. The AR navigation map generation method according to claim 4, wherein the matching result comprises at least three two-dimensional point pairs, and performing feature matching on each image pair to obtain the matching result comprises:
performing second-feature extraction on the first projection photos of each image pair to obtain a feature descriptor for each first projection photo; and
performing feature matching on the feature descriptors of the first projection photos of each image pair to obtain at least three two-dimensional point pairs for each image pair.
6. The AR navigation map generation method according to claim 5, wherein the initial three-dimensional reconstruction model comprises a sparse point cloud, and performing scale transformation on the initial three-dimensional reconstruction model according to the three-dimensional point cloud data to obtain the target three-dimensional reconstruction model comprises:
obtaining the initial coordinates of the sparse point cloud in the first projection photos from the initial three-dimensional reconstruction model; and
transforming the initial three-dimensional reconstruction model according to the initial coordinates and the three-dimensional point cloud data to obtain the target three-dimensional reconstruction model.
7. The AR navigation map generation method according to claim 6, wherein transforming the initial three-dimensional reconstruction model according to the initial coordinates and the three-dimensional point cloud data to obtain the target three-dimensional reconstruction model comprises:
determining at least three control points in the fixed-view projection photos of the AR navigation image library according to the three-dimensional point cloud data, wherein the three-dimensional point cloud data comprise the three-dimensional point cloud coordinates of each point;
computing, from the at least three control points, the real coordinates corresponding to each item of three-dimensional point cloud data; and
performing scale transformation on the initial coordinates of the initial three-dimensional reconstruction model according to the real coordinates to obtain the target three-dimensional reconstruction model.
8. The AR navigation map generation method according to claim 1, wherein generating the target building AR navigation map according to the road network and the target three-dimensional reconstruction model comprises:
acquiring a preset number of walking route photos of the road network and a preset number of collection photos of the exhibits displayed in the target building; and
importing the preset number of walking route photos and the preset number of collection photos into the target three-dimensional reconstruction model for incremental reconstruction, thereby generating the target building AR navigation map.
9. The AR navigation map generation method according to claim 8, wherein importing the preset number of walking route photos and the preset number of collection photos into the target three-dimensional reconstruction model for incremental reconstruction to generate the target building AR navigation map comprises:
importing the preset number of walking route photos into the target three-dimensional reconstruction model for incremental reconstruction to obtain the walking route coordinates corresponding to each walking route photo;
connecting the walking route coordinates on each route in the road network to obtain all real routes of the target building;
importing the preset number of collection photos into the target three-dimensional reconstruction model for incremental reconstruction to obtain the collection position information corresponding to each collection photo; and
generating the target building AR navigation map according to the real routes and the collection position information.
10. An AR navigation map generation device, comprising:
an image library acquisition module, configured to acquire a panoramic video of a target building and perform frame extraction on it to obtain an AR navigation image library, wherein the panoramic video covers a preset number of sites and the AR navigation image library comprises fixed-view projection photos;
a point cloud data acquisition module, configured to acquire three-dimensional point cloud data of the target building according to the preset number of sites and the set laser radars, and to construct a road network according to the walking routes in the target building;
a feature matching module, configured to perform feature matching on the fixed-view projection photos in the AR navigation image library to obtain a matching result, and to perform three-dimensional reconstruction on the matching result to obtain an initial three-dimensional reconstruction model;
a scale transformation module, configured to perform scale transformation on the initial three-dimensional reconstruction model according to the three-dimensional point cloud data to obtain a target three-dimensional reconstruction model; and
a map determination module, configured to generate a target building AR navigation map according to the road network and the target three-dimensional reconstruction model.
CN202311132212.0A 2023-09-05 2023-09-05 AR navigation map generation method and device Active CN116858215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311132212.0A CN116858215B (en) 2023-09-05 2023-09-05 AR navigation map generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311132212.0A CN116858215B (en) 2023-09-05 2023-09-05 AR navigation map generation method and device

Publications (2)

Publication Number Publication Date
CN116858215A true CN116858215A (en) 2023-10-10
CN116858215B CN116858215B (en) 2023-12-05

Family

ID=88223820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311132212.0A Active CN116858215B (en) 2023-09-05 2023-09-05 AR navigation map generation method and device

Country Status (1)

Country Link
CN (1) CN116858215B (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160062261A (en) * 2014-11-24 2016-06-02 현대엠엔소프트 주식회사 Vehicle navigation system and control method thereof
US20200217666A1 (en) * 2016-03-11 2020-07-09 Kaarta, Inc. Aligning measured signal data with slam localization data and uses thereof
WO2021208372A1 (en) * 2020-04-14 2021-10-21 北京迈格威科技有限公司 Indoor visual navigation method, apparatus, and system, and electronic device
CN111599001A (en) * 2020-05-14 2020-08-28 星际(重庆)智能装备技术研究院有限公司 Unmanned aerial vehicle navigation map construction system and method based on image three-dimensional reconstruction technology
WO2022126529A1 (en) * 2020-12-17 2022-06-23 深圳市大疆创新科技有限公司 Positioning method and device, and unmanned aerial vehicle and storage medium
WO2022183657A1 (en) * 2021-03-04 2022-09-09 浙江商汤科技开发有限公司 Point cloud model construction method and apparatus, electronic device, storage medium, and program
CN112927362A (en) * 2021-04-07 2021-06-08 Oppo广东移动通信有限公司 Map reconstruction method and device, computer readable medium and electronic device
CN113126117A (en) * 2021-04-15 2021-07-16 湖北亿咖通科技有限公司 Method for determining absolute scale of SFM map and electronic equipment
WO2023001251A1 (en) * 2021-07-22 2023-01-26 梅卡曼德(北京)机器人科技有限公司 Dynamic picture-based 3d point cloud processing method and apparatus, device and medium
CN113340312A (en) * 2021-08-05 2021-09-03 中铁建工集团有限公司 AR indoor live-action navigation method and system
CN114332348A (en) * 2021-11-16 2022-04-12 西南交通大学 Three-dimensional reconstruction method for track integrating laser radar and image data
CN114202819A (en) * 2021-11-29 2022-03-18 山东恒创智控科技有限公司 Robot-based substation inspection method and system and computer
CN114549738A (en) * 2022-01-07 2022-05-27 北京理工大学重庆创新中心 Unmanned vehicle indoor real-time dense point cloud reconstruction method, system, equipment and medium
CN115527011A (en) * 2022-08-25 2022-12-27 阿里巴巴达摩院(杭州)科技有限公司 Navigation method and device based on three-dimensional model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zhao, P.C., Hu, Q.W., et al.: "Panoramic Image and Three-Axis Laser Scanner Integrated Approach for Indoor 3D Mapping", Remote Sensing *
Pang Jing et al.: "Research and Application of Augmented Reality Maps" (增强现实地图研究与应用), Journal of Geomatics (测绘地理信息), vol. 46, no. 1, pages 130-132 *
Zhang Hui; Wang Pan; Xiao Junhao; Lu Huimin: "A Human-Computer Interaction System Based on Three-Dimensional Mapping and Virtual Reality" (一种基于三维建图和虚拟现实的人机交互系统), Control and Decision (控制与决策), no. 11 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274558A (en) * 2023-11-22 2023-12-22 湖北珞珈实验室 AR navigation method, device and equipment for visual positioning and storage medium
CN117274558B (en) * 2023-11-22 2024-02-13 湖北珞珈实验室 AR navigation method, device and equipment for visual positioning and storage medium

Also Published As

Publication number Publication date
CN116858215B (en) 2023-12-05

Similar Documents

Publication Publication Date Title
CN111174799B (en) Map construction method and device, computer readable medium and terminal equipment
US11557083B2 (en) Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method
CN111627114A (en) Indoor visual navigation method, device and system and electronic equipment
CN112288853B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, and storage medium
CN104180814A (en) Navigation method in live-action function on mobile terminal, and electronic map client
CN110296686B (en) Vision-based positioning method, device and equipment
WO2021027692A1 (en) Visual feature library construction method and apparatus, visual positioning method and apparatus, and storage medium
EP4050305A1 (en) Visual positioning method and device
CN116858215B (en) AR navigation map generation method and device
CN107084740B (en) Navigation method and device
CN111833447A (en) Three-dimensional map construction method, three-dimensional map construction device and terminal equipment
CN112927363A (en) Voxel map construction method and device, computer readable medium and electronic equipment
CN112714266A (en) Method and device for displaying label information, electronic equipment and storage medium
CN112258647B (en) Map reconstruction method and device, computer readable medium and electronic equipment
CN109034214B (en) Method and apparatus for generating a mark
CN112509135B (en) Element labeling method, element labeling device, element labeling equipment, element labeling storage medium and element labeling computer program product
CN109816791B (en) Method and apparatus for generating information
US9811889B2 (en) Method, apparatus and computer program product for generating unobstructed object views
CN110827340B (en) Map updating method, device and storage medium
CN114089836B (en) Labeling method, terminal, server and storage medium
CN112887793B (en) Video processing method, display device, and storage medium
CN114049403A (en) Multi-angle three-dimensional face reconstruction method and device and storage medium
CN115114302A (en) Road sign data updating method and device, electronic equipment and storage medium
CN111882675A (en) Model presentation method and device, electronic equipment and computer storage medium
CN117274558B (en) AR navigation method, device and equipment for visual positioning and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant