CN115526997A - Visual map construction, positioning and navigation method and system, and computer storage medium - Google Patents


Info

Publication number
CN115526997A
Authority
CN
China
Prior art keywords
dimensional
coordinate
visual map
real
navigation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211158521.0A
Other languages
Chinese (zh)
Inventor
刘力
张小军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visionstar Information Technology Shanghai Co ltd
Original Assignee
Visionstar Information Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Visionstar Information Technology Shanghai Co ltd
Priority to CN202211158521.0A
Publication of CN115526997A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 - Geographic models
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G01C21/206 - Instruments for performing navigational calculations specially adapted for indoor navigation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 - Geographical information databases
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds

Abstract

The invention provides a visual map construction, positioning and navigation method and system, and a computer storage medium. The construction method comprises the following steps: acquiring a live-action image set and a special control point set of a closed environment; constructing a three-dimensional visual map; acquiring a two-dimensional plane map of the closed environment; and calculating a transformation matrix between the special three-dimensional coordinate set and the special two-dimensional coordinate set to establish a coordinate-element correspondence between the three-dimensional visual map and the two-dimensional plane map. The navigation method comprises the following steps: solving the device pose to form a real-time three-dimensional coordinate set; aligning the real-time three-dimensional coordinate set to the two-dimensional plane map using the coordinate-element correspondence to obtain a real-time two-dimensional coordinate set; and planning a path from the real-time two-dimensional coordinate set and the target navigation position to form a navigation path. The invention improves the accuracy and robustness of positioning and navigation of user equipment in a closed environment.

Description

Visual map construction, positioning and navigation method and system, and computer storage medium
Technical Field
The invention relates to the technical field of computer maps, and in particular to a visual map construction, positioning and navigation method and system, and a computer storage medium.
Background
In recent years, Location Based Services (LBS) have been widely deployed, and one of the core elements of these applications is obtaining the position of the user equipment. In outdoor environments, satellite positioning systems such as GPS and BeiDou can supply the user's position, generally with accurate precision, and apps such as Baidu Maps have become everyday necessities. In indoor environments, however, the signals of GPS, BeiDou and similar positioning systems are generally weak and the positioning errors large, so they cannot meet the indoor navigation needs of user equipment.
In the prior art, positioning user equipment in a closed environment such as an indoor space generally relies on reference objects, for example wireless network devices (WIFI, Bluetooth, 5G) or special visual markers such as two-dimensional codes arranged in the closed environment; the spatial position of the user equipment is obtained indirectly by calculating its distance to these references.
However, indoor positioning schemes based on wireless network devices such as WIFI, Bluetooth and 5G are strongly affected by wireless signal strength. In closed environments such as indoor spaces, the signal is easily attenuated by dense crowds or occlusion, so the position accuracy obtained by such schemes is poor and the positioning result is not robust.
The positioning success rate and accuracy of visual positioning schemes based on special markers such as two-dimensional codes depend heavily on how densely the markers cover the environment. The camera of the user device must be able to capture a marker, and the marker should not occupy too small a proportion of the frame, or the positioning system cannot recognize it accurately. This imposes many restrictions in practical applications: markers cannot be arranged in many places, and they are often difficult to photograph during use, so the positioning system cannot achieve comprehensive, accurate positioning in a closed environment.
Disclosure of Invention
The technical problem solved by the technical solution of the invention is: how to improve the accuracy and robustness of positioning and navigation of user equipment in a closed environment, and how to realize map construction, positioning and navigation for such an environment.
To solve this problem, the technical solution of the invention provides a visual map construction method suitable for a closed environment, comprising the following steps:
acquiring a live-action image set and a special control point set of a closed environment;
constructing a three-dimensional visual map based on the live-action image set, and obtaining a special three-dimensional coordinate set of the special control point set in the three-dimensional visual map; the three-dimensional visual map comprises a three-dimensional image feature set and a three-dimensional point cloud coordinate set corresponding to the three-dimensional image feature set;
acquiring a two-dimensional plane map of the closed environment, and obtaining a special two-dimensional coordinate set of the special control point set in the two-dimensional plane map; the two-dimensional plane map comprises a two-dimensional image coordinate set;
calculating a transformation matrix between the special three-dimensional coordinate set and the special two-dimensional coordinate set to establish a coordinate element corresponding relation between the three-dimensional visual map and the two-dimensional plane map;
and forming a visual map file based on the three-dimensional visual map, the two-dimensional plane map and the corresponding relation of the coordinate elements between the three-dimensional visual map and the two-dimensional plane map.
Optionally, the acquiring of the live-action image set of the closed environment comprises: shooting a plurality of live-action images, and forming the live-action image set from the live-action images.
Optionally, the acquiring of the live-action image set of the closed environment comprises: shooting a live-action video, extracting a plurality of image frames from the live-action video, and forming the live-action image set from the extracted image frames.
Optionally, the special control point set is formed by pre-selecting specific positions of the closed environment.
Optionally, the constructing a three-dimensional visual map based on the real-scene image set includes:
extracting a corresponding three-dimensional image feature set from the live-action image set;
and outputting a three-dimensional point cloud coordinate set corresponding to the three-dimensional image feature set and the three-dimensional visual map model by utilizing a motion recovery structure algorithm or a visual SLAM technology.
Optionally, the special control point set includes the 1st to Nth special control points, where N is a natural number greater than 1;
the special three-dimensional coordinate set includes the three-dimensional coordinates of the 1st to Nth special control points in the three-dimensional visual map coordinate system, i.e. $\{(x_i, y_i, z_i) \mid i \in [1, N]\}$, where $x_i$, $y_i$ and $z_i$ are the coordinates of the ith special control point along the x-, y- and z-axes of the three-dimensional visual map coordinate system, respectively;
the special two-dimensional coordinate set includes the two-dimensional coordinates of the 1st to Nth special control points in the two-dimensional plane map coordinate system, i.e. $\{(u_i, v_i) \mid i \in [1, N]\}$, where $u_i$ and $v_i$ are the coordinates of the ith special control point along the u- and v-axes of the two-dimensional plane map coordinate system, respectively;
the calculating of a transformation matrix between the special three-dimensional coordinate set and the special two-dimensional coordinate set comprises:
setting the z-axis direction of the three-dimensional visual map coordinate system as the height direction of the three-dimensional visual map, with R a rotation matrix, t a translation vector, c a scaling coefficient, and T the transformation matrix composed of R, t and c, and obtaining T based on formula (1) and formula (2):
formula (1) is:
$$\begin{pmatrix} u_i \\ v_i \end{pmatrix} = cR \begin{pmatrix} x_i \\ y_i \end{pmatrix} + t, \quad i \in [1, N]$$
formula (2) is:
$$T = \underset{c,R,t}{\arg\min}\; e$$
where e is the calculated value of
$$\sum_{i=1}^{N} \left\| \begin{pmatrix} u_i \\ v_i \end{pmatrix} - \left( cR \begin{pmatrix} x_i \\ y_i \end{pmatrix} + t \right) \right\|^2 .$$
Optionally, the obtaining of T based on formula (1) and formula (2) comprises: minimizing the value of e to obtain T.
In order to solve the above technical problem, the technical solution of the present invention further provides a visual map positioning method suitable for a closed environment, based on the visual map constructed by the above method, comprising:
acquiring an equipment observation image;
extracting an observation image characteristic set of the observation image;
carrying out image feature matching on the observation image feature set and the three-dimensional image feature set to obtain a three-dimensional image matching feature set;
extracting a three-dimensional point cloud matching coordinate set corresponding to the three-dimensional image matching feature set from the three-dimensional point cloud coordinate set;
and solving the equipment pose based on the three-dimensional image matching feature set and the three-dimensional point cloud matching coordinate set.
In order to solve the above technical problem, the technical solution of the present invention further provides a visual map navigation method suitable for a closed environment, based on the visual map constructed by the above method, comprising:
solving the pose of the equipment based on the method, and forming a real-time three-dimensional coordinate set of the equipment in the three-dimensional visual map;
aligning the real-time three-dimensional coordinate set to the two-dimensional plane map by using the corresponding relation of the coordinate elements to obtain a real-time two-dimensional coordinate set;
acquiring a target navigation position;
performing path planning through the real-time two-dimensional coordinate set and the target navigation position to form a corresponding navigation two-dimensional coordinate set;
and forming a navigation path by using the navigation two-dimensional coordinate set.
Optionally, the visual map navigation method further comprises:
forming a device plane trajectory based on the real-time two-dimensional coordinate set;
if a first offset arises between the current device plane trajectory and the current navigation path, and the first offset exceeds a first preset offset, then:
if a second offset arises between the current device plane trajectory and the previously formed device plane trajectory, and the second offset exceeds a second preset offset:
re-solving the current device pose;
updating the real-time three-dimensional coordinate set, the real-time two-dimensional coordinate set and the device plane trajectory according to the current device pose;
if the second offset arises between the current device plane trajectory and the previously formed device plane trajectory but does not exceed the second preset offset:
correcting the path plan from the current real-time two-dimensional coordinate set and the target navigation position, so as to update the corresponding navigation two-dimensional coordinate set; a sketch of this correction logic follows.
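As a concrete illustration of the two-level deviation check above, the following Python sketch shows one possible implementation. All names (check_and_correct, relocalize, replan) and the nearest-point distance used to measure the offsets are illustrative assumptions, not specified by the patent:

```python
import numpy as np

def check_and_correct(track_2d, nav_path, prev_track_2d,
                      thresh_path, thresh_track, relocalize, replan):
    """Two-level deviation check (illustrative; names and metrics assumed).

    track_2d      -- current device plane trajectory, shape (W, 2)
    nav_path      -- planned navigation path, shape (F, 2)
    prev_track_2d -- device plane trajectory formed at the previous check
    relocalize()  -- re-solves the current device pose visually
    replan(pos)   -- re-runs path planning from plane position pos
    """
    # First offset: distance from the latest position to the planned path.
    d_path = np.min(np.linalg.norm(nav_path - track_2d[-1], axis=1))
    if d_path <= thresh_path:
        return  # still on the planned path; nothing to do

    # Second offset: displacement relative to the previous trajectory.
    d_track = np.linalg.norm(track_2d[-1] - prev_track_2d[-1])
    if d_track > thresh_track:
        relocalize()           # implausible jump: suspect a positioning error
    else:
        replan(track_2d[-1])   # plausible drift: the user left the path, re-plan
```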
Optionally, the real-time three-dimensional coordinate set includes: the 1st, or 1st to Wth, real-time three-dimensional device coordinates formed during navigation or positioning, where W is a natural number greater than 1;
the real-time two-dimensional coordinate set includes: the 1st, or 1st to Wth, real-time two-dimensional device coordinates formed during navigation;
the navigation two-dimensional coordinate set includes: the $f_1$th to $f_F$th navigation two-dimensional coordinates, where F is a natural number greater than 1, the $f_1$th navigation two-dimensional coordinate is the current real-time two-dimensional coordinate at the time of path planning, and the $f_F$th navigation two-dimensional coordinate is the target navigation position.
Optionally, the visual map navigation method further comprises:
comparing the current real-time two-dimensional coordinate with the current navigation two-dimensional coordinate;
if a third coordinate offset arises between the current real-time two-dimensional coordinate and the current navigation two-dimensional coordinate, and the third coordinate offset exceeds a third preset coordinate offset, then:
if a fourth coordinate offset arises between the current real-time two-dimensional coordinate and the previously formed real-time two-dimensional coordinate, and the fourth coordinate offset exceeds a fourth preset coordinate offset:
re-solving the current device pose;
updating the current real-time three-dimensional coordinate and real-time two-dimensional coordinate according to the current device pose, so as to update the real-time three-dimensional coordinate set and the real-time two-dimensional coordinate set;
if the fourth coordinate offset arises between the current real-time two-dimensional coordinate and the previously formed real-time two-dimensional coordinate but does not exceed the fourth preset coordinate offset:
correcting the path plan from the current real-time two-dimensional coordinate and the target navigation position, so as to update the corresponding navigation two-dimensional coordinate set.
Optionally, the coordinate offset is calculated based on at least one of a coordinate position offset and a coordinate angle offset.
Optionally, with the transformation matrix denoted as T, the real-time three-dimensional coordinate set as P, and the real-time two-dimensional coordinate set as Q:
the aligning of the real-time three-dimensional coordinate set to the two-dimensional plane map using the coordinate-element correspondence to obtain a real-time two-dimensional coordinate set comprises:
obtaining the real-time two-dimensional coordinate set Q based on formula (3):
formula (3) is: Q = T · P.
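Expanding formula (3) with the scale/rotation/translation decomposition (c, R, t) of T from formula (1), the alignment reduces to a few lines. In this illustrative Python sketch the function and argument names are assumptions:

```python
import numpy as np

def apply_transform(P, c, R, t):
    """Q = T · P, with T expanded into its scale/rotation/translation parts
    (c, R, t) as in formula (1)."""
    P_xy = np.asarray(P, dtype=float)[:, :2]  # drop the z (height) axis
    return (c * (R @ P_xy.T)).T + t           # real-time 2D coordinates Q
```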
In order to solve the above technical problem, the present invention further provides a visual map construction system suitable for a closed environment, including: a processor and a memory;
the memory has stored therein a computer program which, when executed by the processor, performs the steps of the visual mapping method as described above.
In order to solve the above technical problem, the present invention further provides a visual map positioning system suitable for a closed environment, including: a processor and a memory;
the memory has stored therein a computer program which, when executed by the processor, performs the steps of the visual map location method as described above.
In order to solve the above technical problem, the present invention further provides a visual map navigation system suitable for a closed environment, including: a processor and a memory;
the memory has stored therein a computer program which, when executed by the processor, performs the steps of the visual map navigation method as described above.
In order to solve the above technical problem, the technical solution of the present invention further provides a visual map construction system suitable for a closed environment, including:
the first acquisition unit is suitable for acquiring a live-action image set and a special control point set of a closed environment;
the three-dimensional map building unit is suitable for building a three-dimensional visual map based on the live-action image set and obtaining a special three-dimensional coordinate set of the special control point set corresponding to the three-dimensional visual map; the three-dimensional visual map includes: the three-dimensional image feature set and a three-dimensional point cloud coordinate set corresponding to the three-dimensional image feature set;
the two-dimensional map construction unit is adapted to acquire a two-dimensional plane map of the closed environment and obtain a special two-dimensional coordinate set of the special control point set in the two-dimensional plane map; the two-dimensional plane map comprises a two-dimensional image coordinate set;
the computing unit is suitable for computing a transformation matrix between the special three-dimensional coordinate set and the special two-dimensional coordinate set so as to establish a coordinate element corresponding relation between the three-dimensional point cloud coordinate set and the two-dimensional image coordinate set;
and the first output unit is suitable for forming a visual map file based on the three-dimensional visual map, the two-dimensional plane map and the corresponding relationship of the coordinate elements between the three-dimensional point cloud coordinate set and the two-dimensional image coordinate set.
In order to solve the above technical problem, the present invention further provides a visual map positioning system suitable for a closed environment, which is based on the above visual map construction system, and includes:
the second acquisition unit is suitable for acquiring an equipment observation image;
the first extraction unit is suitable for extracting an observation image characteristic set of the observation image;
the matching unit is suitable for performing image feature matching on the observation image feature set and the three-dimensional image feature set to obtain a three-dimensional image matching feature set;
the second extraction unit is suitable for extracting a three-dimensional point cloud matching coordinate set corresponding to the three-dimensional image matching feature set from the three-dimensional point cloud coordinate set;
and the second output unit is suitable for solving the equipment pose based on the three-dimensional image matching feature set and the three-dimensional point cloud matching coordinate set.
In order to solve the above technical problem, a visual map navigation system suitable for a closed environment is further provided in the technical solution of the present invention, and based on the above visual map construction system, the visual map navigation system includes:
a positioning unit adapted to form a real-time three-dimensional coordinate set of a device in the three-dimensional visual map using a visual map positioning system as described above;
the conversion unit is suitable for aligning the real-time three-dimensional coordinate set to the two-dimensional plane map by utilizing the corresponding relation of the coordinate elements to obtain a real-time two-dimensional coordinate set;
the third acquisition unit is suitable for acquiring a target navigation position;
the path planning unit is suitable for planning a path through the real-time two-dimensional coordinate set and the target navigation position to form a corresponding navigation two-dimensional coordinate set;
and the third output unit is suitable for forming a navigation path by utilizing the navigation two-dimensional coordinate set.
In order to solve the above technical problem, the present invention further provides a computer-readable storage medium storing a computer program, which when executed by a processor implements the steps of the visual map construction method as described above.
In order to solve the above technical problem, the present invention further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the visual map positioning method as described above.
In order to solve the above technical problem, the present invention further provides a computer-readable storage medium storing a computer program, which when executed by a processor implements the steps of the visual map navigation method as described above.
The technical solution of the invention has at least the following beneficial effects:
For closed environments such as indoor spaces, the invention provides a visual map construction method that acquires live-action images, a two-dimensional plane map and special control points of the closed environment; constructs a three-dimensional visual map of the closed environment, together with the special three-dimensional and special two-dimensional coordinate sets in the three-dimensional visual map coordinate system and the two-dimensional plane map coordinate system respectively; and solves the correspondence between positions in the three-dimensional visual map and positions on the two-dimensional plane map from the transformation between the two special coordinate sets, thereby constructing a visual map suitable for positioning and navigating user equipment in the closed environment.
The technical solution addresses accurate positioning and navigation of user equipment in a closed environment from the perspective of visual map construction. Because the three-dimensional visual map is built from captured live-action images and preset special control points, it can be corrected and aligned to the two-dimensional plane map in real time based on live-action imagery; the visual map therefore has high robustness and is little affected by external conditions.
By positioning with a combination of the three-dimensional visual map and the two-dimensional plane map, the technical solution removes the dependence on auxiliary reference objects in a closed environment. The user equipment only needs to acquire an observation image, and the device pose can be obtained by matching the image features of that observation. By aligning the device position to the two-dimensional plane map, path planning can be performed from the plane map and the target position, realizing accurate navigation in an indoor closed environment.
In optional technical solutions, path deviation can be detected from the device's current position and real-time trajectory, and the navigation trajectory and positioning errors can be corrected through this deviation detection, further improving the accuracy and robustness of visual map positioning in a closed environment.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the drawings:
FIG. 1 is a schematic diagram of a system architecture suitable for use in the present invention;
FIG. 2 is a schematic flow chart of a visual map construction method suitable for a closed environment according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a visual map positioning method suitable for a closed environment according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of a visual map navigation method suitable for a closed environment according to the technical solution of the present invention;
FIG. 5 is a schematic flow chart of a variation of the visual map navigation method suitable for a closed environment according to the present invention;
FIG. 6 is a schematic structural diagram of a visual map construction system suitable for a closed environment according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a visual map positioning system suitable for a closed environment according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a visual map navigation system suitable for a closed environment according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of another visual map construction system suitable for a closed environment according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of another visual map positioning system suitable for a closed environment according to the present invention;
FIG. 11 is a schematic structural diagram of another visual map navigation system suitable for a closed environment according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of another visual map navigation system suitable for a closed environment according to the technical solution of the present invention.
Detailed Description
In order to present the technical solution of the invention more clearly, the invention is further described below with reference to the accompanying drawings.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The singular forms "a", "an" and "the" include plural referents unless the context clearly dictates otherwise. As used herein, the terms "first" and "second" are used only to distinguish one element or class of elements from another, and are not intended to denote position or importance.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. Network 104 is the medium used to provide communication links between terminal devices 101, 102, 103 and server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to a user device, a network device, or a device formed by integrating a user device and a network device through a network. The user equipment includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user through a touch panel, such as a smartphone or a tablet computer, and the mobile electronic product may run any operating system, such as Android or iOS. The network device includes electronic devices capable of automatically performing numerical calculation and information processing according to preset or stored instructions, whose hardware includes, but is not limited to, microprocessors, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Arrays (FPGA), Digital Signal Processors (DSP), embedded devices, and the like. The network device includes but is not limited to a computer, a network host, a single network server, a set of multiple network servers, or a cloud formed by many servers; here, the cloud is composed of a large number of computers or web servers based on cloud computing, a kind of distributed computing in which one virtual supercomputer consists of a collection of loosely coupled computers.
The network 104 includes, but is not limited to, the Internet, a mobile communication network, a wide area network, a metropolitan area network, a local area network, a VPN, a wireless ad hoc network, etc. For example, the mobile communication network may be a 3G, 4G or 5G mobile communication system, such as a Wideband Code Division Multiple Access (WCDMA) system, a Frequency Division Multiple Access (FDMA) system, an Orthogonal Frequency Division Multiple Access (OFDMA) system, a single-carrier FDMA (SC-FDMA) system, a General Packet Radio Service (GPRS) system, a Long Term Evolution (LTE) system, or another such communication system. Of course, those skilled in the art should understand that the above terminal devices are only examples; other existing or future terminal devices that are applicable to the present application are also included within the scope of protection of the present application.
The server 105 may be a server, a server cluster composed of several servers, or a cloud computing service center, such as a cloud server. It may also be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that the visual map construction method, the visual map positioning method and the visual map navigation method provided in the embodiments of the present application are generally executed by a server; accordingly, the visual map construction system, the visual map positioning system and the visual map navigation system are generally disposed in the server.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 is a schematic flow chart of a visual map construction method suitable for a closed environment according to an embodiment of the present application, where the visual map construction method may be executed by a cloud server. The cloud server may be the server 105 in fig. 1.
A closed environment generally refers to an indoor environment, or an environment of limited extent such as an enclosed building, a mountain area or a forest. Because such environments suffer from weak signals and many occlusions, conventional satellite positioning systems such as GPS and BeiDou perform poorly there, so applying the technical solution of this embodiment yields markedly superior positioning and navigation. Note, however, that the solution is not limited in principle to closed environments and is also applicable to open environments; in an open environment, where signal transmission and reception are strong, its advantage over a satellite positioning system is limited.
Referring to fig. 2, a visual map construction method suitable for an enclosed environment includes the following steps:
and step S100, acquiring a live-action image set and a special control point set of the closed environment.
In step S100, the live-action image set consists of a series of images captured in the closed environment; these may be image frames extracted from a video taken by the device in the closed environment, or photographs taken by the device there. The photographing apparatus may be a smartphone, an action camera (GoPro, Insta360, etc.), or any other device equipped with a camera.
The special control point set consists of points at specific image positions pre-selected in the closed environment. A pre-selected specific image can be obtained by photographing a recognizable location chosen in the closed environment, and the image features at that location are selected according to the preset position in the captured specific image to form the special control point set. For example, in a closed environment such as a shopping mall, positions such as the mall entrances and exits, the entrances of each floor and the parking lot entrances can be preset as special control points, forming the special control point set. The pre-selected specific images may be drawn from the live-action image set of the closed environment, or may be acquired separately by the device.
In this embodiment, let the live-action image set be $Px = \{Px_j \mid j \in [1, M]\}$; that is, Px consists of M live-action images $Px_1, Px_2, Px_3, \ldots, Px_{M-1}, Px_M$, where M is a natural number greater than 1, chosen as needed, and may be at least several thousand.
Correspondingly, a special control point set $Py = \{Py_i \mid i \in [1, N]\}$ can be set; that is, Py consists of the position-point image features $Py_1, Py_2, Py_3, \ldots, Py_{N-1}, Py_N$ of N specific images, where N is a natural number greater than 1 and less than M, chosen as needed; it may be tens or hundreds, and its magnitude is far smaller than M.
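For concreteness, the two inputs of step S100 can be held in simple containers. The following Python sketch is purely illustrative; the class and field names are assumptions, not part of the patent:

```python
from dataclasses import dataclass, field

@dataclass
class LiveActionImageSet:
    """Px = {Px_j | j in [1, M]}: captured photos or extracted video frames."""
    images: list = field(default_factory=list)  # M items, often thousands

@dataclass
class SpecialControlPoint:
    """Py_i: image features of one pre-selected location, e.g. a mall entrance."""
    label: str        # human-readable name of the location
    features: object  # descriptors extracted at the chosen image position
```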
With continuing reference to fig. 2, the visual map construction method of the present embodiment further includes the following steps:
and S101, constructing a three-dimensional visual map based on the live-action image set, and obtaining a special three-dimensional coordinate set of the special control point set corresponding to the three-dimensional visual map.
In step S101, constructing the three-dimensional visual map based on the live-action image set means recovering the three-dimensional scene of the closed environment through three-dimensional reconstruction from the live-action images. The construction process is a sparse reconstruction that obtains a sparse point cloud and poses from the images. The construction method may be a Structure-from-Motion (SfM) algorithm or a Simultaneous Localization And Mapping (SLAM) algorithm. Building a three-dimensional visual map typically includes: selecting image key frames; extracting image features of the key frames, such as color, texture and shape features; and performing feature matching, triangulation and pose recovery. During construction, the sparse reconstruction can be refined by bundle adjustment (BA) optimization to reduce the reprojection error of the reconstructed images, so that the reconstructed three-dimensional visual map has the best spatial geometric accuracy.
SfM-based algorithms mainly recover the three-dimensional spatial structure of an unordered set of input images; the traditional pipeline includes image-to-image feature matching, essential matrix estimation, point cloud triangulation, and point cloud fusion and optimization. Such methods are characteristically offline and rely on the image sensor. SLAM-based algorithms require the input images to be ordered and aim to build an environment model during motion while estimating the user's own motion without prior environment information; the traditional pipeline includes pose prediction, feature matching and tracking, local map generation, pose optimization and loop closure detection. Such methods are characteristically real-time and can be combined with external sensors such as an IMU. Both SfM and SLAM can generate a three-dimensional visual map of the environment, but their optimization directions differ; in a specific implementation, the advantages of the two can be combined, mixing steps of both algorithms for map generation.
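The following Python sketch illustrates the core SfM steps named above (feature matching, essential matrix estimation, pose recovery, triangulation) for a single image pair using OpenCV; a real mapper repeats this incrementally over the whole live-action image set and refines with bundle adjustment. The function name, the intrinsic matrix K and the 0.75 ratio-test threshold are illustrative assumptions:

```python
import cv2
import numpy as np

def two_view_reconstruction(img1, img2, K):
    """One two-view fragment of the SfM pipeline sketched above: feature
    extraction -> matching -> essential matrix -> pose -> triangulation.
    K is the 3x3 camera intrinsic matrix."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Lowe ratio test keeps only distinctive matches (0.75 is a common choice).
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate matched points into a sparse point cloud (frame of camera 1).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T  # N x 3 sparse 3D points
```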
Based on the above, whether the three-dimensional visual map is constructed with a structure-from-motion algorithm or a simultaneous localization and mapping algorithm, the output map file comprises a three-dimensional image feature set and a corresponding three-dimensional point cloud coordinate set.
Let the live-action image set be $Px = \{Px_j \mid j \in [1, M]\}$; the three-dimensional visual map constructed through step S101 comprises a corresponding three-dimensional image feature set $Sp = \{Sp_j \mid j \in [1, M]\}$ and a three-dimensional point cloud coordinate set $Sx = \{Sx_j \mid j \in [1, M]\}$. Specifically:
the three-dimensional image feature set Sp comprises the three-dimensional image features $Sp_1, Sp_2, Sp_3, \ldots, Sp_{M-1}, Sp_M$ corresponding to the live-action images $Px_1, Px_2, Px_3, \ldots, Px_{M-1}, Px_M$: $Sp_1$ is the set of image features extracted from live-action image $Px_1$, $Sp_2$ from $Px_2$, $Sp_3$ from $Px_3$, and so on, with $Sp_M$ the set of image features extracted from $Px_M$.
the three-dimensional point cloud coordinate set Sx comprises the three-dimensional point cloud coordinates $Sx_1, Sx_2, Sx_3, \ldots, Sx_{M-1}, Sx_M$ corresponding to the three-dimensional image features $Sp_1, Sp_2, Sp_3, \ldots, Sp_{M-1}, Sp_M$: $Sx_1$ is the set of three-dimensional point cloud data reconstructed for $Sp_1$, $Sx_2$ for $Sp_2$, $Sx_3$ for $Sp_3$, and so on, with $Sx_M$ the set reconstructed for $Sp_M$.
In step S100, since the special control point set consists of a series of position points of pre-selected specific images in the closed environment, each such position point carries the image features extracted at that image point. Therefore, in step S101, the special three-dimensional coordinate set corresponding to the special control point set can be obtained in two ways:
In one embodiment, the image features at the specific image position points of the special control point set can be matched against the image features in the three-dimensional image feature set, and the matched three-dimensional image features, together with the corresponding three-dimensional point cloud coordinates in the three-dimensional point cloud coordinate set, are output to form the special three-dimensional coordinate set.
In another embodiment, based on the specific images corresponding to the special control point set and the image features at their position points, the three-dimensional point cloud coordinates corresponding to those image features are output through a structure-from-motion or simultaneous localization and mapping algorithm to form the special three-dimensional coordinate set.
In this embodiment, the three-dimensional point cloud coordinates output from the image features of the special control point set, forming the corresponding special three-dimensional coordinate set, are generated in the same coordinate system as the one used to construct the three-dimensional visual map. Thus the three-dimensional point cloud coordinate set and the special three-dimensional coordinate set share the same three-dimensional coordinate system.
Let the special control point set be $Py = \{Py_i \mid i \in [1, N]\}$; a special three-dimensional coordinate set $Pz = \{Pz_i \mid i \in [1, N]\}$ can be obtained through step S101. That is, Pz consists of the special three-dimensional coordinates $Pz_1, Pz_2, Pz_3, \ldots, Pz_{N-1}, Pz_N$ corresponding to the special control points (i.e., the image features of the specific image position points) $Py_1, Py_2, Py_3, \ldots, Py_{N-1}, Py_N$. Specifically: $Pz_1$ is the three-dimensional coordinate for special control point $Py_1$, $Pz_2$ for $Py_2$, $Pz_3$ for $Py_3$, and so on, with $Pz_N$ the three-dimensional coordinate for $Py_N$.
If the coordinate system of the constructed three-dimensional visual map is an x-y-z coordinate system, the special three-dimensional coordinate set Pz formed from the special control point set Py can be expressed as $\{(x_i, y_i, z_i) \mid i \in [1, N]\}$, where $x_i$, $y_i$ and $z_i$ are the coordinates of the ith special control point along the x-, y- and z-axes of the three-dimensional visual map coordinate system. That is, $Pz_1 = (x_1, y_1, z_1)$, $Pz_2 = (x_2, y_2, z_2)$, $Pz_3 = (x_3, y_3, z_3)$, ..., $Pz_N = (x_N, y_N, z_N)$.
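A minimal sketch of the first embodiment above (matching control-point features into the map and reading off the matched point cloud coordinates) might look as follows in Python; all names, the nearest-neighbour rule and the distance threshold are assumptions for illustration:

```python
import numpy as np

def special_3d_coordinates(py_descriptors, map_descriptors, map_points_3d,
                           max_dist=0.7):
    """Match each control point descriptor against the map's 3D image
    features and read off the matched point cloud coordinate (x_i, y_i, z_i)."""
    pz = []
    for d in py_descriptors:                                 # one per control point
        dists = np.linalg.norm(map_descriptors - d, axis=1)  # Euclidean feature distance
        j = int(np.argmin(dists))
        pz.append(map_points_3d[j] if dists[j] < max_dist else None)
    return pz  # special 3D coordinate set Pz (None where no match was found)
```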
With continuing reference to fig. 2, the visual map construction method of the present embodiment further includes the following steps:
step S102, a two-dimensional plane map of the closed environment is obtained, and a special two-dimensional coordinate set of the special control point set corresponding to the two-dimensional plane map is obtained.
In step S102, the two-dimensional plane map of the closed environment may come from a CAD design drawing of the building or from a plan provided by a map vendor (e.g., Baidu, NavInfo, etc.), so it can be obtained directly from existing maps. Generally, a two-dimensional plane map comprises a planar image feature set of the closed environment and a two-dimensional plane coordinate set corresponding to the planar image features. The planar image feature set may be $Rp = \{Rp_k \mid k \in [1, L]\}$ and the two-dimensional plane coordinate set $Rx = \{Rx_k \mid k \in [1, L]\}$, where L is a natural number greater than 1, typically also at least several thousand. Generally, Rp is a series of position-point image features of the existing plane map, and Rx is the set of their two-dimensional coordinates in the plane map coordinate system: planar image feature point $Rp_1$ corresponds to two-dimensional coordinate $Rx_1$, $Rp_2$ to $Rx_2$, $Rp_3$ to $Rx_3$, and so on, with $Rp_L$ corresponding to $Rx_L$.
The special control point set consists of points at specific image positions pre-selected in the closed environment. Based on the selected points, the corresponding special position points can be found directly (manually) on the two-dimensional plane map, and their two-dimensional coordinates in the plane map coordinate system are taken to form the special two-dimensional coordinate set.
In other embodiments, the two-dimensional plane map has a corresponding two-dimensional image feature set; the image features of the special control point set may be matched with these two-dimensional image features, and the matched two-dimensional image features and their corresponding two-dimensional coordinates are output, the two-dimensional coordinates forming the special two-dimensional coordinate set.
Let the two-dimensional plane map coordinate system be a u-v coordinate system, the special control point set $Py = \{Py_i \mid i \in [1, N]\}$, the special three-dimensional coordinate set $Pz = \{Pz_i \mid i \in [1, N]\}$, and the special two-dimensional coordinate set $Pu = \{Pu_i \mid i \in [1, N]\}$. Pu can also be expressed as $\{(u_i, v_i) \mid i \in [1, N]\}$, where $u_i$ and $v_i$ are the coordinates of the ith special control point along the u- and v-axes of the two-dimensional plane map coordinate system; that is, $Pu_1 = (u_1, v_1)$, $Pu_2 = (u_2, v_2)$, $Pu_3 = (u_3, v_3)$, ..., $Pu_N = (u_N, v_N)$.
With continued reference to fig. 2, the visual map construction method of the present embodiment further includes the following steps:
step S103, calculating a transformation matrix between the special three-dimensional coordinate set and the special two-dimensional coordinate set to establish a coordinate element corresponding relation between the three-dimensional visual map and the two-dimensional plane map.
Step S103 obtains the correspondence between coordinate elements of the three-dimensional visual map established for the closed environment and those of the two-dimensional plane map by calculating a transformation matrix between the special three-dimensional coordinate set and the special two-dimensional coordinate set.
Specifically: let the special control point set be $Py = \{Py_i \mid i \in [1, N]\}$, the special three-dimensional coordinate set be $Pz = \{Pz_i \mid i \in [1, N]\}$, with $\{(x_i, y_i, z_i) \mid i \in [1, N]\}$ denoting its coordinates in the three-dimensional visual map coordinate system x-y-z, and the special two-dimensional coordinate set be $Pu = \{Pu_i \mid i \in [1, N]\}$, with $\{(u_i, v_i) \mid i \in [1, N]\}$ denoting its coordinates in the two-dimensional plane map coordinate system u-v.
Set the z-axis direction of the three-dimensional visual map coordinate system as the height direction of the three-dimensional visual map, let R be a rotation matrix, t a translation vector, c a scaling coefficient, and T the transformation matrix composed of R, t and c; a value of T is obtained based on formula (1) and formula (2):
formula (1) is:
$$\begin{pmatrix} u_i \\ v_i \end{pmatrix} = cR \begin{pmatrix} x_i \\ y_i \end{pmatrix} + t, \quad i \in [1, N]$$
formula (2) is:
$$T = \underset{c,R,t}{\arg\min}\; e$$
where e is the calculated value of
$$\sum_{i=1}^{N} \left\| \begin{pmatrix} u_i \\ v_i \end{pmatrix} - \left( cR \begin{pmatrix} x_i \\ y_i \end{pmatrix} + t \right) \right\|^2 .$$
Since e is an error, the R, t and c that minimize e are the solution sought; the most appropriate T is obtained by minimizing e, which yields the corresponding values of R, t and c.
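One standard closed-form solver for this least-squares problem is the Umeyama alignment, sketched below in Python under the assumption that the z (height) axis is simply dropped before fitting. The patent does not name a particular solver, so this is illustrative only:

```python
import numpy as np

def fit_plane_transform(pz, pu):
    """Fit (c, R, t) minimizing e over the N special control points
    (2D similarity transform, Umeyama closed form).
    pz: N x 3 special 3D coordinates; pu: N x 2 special 2D coordinates."""
    X = np.asarray(pz, dtype=float)[:, :2]  # drop z, the height axis
    Y = np.asarray(pu, dtype=float)
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my

    # Rotation from the SVD of the cross-covariance, reflection-corrected.
    U, S, Vt = np.linalg.svd(Yc.T @ Xc / len(X))
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    c = np.trace(np.diag(S) @ D) / Xc.var(axis=0).sum()  # optimal scale
    t = my - c * R @ mx                                  # optimal translation
    return c, R, t
```

The returned (c, R, t) together constitute the transformation matrix T of formula (1); applying them to the special three-dimensional coordinates and comparing with the special two-dimensional coordinates gives the residual e.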
With continuing reference to fig. 2, the visual map construction method of the present embodiment further includes the following steps:
and step S104, forming a visual map file based on the three-dimensional visual map, the two-dimensional plane map and the corresponding relation of the coordinate elements between the three-dimensional visual map and the two-dimensional plane map.
As set out above, the three-dimensional visual map comprises the three-dimensional image feature set and the three-dimensional point cloud coordinate set corresponding to it; the two-dimensional plane map comprises the planar image feature set and the two-dimensional plane coordinate set; and the coordinate-element correspondence between the three-dimensional visual map and the two-dimensional plane map is obtained through the transformation matrix between the special three-dimensional coordinate set and the special two-dimensional coordinate set.
Let the three-dimensional image feature set be $Sp = \{Sp_j \mid j \in [1, M]\}$, the three-dimensional point cloud coordinate set $Sx = \{Sx_j \mid j \in [1, M]\}$, the special control point set $Py = \{Py_i \mid i \in [1, N]\}$, the special three-dimensional coordinate set $Pz = \{Pz_i \mid i \in [1, N]\}$, the special two-dimensional coordinate set $Pu = \{Pu_i \mid i \in [1, N]\}$, the planar image feature set $Rp = \{Rp_k \mid k \in [1, L]\}$, the two-dimensional plane coordinate set $Rx = \{Rx_k \mid k \in [1, L]\}$, and the transformation matrix T. The visual map file then comprises: the three-dimensional image feature set Sp and its corresponding three-dimensional point cloud coordinate set Sx; the planar image feature set Rp and the two-dimensional plane coordinate set Rx; and the coordinate-element correspondence T between the three-dimensional visual map and the two-dimensional plane map.
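As a sketch, the visual map file could be packaged as follows; the npz container and field names are assumptions, since the patent does not specify a file format:

```python
import numpy as np

def save_visual_map(path, Sp, Sx, Rp, Rx, c, R, t):
    """Package the visual map file: the 3D visual map (Sp, Sx), the 2D plane
    map (Rp, Rx) and the coordinate correspondence T = (c, R, t)."""
    np.savez_compressed(path, Sp=Sp, Sx=Sx, Rp=Rp, Rx=Rx, c=c, R=R, t=t)
```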
Based on the visual map construction method of fig. 2 and using the constructed visual map file, fig. 3 presents a visual map positioning method suitable for a closed environment, comprising:
and step S200, acquiring a device observation image.
The positioning function is typically used when a user photographs the current closed scene with a device (e.g., a camera or mobile phone), forming an observation image Po.
Step S201, extracting an observation image characteristic set of the observation image.
The observation image feature set consists of the image features extracted from the observation image Po; the extraction of image features such as color, texture and shape features is prior art and is not repeated here. The observation image feature set may be denoted Ps.
Step S202, carrying out image feature matching on the observation image feature set and the three-dimensional image feature set to obtain a three-dimensional image matching feature set.
A direct comparison method can be used: the Euclidean distance between each image feature in the observation image feature set and the image features of each image frame in the three-dimensional image feature set is compared, so that suitable image features are matched. In general, a Euclidean distance within a predetermined threshold may be considered a match.
The image features in Ps can be compared with the feature sets $Sp_1, Sp_2, Sp_3, \ldots, Sp_{M-1}, Sp_M$ in the three-dimensional image feature set Sp, and the three-dimensional image matching feature set Sp′ is obtained by matching. Specifically, in $Sp = \{Sp_j \mid j \in [1, M]\}$, the ith three-dimensional image feature set $Sp_i$ is matched against Ps: if image features in $Sp_i$ match image features in Ps, the matched features are recorded in Sp′; if no image feature in $Sp_i$ matches any image feature in Ps, a zero or other non-match marker is recorded at the corresponding position of Sp′. The output of this step is the three-dimensional image matching feature set Sp′.
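A possible Python rendering of this Euclidean-distance matching uses OpenCV's brute-force matcher; the threshold value is an illustrative assumption:

```python
import cv2

def match_observation(ps_descriptors, sp_descriptors, max_dist=0.7):
    """Brute-force L2 (Euclidean) matching of observation features Ps
    against map features Sp; matches within a preset distance are kept.
    Descriptors are float32 arrays."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.match(ps_descriptors, sp_descriptors)
    return [m for m in matches if m.distance < max_dist]  # contributes to Sp'
```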
Step S203, extracting a three-dimensional point cloud matching coordinate set corresponding to the three-dimensional image matching feature set from the three-dimensional point cloud coordinate set.
Because the image features in the three-dimensional image feature set Sp correspond to the three-dimensional point cloud coordinates in the three-dimensional point cloud coordinate set Sx, the three-dimensional point cloud coordinates corresponding to the image features matched in Sp' can be obtained from Sx through this correspondence. These coordinates are extracted as the three-dimensional point cloud matching coordinate set Sx'.
Step S204, solving the device pose based on the three-dimensional image matching feature set and the three-dimensional point cloud matching coordinate set.
In step S204, a PnP (Perspective-n-Point) algorithm may be adopted to obtain the pose of the shooting device. The PnP problem is that of estimating the pose of a calibrated camera given n 3D points in a world reference frame and their corresponding 2D projections in the image. The PnP algorithm can therefore solve the pose of the shooting device from the image features of the three-dimensional image matching feature set Sp' (the 2D projections) and the three-dimensional coordinates of the three-dimensional point cloud matching coordinate set Sx' (the 3D points). The PnP algorithm is widely applied in existing technical schemes such as camera pose tracking, object pose tracking, AR/VR, robot manipulation, and solving initial pose values in SLAM; common solutions include the DLT, P3P, EPnP and UPnP algorithms, which are basic algorithms throughout the three-dimensional vision field and are not repeated in this technical scheme.
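Since the PnP solvers named above are standard, a brief sketch using OpenCV's solvePnP (here with the EPnP variant) illustrates this step; the camera intrinsic matrix K is assumed known from calibration, and the matched 2D/3D pairs are assumed already extracted from Sp' and Sx':

```python
import cv2
import numpy as np

def solve_device_pose(pts3d, pts2d, K, dist=None):
    """Estimate the device pose from matched 3D map points (N, 3) and their
    2D observations (N, 2) using OpenCV's PnP solver (EPnP variant)."""
    ok, rvec, tvec = cv2.solvePnP(
        pts3d.astype(np.float64), pts2d.astype(np.float64),
        K, dist, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
    return R, tvec               # transform taking map coordinates into the camera frame
```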
Based on the visual map construction method described in fig. 2 and the visual map positioning method described in fig. 3, fig. 4 further provides a visual map navigation method suitable for a closed environment, including:
Step S300, solving the device pose to form a real-time three-dimensional coordinate set of the device in the three-dimensional visual map.
The current device pose Fo is obtained by the visual map positioning method shown in fig. 3. Fo is a real-time three-dimensional coordinate expressed in the three-dimensional visual map coordinate system x-y-z, Fo = (x_o, y_o, z_o), where o is the serial number of the real-time pose. When the user device moves in the closed environment, the visual map positioning method shown in fig. 3 forms, in real time, a real-time three-dimensional coordinate set P of the device in the three-dimensional visual map.
If the device has just started navigation for the first time or has just started real-time positioning, there is only one real-time three-dimensional coordinate Fo in the current real-time three-dimensional coordinate set P. If the device has been navigating or positioning in real time for a period of time, there are several real-time three-dimensional coordinates Fo_1 to Fo_W in the current set P, where W is a natural number greater than 1.
That is, P is Fo, or P is {Fo_h | h ∈ [1, W]}.
Step S301, aligning the real-time three-dimensional coordinate set to the two-dimensional plane map by using the coordinate element corresponding relation to obtain a real-time two-dimensional coordinate set.
Setting the transformation matrix as T, the real-time three-dimensional coordinate set as P, and the real-time two-dimensional coordinate set as Q, aligning the real-time three-dimensional coordinate set to the two-dimensional plane map by using the coordinate element correspondence in step S301 to obtain a real-time two-dimensional coordinate set includes:
obtaining a real-time two-dimensional coordinate set Q based on equation (3):
the formula (3) is: q = T · P.
The transformation matrix T transforms the three-dimensional coordinates of the real-time three-dimensional coordinate set P, expressed in the three-dimensional visual map coordinate system x-y-z, into two-dimensional coordinates in the two-dimensional plane map coordinate system u-v. Taking the current device pose Fo as an example, its three-dimensional coordinate (x_o, y_o, z_o) is transformed to Fo', whose two-dimensional coordinate is (u_o, v_o).
When the real-time three-dimensional coordinate set P is Fo, the real-time two-dimensional coordinate set Q is Fo'.
When the real-time three-dimensional coordinate set P is {Fo_h | h ∈ [1, W]}, the real-time two-dimensional coordinate set Q is {Fo'_h | h ∈ [1, W]}.
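As an illustration of formula (3), the sketch below assumes T is realized as a 2×4 matrix acting on homogeneous coordinates (when the z-axis is the height direction, the column applied to z may simply be zero); the exact storage of T is not fixed by the text, so this is one possible realization:

```python
import numpy as np

def to_plane(T, P):
    """Map real-time 3D coordinates P, shape (W, 3), into the 2D plane map,
    i.e. Q = T . P, with T assumed to be a 2x4 homogeneous transform."""
    Ph = np.hstack([P, np.ones((len(P), 1))])  # (W, 4) homogeneous coordinates
    return (T @ Ph.T).T                        # (W, 2) real-time 2D coordinates

# Usage: Fo_prime = to_plane(T, np.array([[x_o, y_o, z_o]]))[0]
```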
Step S302, a target navigation position is obtained.
The target navigation position is a position point on the two-dimensional plane coordinate system selected by the user device. If the target navigation position is Fd, its two-dimensional coordinate is (u_d, v_d), where d is the serial number of the target navigation position.
Step S303, performing path planning through the real-time two-dimensional coordinate set and the target navigation position to form a corresponding navigation two-dimensional coordinate set.
Because the real-time two-dimensional coordinates and the target navigation position coordinate are all coordinate points on the two-dimensional plane map, a series of two-dimensional coordinate points on the preset navigation route can be output; this set is composed of the coordinates of a number of two-dimensional plane coordinate system points on the preset route. The number of coordinates on the preset route may be set according to a system preset, typically at least several thousand.
For first-time navigation or first-time positioning navigation, P is Fo, the real-time two-dimensional coordinate set Q is Fo', and the target navigation position is Fd. According to the preset navigation route, the navigation two-dimensional coordinate set f on the route between Fo' and the target navigation position Fd can be obtained, where f = {f_g | g ∈ [1, F]}, f_g is the g-th two-dimensional coordinate (u_g, v_g) on the navigation route, g is a serial number, F is a natural number greater than 1, and F is generally at least on the order of several thousand.
For navigation or positioning that has been running for a period of time, the real-time three-dimensional coordinate set P is {Fo_h | h ∈ [1, W]}, the real-time two-dimensional coordinate set Q is {Fo'_h | h ∈ [1, W]}, and the target navigation position is Fd. According to the preset navigation route, the navigation two-dimensional coordinate set f' on the route between Fo'_W and the target navigation position Fd can be obtained, where f' = {f'_g | g ∈ [1, F]}, f'_g is the g-th two-dimensional coordinate (u_g, v_g) on the navigation route, g is a serial number, F is a natural number greater than 1, and F is generally at least on the order of several thousand.
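One possible way to realize the dense coordinate set f described above is to resample the preset route polyline at F evenly spaced points; the following sketch and its function name are illustrative only:

```python
import numpy as np

def sample_route(waypoints, F=2000):
    """Resample a preset route (polyline of 2D waypoints from the device
    position Fo' to the target Fd) into F evenly spaced coordinates f_1..f_F."""
    wp = np.asarray(waypoints, dtype=float)
    seg = np.linalg.norm(np.diff(wp, axis=0), axis=1)  # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])        # arc length at each waypoint
    t = np.linspace(0.0, s[-1], F)                     # F evenly spaced arc lengths
    u = np.interp(t, s, wp[:, 0])                      # interpolated u coordinates
    v = np.interp(t, s, wp[:, 1])                      # interpolated v coordinates
    return np.stack([u, v], axis=1)                    # navigation set f, shape (F, 2)
```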
In other embodiments, path planning may also be performed, as needed, based on a real-time two-dimensional coordinate position related to the real-time two-dimensional coordinate set and the target navigation position, to form a corresponding navigation two-dimensional coordinate set. The related real-time two-dimensional coordinate position is a coordinate position selected by the user from the plane trajectory displayed on the device in real time based on the real-time two-dimensional coordinate set data.
Step S304, forming a navigation path by using the navigation two-dimensional coordinate set.
Since the navigation two-dimensional coordinate set f (or f'; f is taken as the example here) is derived from the two-dimensional plane coordinate set Rx, and Rx corresponds to the plane image feature set Rp, the plane image features corresponding to the navigation two-dimensional coordinate set f can be obtained from Rp. That is, from the navigation two-dimensional coordinate set f = {f_g | g ∈ [1, F]}, the navigation plane image feature set Rx' = {Rx'_g | g ∈ [1, F]} is obtained. The navigation route can thus be formed in the two-dimensional map displayed by the user device based on the navigation plane image feature set Rx'.
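As an illustration of this lookup, assuming Rx and Rp are index-aligned arrays, each navigation coordinate can be mapped to its nearest plane coordinate and hence to its plane image feature; the helper below is hypothetical:

```python
import numpy as np

def features_for_route(f, Rx, Rp):
    """For each navigation coordinate in f, find the nearest coordinate in the
    two-dimensional plane coordinate set Rx and return the corresponding plane
    image feature from Rp (the navigation feature set used to draw the route)."""
    idx = [int(np.argmin(np.linalg.norm(Rx - p, axis=1))) for p in f]
    return [Rp[i] for i in idx]
```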
In a variation of the visual map navigation method shown in fig. 5, based on steps S300 to S304 shown in fig. 4, the method further includes:
and S400, forming a plane track of the equipment based on the real-time two-dimensional coordinate set.
In step S401, if the current device plane trajectory and the current navigation path generate a first offset and the first offset exceeds a first preset offset, step S402 is executed.
In step S402, if the current device plane trajectory and the device plane trajectory formed last time generate a second offset and the second offset exceeds a second preset offset, steps S403 to S404 are performed.
Step S403, solving the current device pose again.
Step S404, updating the real-time three-dimensional coordinate set, the real-time two-dimensional coordinate set and the device plane trajectory according to the current device pose.
In step S405, if the current device plane trajectory and the last device plane trajectory generate the second offset but the second offset does not exceed the second preset offset, step S406 is executed.
Step S406, correcting the path plan through the current real-time two-dimensional coordinate set and the target navigation position so as to update the corresponding navigation two-dimensional coordinate set.
During navigation, positioning errors can occur in a closed environment such as an indoor environment. Therefore, this embodiment also provides a technical solution capable of correcting navigation deviation.
When the device uses the navigation function, real-time three-dimensional visual positioning is performed and each positioning result is recorded, yielding a three-dimensional space trajectory. This trajectory can be transformed into a trajectory on the two-dimensional plane according to the transformation relation. The trajectory is compared with the planned path to judge whether a deviation exists between them. If there is no offset, normal navigation continues. If there is a large offset, it is necessary to determine whether it is caused by a positioning error. For example, a position difference and an angle difference may be calculated between the current positioning result and the last positioning result; if the position difference or the angle difference is too large, it is determined that the positioning is in error. In that case, the system deletes the visual positioning result and performs positioning anew. If the positioning is correct but the user trajectory deviates from the planned path, the user has probably not walked along the planned path; a new navigation path is then calculated, taking the current positioning result as the starting point, and navigation continues.
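The decision logic just described can be sketched as two threshold tests; the threshold values and helper structure below are illustrative assumptions, not values fixed by this scheme:

```python
import numpy as np

def check_deviation(track, path, last_fix, curr_fix,
                    path_tol=2.0, pos_tol=5.0, ang_tol=np.pi / 4):
    """Decide between the three outcomes described above. track and path are
    (N, 2) arrays of 2D coordinates; last_fix and curr_fix are (position,
    heading) tuples from consecutive positionings. Thresholds are illustrative."""
    # first offset: distance from the current position to the planned path
    off1 = np.min(np.linalg.norm(path - track[-1], axis=1))
    if off1 <= path_tol:
        return "continue"        # no significant offset: keep navigating
    # second offset: jump between consecutive positioning results
    d_pos = np.linalg.norm(curr_fix[0] - last_fix[0])
    d_ang = abs(curr_fix[1] - last_fix[1])
    if d_pos > pos_tol or d_ang > ang_tol:
        return "relocalize"      # positioning error: discard the fix, solve pose again
    return "replan"              # user left the route: replan from the current position
```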
Based on the navigation method shown in fig. 4, the real-time two-dimensional coordinate set obtained during navigation is used in step S400 to confirm the device plane trajectory. Specifically, the real-time two-dimensional coordinates in the real-time two-dimensional coordinate set may be compared with the navigation two-dimensional coordinates. If, after the determination in step S401, the current device two-dimensional coordinate in the real-time two-dimensional coordinate set (representing the current device plane trajectory) deviates from the current navigation two-dimensional coordinate and the deviation exceeds the first preset offset, the current device two-dimensional coordinate is further compared with the device two-dimensional coordinate formed last time:
If, after the determination in step S402, the current device two-dimensional coordinate representing the current device plane trajectory deviates from the previous device two-dimensional coordinate and the deviation exceeds the second preset offset, the current positioning (the current device pose) is in error. The current device pose must then be obtained again according to step S403, and the real-time three-dimensional coordinate set, the real-time two-dimensional coordinate set and the plane trajectory displayed by the device must be updated according to step S404 based on the new device pose.
If, after the determination in step S402, the current device two-dimensional coordinate representing the current device plane trajectory does not deviate from the previous device two-dimensional coordinate, or deviates but the deviation does not exceed the second preset offset, the current positioning (the current device pose) is not in error. The deviation between the current device two-dimensional coordinate and the current navigation two-dimensional coordinate found in step S401 then arises because the user has not walked along the navigation path in the closed environment and has left the planned route. Step S406 therefore needs to be executed: the navigation path plan is corrected according to the user's current real-time two-dimensional coordinate set and the target navigation position, that is, navigation is performed anew. The manner of re-navigation may refer to steps S303 and S304.
Note that the offset of the device plane trajectory from the navigation path in step S401 may be obtained by calculating the coordinate offset between the real-time two-dimensional coordinate set representing the device plane trajectory and the navigation two-dimensional coordinate set representing the navigation path. Specifically, the offset may be obtained by calculating a coordinate position offset or a coordinate angle offset.
Similarly, the offset of the current device plane trajectory from the previously formed device plane trajectory in step S402 may be obtained from the real-time two-dimensional coordinate set, specifically by calculating the offset of the current device two-dimensional coordinate from the previous device two-dimensional coordinate. This offset may likewise be obtained by calculating a coordinate position offset or a coordinate angle offset.
Based on the above method of the present embodiment, fig. 6 illustrates a visual map construction system suitable for a closed environment, including: a processor 10 and a memory 11. A computer program is stored in the memory 11; when executed by the processor 10, it performs the steps of the visual map construction method described in steps S100 to S104.
Based on the above method of the present embodiment, fig. 7 illustrates a visual map positioning system suitable for a closed environment, including: a processor 20 and a memory 21. A computer program is stored in the memory 21; when executed by the processor 20, it performs the steps of the visual map positioning method described in steps S200 to S204.
Based on the above method of the present embodiment, fig. 8 illustrates a visual map navigation system suitable for a closed environment, including: a processor 30 and a memory 31. A computer program is stored in the memory 31; when executed by the processor 30, it performs the steps of the visual map navigation method described in steps S300 to S304. In other embodiments, when executing the computer program, the processor 30 may also perform the steps described in steps S300 to S304 and S400 to S406.
Based on the above method of the present embodiment, fig. 9 illustrates another visual map construction system suitable for a closed environment, including: a first acquisition unit 40, a three-dimensional map construction unit 41, a two-dimensional map construction unit 42, a calculation unit 43 and a first output unit 44, which are adapted to perform steps S100 to S104 in sequence.
Based on the foregoing method of the present embodiment, fig. 10 illustrates another visual map positioning system suitable for a closed environment, including: a second acquisition unit 50, a first extraction unit 51, a matching unit 52, a second extraction unit 53 and a second output unit 54, which are adapted to perform steps S200 to S204 in sequence.
Based on the above method of the present embodiment, fig. 11 illustrates another visual map navigation system suitable for a closed environment, including: a positioning unit 60, a conversion unit 61, a third acquisition unit 62, a path planning unit 63 and a third output unit 64, which are adapted to perform steps S300 to S304 in sequence.
Based on the visual map navigation system for a closed environment of fig. 11, in other embodiments the visual map navigation system may further include, in addition to the positioning unit 60, the conversion unit 61, the third acquisition unit 62, the path planning unit 63 and the third output unit 64: a trajectory unit 70, a first determination unit 71, a second determination unit 72, an updating unit 73 and a correction unit 74, as shown in fig. 12. The trajectory unit 70 is adapted to perform step S400, the first determination unit 71 step S401, and the second determination unit 72 step S402; the positioning unit 60 is further adapted to perform step S403, the updating unit 73 step S404, the second determination unit 72 is further adapted to perform step S405, and the correction unit 74 step S406. The first determination unit 71 is communicatively connected to the second determination unit 72; the second determination unit 72 is communicatively connected to the positioning unit 60 and the correction unit 74, respectively; and the positioning unit 60 is further communicatively connected to the updating unit 73.
Based on the above method of the present embodiment, the present embodiment also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the visual map construction method as recited in steps S100 to S104.
Based on the above method of the present embodiment, the present embodiment further provides a computer-readable storage medium storing a computer program, which when executed by a processor implements the steps of the visual map location method as recited in steps S200 to S204.
Based on the above method of the present embodiment, the present embodiment also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the visual map navigation method as recited in steps S300 to S304. In other embodiments, the computer program, when executed by the processor, may also implement the steps described in steps S300 to S304 and S400 to S406.
The foregoing description has described specific embodiments of the present invention. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.

Claims (23)

1. A visual map construction method suitable for closed environment, characterized by comprising:
acquiring a live-action image set and a special control point set of a closed environment;
constructing a three-dimensional visual map based on the live-action image set, and obtaining a special three-dimensional coordinate set of the special control point set corresponding to the three-dimensional visual map; the three-dimensional visual map includes: the three-dimensional image feature set and a three-dimensional point cloud coordinate set corresponding to the three-dimensional image feature set;
acquiring a two-dimensional plane map of the closed environment, and acquiring a special two-dimensional coordinate set of the special control point set corresponding to the two-dimensional plane map; the two-dimensional plane map includes: a plane image feature set and a two-dimensional plane coordinate set corresponding to the plane image feature set;
calculating a transformation matrix between the special three-dimensional coordinate set and the special two-dimensional coordinate set to establish a coordinate element corresponding relation between the three-dimensional visual map and the two-dimensional plane map;
and forming a visual map file based on the three-dimensional visual map, the two-dimensional plane map and the corresponding relation of the coordinate elements between the three-dimensional visual map and the two-dimensional plane map.
2. The visual map construction method of claim 1, wherein said obtaining a set of live-action images of an enclosed environment comprises: shooting a plurality of live-action images, and forming the live-action image set by using the live-action images.
3. The visual map construction method of claim 1, wherein said obtaining a set of live-action images of an enclosed environment comprises: shooting a live-action video, extracting a plurality of live-action image frames from the live-action video, and forming the live-action image set by using the live-action image frames.
4. The visual map construction method of claim 1, wherein the set of special control points is formed by pre-selecting a particular location of the enclosed space.
5. The visual map construction method of claim 1, wherein constructing a three-dimensional visual map based on the set of live-action images comprises:
extracting a corresponding three-dimensional image feature set from the live-action image set;
and outputting a three-dimensional point cloud coordinate set corresponding to the three-dimensional image feature set and the three-dimensional visual map model by utilizing a structure-from-motion algorithm or a visual SLAM technique.
6. The visual map construction method of claim 1, wherein the set of special control points comprises: the 1st to N-th special control points, wherein N is a natural number greater than 1;
the special three-dimensional coordinate set includes: three-dimensional coordinates of the 1st to N-th special control points in the three-dimensional visual map coordinate system, namely {(x_i, y_i, z_i) | i ∈ [1, N]}; x_i is the coordinate of the i-th special control point in the x-axis direction of the three-dimensional visual map coordinate system, y_i is the coordinate of the i-th special control point in the y-axis direction of the three-dimensional visual map coordinate system, and z_i is the coordinate of the i-th special control point in the z-axis direction of the three-dimensional visual map coordinate system;
the special two-dimensional coordinate set includes: two-dimensional coordinates of the 1st to N-th special control points in the two-dimensional plane map coordinate system, namely {(u_i, v_i) | i ∈ [1, N]}; u_i is the coordinate of the i-th special control point in the u-axis direction of the two-dimensional plane map coordinate system, and v_i is the coordinate of the i-th special control point in the v-axis direction of the two-dimensional plane map coordinate system;
the calculating a transformation matrix between the special three-dimensional coordinate set and the special two-dimensional coordinate set comprises:
setting the z-axis direction of the three-dimensional visual map coordinate system as the height direction of the three-dimensional visual map, R as a rotation matrix, t as a translation vector, c as a scaling coefficient, and T as the transformation matrix, and obtaining the T value based on the formula (1) and the formula (2):
the formula (1) is: (u_i, v_i)^T = c · R · (x_i, y_i)^T + t, the transformation matrix T being composed of the scaling coefficient c, the rotation matrix R and the translation vector t;
the formula (2) is: e = Σ_{i=1}^{N} ‖ (u_i, v_i)^T − ( c · R · (x_i, y_i)^T + t ) ‖²;
e is the calculated value of the sum of squared residuals in the formula (2) over the N special control points.
7. The visual map construction method of claim 6, wherein the deriving the T value based on equations (1) and (2) comprises: solving the minimization of the e value to obtain a T value.
8. A visual map positioning method suitable for closed environment, based on the visual map constructed by the method of any one of claims 1 to 7, comprising:
acquiring an equipment observation image;
extracting an observation image feature set of the observation image;
carrying out image feature matching on the observation image feature set and the three-dimensional image feature set to obtain a three-dimensional image matching feature set;
extracting a three-dimensional point cloud matching coordinate set corresponding to the three-dimensional image matching feature set from the three-dimensional point cloud coordinate set;
and solving the equipment pose based on the three-dimensional image matching feature set and the three-dimensional point cloud matching coordinate set.
9. A visual map navigation method suitable for closed environment, based on the visual map constructed by the method according to any one of claims 1 to 7, characterized by comprising:
solving the pose of the device based on the method according to claim 8, and forming a real-time three-dimensional coordinate set of the device in the three-dimensional visual map;
aligning the real-time three-dimensional coordinate set to the two-dimensional plane map by using the corresponding relation of the coordinate elements to obtain a real-time two-dimensional coordinate set;
acquiring a target navigation position;
performing path planning through the real-time two-dimensional coordinate set and the target navigation position to form a corresponding navigation two-dimensional coordinate set;
and forming a navigation path by using the navigation two-dimensional coordinate set.
10. The visual map navigation method of claim 9, further comprising:
forming a device plane track based on the real-time two-dimensional coordinate set;
if the current device plane track and the current navigation path generate a first offset and the first offset exceeds a first preset offset, then:
if the current equipment plane track and the equipment plane track formed last time generate a second offset and the second offset exceeds a second preset offset, then:
solving the current pose of the equipment again;
updating the real-time three-dimensional coordinate set, the real-time two-dimensional coordinate set and the equipment plane track according to the current pose of the equipment;
if the current device plane trajectory and the last device plane trajectory generate a second offset but the second offset does not exceed the second preset offset, then:
and correcting the path plan through the current real-time two-dimensional coordinate set and the target navigation position so as to update the corresponding navigation two-dimensional coordinate set.
11. The visual map navigation method of claim 9, wherein the real-time three-dimensional coordinate set comprises: the 1st, or 1st to W-th, device real-time three-dimensional coordinates formed during navigation or positioning, wherein W is a natural number greater than 1;
the real-time two-dimensional coordinate set comprises: the 1st, or 1st to W-th, device real-time two-dimensional coordinates formed during navigation;
the navigation two-dimensional coordinate set comprises: the 1st to F-th navigation two-dimensional coordinates, wherein F is a natural number greater than 1, the 1st navigation two-dimensional coordinate is the current real-time two-dimensional coordinate at the time of path planning, and the F-th navigation two-dimensional coordinate is the target navigation position.
12. The visual map navigation method of claim 11, further comprising:
comparing the current real-time two-dimensional coordinate with the current navigation two-dimensional coordinate;
if the current real-time two-dimensional coordinate and the current navigation two-dimensional coordinate generate a third coordinate offset and the third coordinate offset exceeds a third preset coordinate offset, then:
if the current real-time two-dimensional coordinate and the last formed real-time two-dimensional coordinate generate a fourth coordinate offset and the fourth coordinate offset exceeds a fourth preset coordinate offset, then:
solving the current pose of the equipment again;
updating the current real-time three-dimensional coordinate and the real-time two-dimensional coordinate according to the current pose of the equipment so as to update the real-time three-dimensional coordinate set and the real-time two-dimensional coordinate set;
if the current real-time two-dimensional coordinate and the last formed real-time two-dimensional coordinate generate a fourth coordinate offset but the fourth coordinate offset does not exceed the fourth preset coordinate offset, then:
and correcting the path planning through the current real-time two-dimensional coordinate and the target navigation position so as to update a corresponding navigation two-dimensional coordinate set.
13. The visual map navigation method of claim 10 or 12, wherein the coordinate offset is calculated based on at least one of a coordinate position offset calculation and a coordinate angle offset calculation.
14. The visual map navigation method of claim 9, wherein said transformation matrix is set to T, said set of real-time three-dimensional coordinates is set to P, and said set of real-time two-dimensional coordinates is set to Q;
the aligning the real-time three-dimensional coordinate set to the two-dimensional plane map by using the coordinate element correspondence to obtain a real-time two-dimensional coordinate set includes:
obtaining a real-time two-dimensional coordinate set Q based on equation (3):
the formula (3) is: q = T · P.
15. A visual mapping system adapted for use in an enclosed environment, comprising: a processor and a memory;
the memory has stored therein a computer program which, when executed by the processor, performs the steps of the visual map construction method of any of claims 1 to 7.
16. A visual map positioning system adapted for use in an enclosed environment, comprising: a processor and a memory;
the memory has stored therein a computer program which, when executed by the processor, performs the steps of the visual map location method of claim 8.
17. A visual map navigation system adapted for use in an enclosed environment, comprising: a processor and a memory;
the memory has stored therein a computer program which, when executed by the processor, performs the steps of the visual map navigation method of any one of claims 9 to 14.
18. A visual mapping system adapted for use in an enclosed environment, comprising:
the first acquisition unit is suitable for acquiring a live-action image set and a special control point set of a closed environment;
the three-dimensional map construction unit is suitable for constructing a three-dimensional visual map based on the live-action image set and obtaining a special three-dimensional coordinate set of the special control point set corresponding to the three-dimensional visual map; the three-dimensional visual map includes: a three-dimensional image feature set and a three-dimensional point cloud coordinate set corresponding to the three-dimensional image feature set;
the two-dimensional map construction unit is suitable for acquiring a two-dimensional plane map of the closed environment and acquiring a special two-dimensional coordinate set of the special control point set corresponding to the two-dimensional plane map; the two-dimensional plane map includes: a plane image feature set and a two-dimensional plane coordinate set corresponding to the plane image feature set;
the calculation unit is suitable for calculating a transformation matrix between the special three-dimensional coordinate set and the special two-dimensional coordinate set so as to establish a coordinate element correspondence between the three-dimensional visual map and the two-dimensional plane map;
and the first output unit is suitable for forming a visual map file based on the three-dimensional visual map, the two-dimensional plane map and the coordinate element correspondence between the three-dimensional visual map and the two-dimensional plane map.
19. A visual map positioning system adapted for use in a closed environment, based on the visual map construction system of claim 18, comprising:
the second acquisition unit is suitable for acquiring an equipment observation image;
the first extraction unit is suitable for extracting an observation image characteristic set of the observation image;
the matching unit is suitable for performing image feature matching on the observation image feature set and the three-dimensional image feature set to obtain a three-dimensional image matching feature set;
the second extraction unit is suitable for extracting a three-dimensional point cloud matching coordinate set corresponding to the three-dimensional image matching feature set from the three-dimensional point cloud coordinate set;
and the second output unit is suitable for solving the equipment pose based on the three-dimensional image matching feature set and the three-dimensional point cloud matching coordinate set.
20. A visual map navigation system adapted for use in a closed environment, based on the visual map construction system of claim 18, comprising:
a positioning unit adapted to form a real-time three-dimensional coordinate set of a device in the three-dimensional visual map using the visual map positioning system of claim 19;
the conversion unit is suitable for aligning the real-time three-dimensional coordinate set to the two-dimensional plane map by utilizing the corresponding relation of the coordinate elements so as to obtain a real-time two-dimensional coordinate set;
the third acquisition unit is suitable for acquiring a target navigation position;
the path planning unit is suitable for planning a path through the real-time two-dimensional coordinate set and the target navigation position to form a corresponding navigation two-dimensional coordinate set;
and the third output unit is suitable for forming a navigation path by utilizing the navigation two-dimensional coordinate set.
21. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the visual map construction method according to any one of claims 1 to 7.
22. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when being executed by a processor, carries out the steps of the visual map localization method according to claim 8.
23. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the steps of the visual map navigation method according to any one of claims 9 to 14.