CN110457414B - Offline map processing and virtual object display method, device, medium and equipment - Google Patents

Info

Publication number
CN110457414B
CN110457414B
Authority
CN
China
Prior art keywords
feature extraction
pose information
offline map
virtual object
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910693978.3A
Other languages
Chinese (zh)
Other versions
CN110457414A (en)
Inventor
黄锋华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910693978.3A
Publication of CN110457414A
Application granted
Publication of CN110457414B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses an offline map processing method, a virtual object display method, an offline map processing apparatus, a virtual object display apparatus, a storage medium, and an electronic device, and relates to the technical field of augmented reality. The virtual object display method comprises the following steps: acquiring a current frame image captured by a target terminal, determining pose information of the current frame image as first pose information, and extracting multiple types of feature information from the current frame image; determining pose information of the current frame image relative to an offline map coordinate system based on the multiple types of feature information, as second pose information; determining the relative positional relationship between the target terminal and the mapping terminal according to the first pose information and the second pose information, the mapping terminal being the terminal that constructed the offline map; and displaying the virtual object configured in the offline map on the target terminal based on the relative positional relationship between the target terminal and the mapping terminal. The present disclosure can improve the robustness of relocalization.

Description

Offline map processing and virtual object display method, device, medium and equipment
Technical Field
The present disclosure relates to the field of augmented reality technology, and in particular, to an offline map processing method, a virtual object display method, an offline map processing apparatus, a virtual object display apparatus, a storage medium, and an electronic device.
Background
Augmented Reality (AR) is a technology that merges the virtual world with the real world, and it has been widely applied in fields such as education, gaming, medical treatment, the Internet of Things, and intelligent manufacturing.
In a multi-person AR scheme, the relocalization effect plays a crucial role in the AR experience. At present, however, relocalization suffers from poor robustness.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure aims to provide an offline map processing method, a virtual object display method, an offline map processing apparatus, a virtual object display apparatus, a storage medium, and an electronic device, thereby overcoming, at least to some extent, the problem of poor relocalization robustness.
According to a first aspect of the present disclosure, there is provided an offline map processing method, including: acquiring a scene image captured by a mapping terminal and pose information corresponding to the scene image; extracting multiple types of feature information from the scene image; and constructing an offline map of the scene by using the pose information corresponding to the scene image and the multiple types of feature information of the scene image.
According to a second aspect of the present disclosure, there is provided a virtual object display method, including: acquiring a current frame image captured by a target terminal, determining pose information of the current frame image as first pose information, and extracting multiple types of feature information from the current frame image; determining pose information of the current frame image relative to an offline map coordinate system based on the multiple types of feature information, as second pose information; determining the relative positional relationship between the target terminal and the mapping terminal according to the first pose information and the second pose information, the mapping terminal being the terminal that constructed the offline map; and displaying the virtual object configured in the offline map on the target terminal based on the relative positional relationship between the target terminal and the mapping terminal.
According to a third aspect of the present disclosure, there is provided an offline map processing apparatus, including: a mapping information acquisition module configured to acquire a scene image captured by a mapping terminal and pose information corresponding to the scene image; a mapping feature extraction module configured to extract multiple types of feature information from the scene image; and an offline map construction module configured to construct an offline map of the scene using the pose information corresponding to the scene image and the multiple types of feature information of the scene image.
According to a fourth aspect of the present disclosure, there is provided a virtual object display apparatus, including: an information acquisition module configured to acquire a current frame image captured by a target terminal, determine pose information of the current frame image as first pose information, and extract multiple types of feature information from the current frame image; a pose determination module configured to determine pose information of the current frame image relative to an offline map coordinate system based on the multiple types of feature information, as second pose information; a positional relationship determination module configured to determine the relative positional relationship between the target terminal and the mapping terminal according to the first pose information and the second pose information, the mapping terminal being the terminal that constructed the offline map; and a virtual object display module configured to display the virtual object configured in the offline map on the target terminal based on the relative positional relationship between the target terminal and the mapping terminal.
According to a fifth aspect of the present disclosure, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described offline map processing method, or implements the above-described virtual object display method.
According to a sixth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the above-described offline map processing method or the above-described virtual object display method via execution of the executable instructions.
In the technical solutions provided by some embodiments of the present disclosure, an offline map is constructed by extracting multiple types of feature information. During actual virtual object display on a target terminal, multiple types of feature information are likewise extracted from the image, this rich feature information is compared against the offline map, the relative positional relationship between the target terminal and the mapping terminal is determined, and the virtual object is displayed on the target terminal using that relationship. Compared with schemes that extract only one type of feature, the present disclosure employs multiple types of feature information, which greatly increases the richness of feature types, makes the feature analysis for relocalization better match the actual scene, improves the robustness of relocalization, and thereby improves the AR experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort. In the drawings:
FIG. 1 schematically illustrates a flow chart of an offline map processing method according to an exemplary embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a virtual object display method according to an exemplary embodiment of the present disclosure;
fig. 3 schematically illustrates a structural diagram implementing an offline map processing method and a virtual object display method according to an exemplary embodiment of the present disclosure;
FIG. 4 schematically illustrates a block diagram of an offline map processing apparatus according to an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a block diagram of an offline map processing apparatus according to another exemplary embodiment of the present disclosure;
fig. 6 schematically illustrates a block diagram of a virtual object display apparatus according to an exemplary embodiment of the present disclosure;
fig. 7 schematically illustrates a block diagram of a virtual object display apparatus according to another exemplary embodiment of the present disclosure;
fig. 8 schematically illustrates a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only and do not necessarily include all steps. For example, some steps may be decomposed while others may be combined or partially combined, so the actual execution order may change according to the actual situation. The terms "first" and "second" are used for distinguishing purposes only and should not be taken as a limitation of the present disclosure.
In a multi-person AR scheme, the relocalization effect plays a crucial role in the AR experience. The main factors influencing relocalization include the illumination conditions of the surrounding real environment at mapping time, the richness of textures, the poses of the key frames used for mapping, moving objects in the scene, and the like. Under the influence of these factors, when the illumination of the environment changes, or when the pose from which the surrounding environment is scanned differs greatly from the pose used at mapping time, relocalization may fail or produce inaccurate results, so that the virtual object configured at mapping time is displayed on the terminal at a deviated position, impairing the multi-person AR experience.
In view of the poor robustness of relocalization in the prior art, the exemplary embodiments of the present disclosure propose a new offline map processing method and a new virtual object display method.
The offline map processing method and/or the virtual object display method of the exemplary embodiments of the present disclosure may be implemented by a server, that is, the steps of the offline map processing method and/or the steps of the virtual object display method may be performed by the server. In this case, an offline map processing apparatus and/or a virtual object display apparatus described below may be configured in the server. In addition, the server may be a cloud server.
In addition, the mapping terminal and the target terminal described below may include, for example, a mobile phone, a tablet computer, an intelligent wearable device, etc., which are not particularly limited in the present exemplary embodiment.
Fig. 1 schematically illustrates a flowchart of an offline map processing method of an exemplary embodiment of the present disclosure. Referring to fig. 1, the offline map processing method may include the following steps:
s12, acquiring a scene image acquired by the image construction terminal and pose information corresponding to the scene image.
In an exemplary embodiment of the present disclosure, the mapping terminal may capture a plurality of video frame images of a scene, together with the inertial information corresponding to those images. The video frame images may be visual information captured by the camera of the mapping terminal, and the inertial information may be acquired by means of its IMU (Inertial Measurement Unit), which may include a gyroscope and an accelerometer for measuring the angular velocity and the acceleration of the terminal device, respectively. Since the IMU generally operates at a higher frequency than the camera captures images, the inertial information of a given frame can be estimated by IMU pre-integration. IMU pre-integration integrates over time, yielding the relative position, velocity, and rotation between two corresponding images.
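For concreteness, a sketch of one common discrete form of this integration, using standard IMU kinematics (the notation below is ours, not the patent's): between IMU samples k and k+1, the position p, velocity v, and rotation R propagate as

```latex
p_{k+1} = p_k + v_k\,\Delta t + \tfrac{1}{2}\big(R_k(a_k - b_a) - g\big)\Delta t^2
v_{k+1} = v_k + \big(R_k(a_k - b_a) - g\big)\Delta t
R_{k+1} = R_k\,\mathrm{Exp}\big((\omega_k - b_g)\Delta t\big)
```

where a_k and ω_k are the accelerometer and gyroscope readings, b_a and b_g their biases, g gravity, and Exp the SO(3) exponential map. Accumulating these increments between two camera frames gives the relative position, velocity, and rotation mentioned above.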
Through the operation of the mapping terminal's camera and IMU, the video frame images and their pose information can be obtained, where the pose information may include 6DoF (six degrees of freedom) pose information.
In addition, the images and corresponding pose information may be extracted using currently existing AR engines (e.g., ARCore, ARKit, etc.).
The mapping terminal can send the acquired scene image and pose information corresponding to the scene image to the server.
S14, extracting multiple types of feature information from the scene image.
According to some embodiments of the present disclosure, multiple types of feature information may be extracted based on a combination of different feature extraction algorithms and feature descriptors.
First, the server may determine multiple types of feature extraction algorithms and multiple feature descriptors. The feature extraction algorithms may include, but are not limited to, the FAST feature point detection algorithm, the DoG feature point detection algorithm, the Harris feature point detection algorithm, the SIFT feature point detection algorithm, the SURF feature point detection algorithm, and the like. The feature descriptors may include, but are not limited to, the BRIEF feature point descriptor, the BRISK feature point descriptor, the FREAK feature point descriptor, and the like.
Next, different feature extraction algorithms may be combined with feature descriptors to determine multiple sets of feature extraction modes. For example, one set of feature extraction modes may be the FAST feature point detection algorithm with the BRIEF feature point descriptor, and another may be the DoG feature point detection algorithm with the FREAK feature point descriptor. Based on the above description, those skilled in the art will appreciate that, for a given image, different feature extraction modes extract different types of feature points; for example, one feature extraction mode may be suitable for strongly textured scenes, while another is suitable for weakly textured scenes.
Subsequently, multiple types of feature information may be extracted from the scene image based on the multiple sets of feature extraction modes.
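As a minimal sketch of such mode combinations, assuming Python with OpenCV plus its contrib modules (opencv-contrib-python); the two pairings follow the examples above, and all function and variable names here are illustrative rather than taken from the patent:

```python
import cv2

def build_extraction_modes():
    # Mode 1: FAST detector + BRIEF descriptor (binary; suits strong texture)
    fast = cv2.FastFeatureDetector_create(threshold=20)
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()
    # Mode 2: DoG detector (as used by SIFT) + FREAK descriptor
    dog = cv2.SIFT_create()
    freak = cv2.xfeatures2d.FREAK_create()
    return [(fast, brief), (dog, freak)]

def extract_multi_type_features(gray):
    """Return one (keypoints, descriptors) pair per feature extraction mode."""
    results = []
    for detector, descriptor in build_extraction_modes():
        kps = detector.detect(gray, None)
        kps, desc = descriptor.compute(gray, kps)
        results.append((kps, desc))
    return results
```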
In addition, considering the number of feature points extracted by each feature extraction mode and the characteristics of their distribution over the image, the exemplary aspects of the present disclosure are described for the case where the multiple sets of feature extraction modes include a first feature extraction mode and a second feature extraction mode. The server may divide the scene image into a plurality of image blocks, for example rectangular blocks of the same size; the present disclosure does not particularly limit the number of image blocks.
For each image block, feature extraction is first performed using the first feature extraction mode. If the number of feature points extracted by the first feature extraction mode is greater than a preset threshold, the second feature extraction mode is not applied to that image block. The preset threshold may be related to the size of the image block and may be set manually, which is not particularly limited in the present disclosure. If the number of feature points extracted by the first feature extraction mode is not greater than the preset threshold, the second feature extraction mode is additionally applied to extract features from that image block.
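The block-wise fallback just described can be sketched as follows; the two modes are passed in as callables returning OpenCV keypoints, the block size and threshold are illustrative placeholders, and keeping both modes' points when the fallback fires is one plausible reading of the description:

```python
import cv2

def extract_per_block(gray, mode_one, mode_two, block_size=160, threshold=50):
    h, w = gray.shape[:2]
    keypoints = []
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            block = gray[y:y + block_size, x:x + block_size]
            kps = list(mode_one(block))   # always try the first mode
            if len(kps) <= threshold:     # too few points: apply the second mode
                kps += list(mode_two(block))
            # shift keypoint coordinates from block space back to image space
            keypoints += [cv2.KeyPoint(kp.pt[0] + x, kp.pt[1] + y, kp.size)
                          for kp in kps]
    return keypoints
```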
According to other embodiments of the present disclosure, different machine learning models, such as trained convolutional neural networks, may be employed to extract multiple types of feature information from the scene image. Specifically, the same scene image may be fed into different convolutional neural networks; the feature map produced at the output or during the processing of each network corresponds to one type of feature information, so that combining the feature extraction results of all the networks yields the multiple types of feature information of the scene image.
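A hedged sketch of this variant using two pretrained torchvision backbones; the model choices, input size, and normalization are our assumptions, not values from the patent:

```python
import torch
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def cnn_feature_types(pil_image):
    """Run one image through two different CNNs; each resulting feature map
    is treated as one type of feature information."""
    x = preprocess(pil_image).unsqueeze(0)
    backbones = [
        # ResNet-18 with its pooling/classifier head removed
        torch.nn.Sequential(*list(models.resnet18(weights="DEFAULT").children())[:-2]),
        models.mobilenet_v3_small(weights="DEFAULT").features,
    ]
    feature_types = []
    with torch.no_grad():
        for net in backbones:
            net.eval()
            feature_types.append(net(x))
    return feature_types
```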
S16, constructing an offline map of the scene using the pose information corresponding to the scene image and the multiple types of feature information of the scene image.
After extracting the multiple types of feature information of each scene image, the server may determine key frame images from the video frames and then construct an offline map of the scene in combination with a loop closure detection method; this exemplary embodiment does not particularly limit the specific process of constructing the offline map.
It should be understood that the offline map may include key feature point information, key frame information, map point information, and the like. The server may store an identifier of the offline map, for example a room number or GPS (Global Positioning System) position information corresponding to the map, to uniquely identify the offline map.
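Purely as an illustration of what such a stored record might contain (every field name below is hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class OfflineMap:
    map_id: str  # unique identifier, e.g. a room number or a GPS tag
    keyframes: list = field(default_factory=list)    # per-keyframe pose + multi-type descriptors
    map_points: list = field(default_factory=list)   # triangulated 3D map points
    virtual_objects: list = field(default_factory=list)  # objects configured in the map
```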
In addition, a virtual object may be configured in the offline map, for example a ball, a chess piece, a presentation interface, or a cartoon character; the present disclosure does not particularly limit the type of the virtual object. For example, the mapping terminal may place the virtual object via hit testing (HitTest) and send the object information, pose information, identifier, and the like of the virtual object to the server. Note that the identifier of the virtual object may be the same as the identifier of the offline map.
By means of the above offline map processing method, the server stores the offline map and the information of the virtual objects placed in it. The server can therefore relocalize a target terminal based on this information and cause the target terminal to display the corresponding virtual object.
A virtual object display method according to an exemplary embodiment of the present disclosure will be described with reference to fig. 2. Referring to fig. 2, the virtual object display method may include the steps of:
s22, acquiring a current frame image acquired by a target terminal, determining pose information of the current frame image as first pose information, and extracting various types of characteristic information from the current frame image.
In an exemplary embodiment of the present disclosure, the server may acquire the current frame image captured by the target terminal together with the inertial information acquired by its IMU, and determine the pose information of the current frame image as the first pose information. It should be understood that the first pose information is determined directly from the data sent by the target terminal.
The server may extract multiple types of feature information from the current frame image. First, multiple types of feature extraction algorithms and multiple feature descriptors are determined; different feature extraction algorithms are combined with feature descriptors to determine multiple sets of feature extraction modes; then, multiple types of feature information are extracted from the current frame image based on those modes. In addition, considering the number of feature points extracted by each mode and their distribution over the image, the current frame image may be divided into a plurality of image blocks on which feature extraction is performed separately.
The specific process is similar to the process in step S14, and will not be described here again.
S24, determining pose information of the current frame image relative to the offline map coordinate system based on the multiple types of feature information, as second pose information.
According to some embodiments of the present disclosure, a process of determining the offline map is also included. Specifically, the server may first acquire the identifier, sent by the target terminal, of the scene in which the target terminal is located; the server may then determine, from among a plurality of pre-stored offline maps, the offline map whose identifier matches that of the scene.
After the offline map is determined, the key frames contained in it may be compared with the multiple types of feature information of the current frame to determine, from among those key frames, a target key frame matching that feature information. For example, the similarity between feature information may be computed, and the key frame with the highest similarity taken as the target key frame.
Next, the pose information of the target key frame in the offline map coordinate system may be determined, and based on it, the pose information of the current frame image relative to the offline map coordinate system is determined as the second pose information.
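One plausible realization of these two steps, not the patent's prescribed method: match binary descriptors against each key frame, keep the best-matching frame, and recover the current frame's pose in the map frame with PnP plus RANSAC. The keyframe record layout and the camera intrinsics K are assumptions:

```python
import cv2
import numpy as np

def second_pose(curr_kps, curr_desc, keyframes, K):
    """keyframes: list of dicts holding binary descriptors 'desc' and the
    associated 3D map points 'pts3d' (one per descriptor, map coordinates)."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    # target key frame = the stored frame with the most descriptor matches
    best = max(keyframes, key=lambda kf: len(matcher.match(curr_desc, kf["desc"])))
    matches = matcher.match(curr_desc, best["desc"])
    pts3d = np.float32([best["pts3d"][m.trainIdx] for m in matches])
    pts2d = np.float32([curr_kps[m.queryIdx].pt for m in matches])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
    return (rvec, tvec) if ok else None  # current-frame pose in the map frame
```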
S26, determining the relative positional relationship between the target terminal and the mapping terminal according to the first pose information and the second pose information; the mapping terminal is the terminal that constructed the offline map.
From the first pose information, determined directly from the data uploaded by the target terminal, and the second pose information, expressed relative to the offline map coordinate system, the relative positional relationship between the target terminal and the mapping terminal can be determined. Specifically, a RANSAC (Random Sample Consensus) algorithm may be used in computing this relative positional relationship.
Because multiple types of feature information are used in determining the second pose information, i.e., the computation takes multiple environmental factors into account, the determined relative positional relationship between the target terminal and the mapping terminal is more accurate.
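With both poses written as 4x4 homogeneous matrices, the relative transform follows by a single composition; a minimal sketch under our own frame-naming convention:

```python
import numpy as np

def relative_transform(T_target_cam, T_map_cam):
    """T_target_cam: first pose (camera in the target terminal's frame).
    T_map_cam: second pose (camera in the offline-map frame).
    Returns T_target_map, i.e. map coordinates expressed in the target frame."""
    return T_target_cam @ np.linalg.inv(T_map_cam)
```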
S28, displaying the virtual object configured in the offline map on the target terminal based on the relative positional relationship between the target terminal and the mapping terminal.
According to some embodiments of the present disclosure, the server may first determine the pose information of the virtual object in the offline map; it may then convert that pose information into the virtual object's pose in the target terminal's coordinate system, using the relative positional relationship between the target terminal and the mapping terminal determined in step S26.
The server may then send the pose information of the virtual object in the target terminal coordinate system to the target terminal so that the virtual object can be displayed on it. At this point the virtual object has been configured in the AR scene of the target terminal, and the user can bring the virtual object onto the screen by adjusting the orientation of the target terminal.
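Continuing the sketch above, converting the virtual object's map pose into the target terminal's coordinate system is then one more composition (naming again ours); the server would send the resulting pose to the target terminal for rendering:

```python
def object_pose_in_target(T_target_map, T_map_obj):
    # pose of the virtual object in the target terminal's coordinate system
    return T_target_map @ T_map_obj
```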
The system architecture according to the embodiment of the present disclosure will be described below with reference to fig. 3. Referring to fig. 3, a system implementing the above-described processing procedure may include a mapping terminal 31, a target terminal 32, and a cloud server 33.
Specifically, for the offline map construction process, the mapping terminal 31 feeds the visual and inertial information acquired by its camera 311 and inertial measurement unit 312 to its SLAM (simultaneous localization and mapping) unit 313. It should be understood that the SLAM unit 313 in the mapping terminal 31 does not itself perform map construction; rather, it sends the acquired image pose information to the cloud server 33, which constructs the offline map. In addition, the application 314 of the mapping terminal 31 may obtain the image pose from the SLAM unit 313 and configure the virtual object using, for example, hit testing.
The virtual object processing unit 331 in the cloud server 33 may store the information of virtual objects and send the content corresponding to a virtual object to the target terminal 32. The mapping unit 332 may receive the image information sent by the mapping terminal 31 and construct the offline map based on the image poses it contains. The relocalization unit 333 may relocalize the target terminal 32 against the offline map constructed by the mapping unit 332.
The target terminal 32 acquires the current frame information using its camera 321 and inertial measurement unit 322 and passes it to its SLAM unit 323, which may send the current frame information to the relocalization unit 333 of the cloud server 33; the relocalization unit 333 then relocalizes the target terminal 32. The virtual object processing unit 331 of the cloud server 33 sends the virtual object configured in the offline map to the target terminal 32, where the application 324 processes and displays it.
It should also be noted that the application 324 of the target terminal 32 may likewise obtain the image pose from the SLAM unit 323, configure another virtual object using, for example, hit testing, and feed the information of that virtual object back to the virtual object processing unit 331 of the cloud server.
Furthermore, it should be noted that the mapping terminal 31 is not limited to constructing offline maps; in other scenarios it may itself be a terminal to be relocalized. The mapping terminal 31 and the target terminal 32 may have identical hardware configurations.
According to the virtual object display method described above, on the one hand, by exploiting the characteristics of different feature extraction algorithms and extracting image features with multiple sets of feature extraction modes, the poor relocalization caused by weak texture, repetitive texture, and illumination changes at different viewing angles and distances can be alleviated; on the other hand, the mapping and analysis processes of the exemplary embodiments of the present disclosure run on the server side, and the terminal device only needs to send images and the corresponding pose information, which overcomes the limited computing capability of terminal devices.
It should be noted that although the steps of the methods in the present disclosure are depicted in the accompanying drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
Further, in this example embodiment, an offline map processing apparatus is also provided.
Fig. 4 schematically illustrates a block diagram of an offline map processing apparatus according to an exemplary embodiment of the present disclosure. Referring to fig. 4, the offline map processing apparatus 4 according to the exemplary embodiment of the present disclosure may include a mapping information acquisition module 41, a mapping feature extraction module 43, and an offline map construction module 45.
Specifically, the mapping information acquisition module 41 may be configured to acquire the scene image captured by the mapping terminal and the pose information corresponding to the scene image; the mapping feature extraction module 43 may be configured to extract multiple types of feature information from the scene image; and the offline map construction module 45 may be configured to construct an offline map of the scene using the pose information corresponding to the scene image and the multiple types of feature information of the scene image.
According to an exemplary embodiment of the present disclosure, referring to fig. 5, the offline map processing apparatus 5 may further include a virtual object configuration module 51, as compared to the offline map processing apparatus 4.
Specifically, the virtual object configuration module 51 may be configured to configure a virtual object in the offline map; wherein the identity of the virtual object is the same as the identity of the offline map.
According to an example embodiment of the present disclosure, the mapping feature extraction module 43 may be configured to: determine multiple types of feature extraction algorithms and multiple feature descriptors; combine different feature extraction algorithms with feature descriptors to determine multiple sets of feature extraction modes; and extract multiple types of feature information from the scene image based on the multiple sets of feature extraction modes.
According to an exemplary embodiment of the present disclosure, the multiple sets of feature extraction modes include a first feature extraction mode and a second feature extraction mode, in which case the mapping feature extraction module 43 may be further configured to: divide the scene image into a plurality of image blocks; for each image block, if the number of feature points extracted by the first feature extraction mode is greater than a preset threshold, skip feature extraction with the second feature extraction mode on that image block; and, for each image block, if the number of feature points extracted by the first feature extraction mode is not greater than the preset threshold, perform feature extraction on the image block with the second feature extraction mode.
Further, in this example embodiment, a virtual object display apparatus is also provided.
Fig. 6 schematically illustrates a block diagram of a virtual object display apparatus of an exemplary embodiment of the present disclosure. Referring to fig. 6, the virtual object display apparatus 6 according to an exemplary embodiment of the present disclosure may include an information acquisition module 61, a pose determination module 63, a positional relationship determination module 65, and a virtual object display module 67.
Specifically, the information acquisition module 61 may be configured to acquire the current frame image captured by the target terminal, determine pose information of the current frame image as first pose information, and extract multiple types of feature information from the current frame image; the pose determination module 63 may be configured to determine pose information of the current frame image relative to the offline map coordinate system based on the multiple types of feature information, as second pose information; the positional relationship determination module 65 may be configured to determine the relative positional relationship between the target terminal and the mapping terminal according to the first pose information and the second pose information, the mapping terminal being the terminal that constructed the offline map; and the virtual object display module 67 may be configured to display the virtual object configured in the offline map on the target terminal based on the relative positional relationship between the target terminal and the mapping terminal.
According to an exemplary embodiment of the present disclosure, referring to fig. 7, the virtual object display apparatus 7 may further include an offline map determination module 71 with respect to the virtual object display apparatus 6.
In particular, the offline map determination module 71 may be configured to: acquire the identifier, sent by the target terminal, of the scene in which the target terminal is located; and determine the offline map whose identifier matches that of the scene; wherein the process of determining the second pose information is performed after the offline map has been determined.
According to an example embodiment of the present disclosure, the pose determination module 63 may be configured to: determine, from among a plurality of key frames used in constructing the offline map, a target key frame matching the multiple types of feature information; and determine pose information of the current frame image relative to the offline map coordinate system, as the second pose information, based on the pose information of the target key frame in the offline map coordinate system.
According to an example embodiment of the present disclosure, the virtual object display module 67 may be configured to: determine pose information of the virtual object in the offline map; determine pose information of the virtual object in the target terminal coordinate system from its pose information in the offline map, using the relative positional relationship between the target terminal and the mapping terminal; and send the pose information of the virtual object in the target terminal coordinate system to the target terminal so that the virtual object is displayed on the target terminal.
According to an exemplary embodiment of the present disclosure, the information acquisition module 61 may be configured to perform: determining multiple types of feature extraction algorithms and multiple feature descriptors; combining different feature extraction algorithms with feature descriptors to determine a plurality of sets of feature extraction patterns; and extracting various types of feature information from the current frame image based on the plurality of sets of feature extraction modes.
According to an exemplary embodiment of the present disclosure, the information acquisition module 61 may be further configured to perform: dividing a current frame image into a plurality of image blocks; for each image block, if the number of the feature points extracted by adopting the first feature extraction mode is larger than a preset threshold value, the second feature extraction mode is not adopted to extract the features of the image block; for each image block, if the number of feature points extracted by adopting the first feature extraction mode is not greater than a preset threshold value, carrying out feature extraction on the image block by adopting the second feature extraction mode.
Since each functional module of the above apparatus embodiments corresponds to a step of the above method embodiments, a detailed description thereof is omitted here.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
The program product for implementing the above-described method according to an embodiment of the present invention may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical disk, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the invention may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein generally as a "circuit," "module," or "system."
An electronic device 800 according to such an embodiment of the invention is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 8, the electronic device 800 is embodied in the form of a general purpose computing device. Components of electronic device 800 may include, but are not limited to: the at least one processing unit 810, the at least one storage unit 820, a bus 830 connecting the different system components (including the storage unit 820 and the processing unit 810), and a display unit 840.
Wherein the storage unit stores program code that is executable by the processing unit 810 such that the processing unit 810 performs steps according to various exemplary embodiments of the present invention described in the above section of the "exemplary method" of the present specification. For example, the processing unit 810 may perform steps S12 to S16 as shown in fig. 1, and/or steps S22 to S28 as shown in fig. 2.
The storage unit 820 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 8201 and/or cache memory 8202, and may further include Read Only Memory (ROM) 8203.
Storage unit 820 may also include a program/utility 8204 having a set (at least one) of program modules 8205, such program modules 8205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 830 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 900 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 800, and/or any device (e.g., router, modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 850. Also, electronic device 800 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 860. As shown, network adapter 860 communicates with other modules of electronic device 800 over bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 800, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
Furthermore, the above-described drawings are only schematic illustrations of processes included in the method according to the exemplary embodiment of the present invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. An offline map processing method, comprising:
acquiring a scene image acquired by a mapping terminal and pose information corresponding to the scene image;
determining multiple types of feature extraction algorithms and multiple feature descriptors, combining different feature extraction algorithms with the feature descriptors to determine multiple sets of feature extraction modes, and extracting multiple types of feature information from the scene image based on the multiple sets of feature extraction modes;
and constructing an offline map of the scene by using the pose information corresponding to the scene image and the multiple types of feature information of the scene image.
2. The offline map processing method according to claim 1, characterized in that the offline map processing method further comprises:
configuring a virtual object in the offline map;
wherein the identity of the virtual object is the same as the identity of the offline map.
3. The offline map processing method according to claim 1, characterized in that the plurality of sets of feature extraction modes include a first feature extraction mode and a second feature extraction mode; wherein extracting a plurality of types of feature information from the scene image based on the plurality of sets of feature extraction patterns includes:
dividing the scene image into a plurality of image blocks;
for each image block, if the number of the feature points extracted by adopting the first feature extraction mode is larger than a preset threshold value, the feature extraction is not carried out on the image block by adopting the second feature extraction mode;
and for each image block, if the number of the feature points extracted by adopting the first feature extraction mode is not larger than the preset threshold value, carrying out feature extraction on the image block by adopting the second feature extraction mode.
4. A virtual object display method, comprising:
acquiring a current frame image acquired by a target terminal, determining pose information of the current frame image as first pose information, determining multiple types of feature extraction algorithms and multiple feature descriptors, combining different feature extraction algorithms with the feature descriptors to determine multiple groups of feature extraction modes, and extracting multiple types of feature information from the current frame image based on the multiple groups of feature extraction modes;
determining pose information of the current frame image relative to an offline map coordinate system based on the multiple types of feature information, the pose information being used as second pose information;
determining the relative positional relationship between the target terminal and the mapping terminal according to the first pose information and the second pose information; wherein the mapping terminal is the terminal that constructs the offline map;
and displaying the virtual object configured in the offline map on the target terminal based on the relative positional relationship between the target terminal and the mapping terminal.
5. The virtual object display method according to claim 4, further comprising:
acquiring an identifier of the scene where the target terminal is located, wherein the identifier is sent by the target terminal;
determining an offline map with the same identification as the identification of the scene where the target terminal is located;
and after the offline map is determined, executing the process of determining the second pose information.
6. The virtual object display method according to claim 4 or 5, wherein determining pose information of the current frame image with respect to an offline map coordinate system based on the plurality of types of feature information, as the second pose information, comprises:
determining, from among a plurality of key frames used in constructing the offline map, a target key frame matching the multiple types of feature information;
and determining pose information of the current frame image relative to the offline map coordinate system based on the pose information of the target key frame in the offline map coordinate system, and taking the pose information as the second pose information.
7. The virtual object display method according to claim 4, wherein displaying the virtual object configured in the offline map on the target terminal based on the relative positional relationship of the target terminal and the mapping terminal includes:
determining pose information of the virtual object in the offline map;
determining pose information of the virtual object in the target terminal coordinate system from the pose information of the virtual object in the offline map, by using the relative positional relationship between the target terminal and the mapping terminal;
and sending pose information of the virtual object in the target terminal coordinate system to the target terminal so as to enable the virtual object to be displayed on the target terminal.
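Continuing the matrix sketch above (same assumed conventions), the coordinate change in claim 7 is a single composition; the result is what would be sent to the target terminal for rendering:

    def object_pose_in_terminal(object_pose_in_map, map_to_terminal):
        # Both arguments are 4x4 numpy arrays, as in the earlier sketch.
        # object_pose_in_map: virtual object -> offline map coordinates
        # map_to_terminal: output of relative_relation() above
        return map_to_terminal @ object_pose_in_map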
8. The virtual object display method according to claim 4, wherein the plurality of sets of feature extraction modes comprise a first feature extraction mode and a second feature extraction mode; wherein extracting the plurality of types of feature information from the current frame image based on the plurality of sets of feature extraction modes comprises:
dividing the current frame image into a plurality of image blocks;
for each image block, if the number of feature points extracted using the first feature extraction mode is greater than a preset threshold, skipping feature extraction on the image block using the second feature extraction mode;
and for each image block, if the number of feature points extracted using the first feature extraction mode is not greater than the preset threshold, performing feature extraction on the image block using the second feature extraction mode.
9. An offline map processing apparatus, comprising:
a map building information acquisition module, configured to acquire a scene image acquired by a map building terminal and pose information corresponding to the scene image;
a map building feature extraction module, configured to determine a plurality of types of feature extraction algorithms and a plurality of feature descriptors, combine different feature extraction algorithms with the feature descriptors to determine a plurality of sets of feature extraction modes, and extract a plurality of types of feature information from the scene image based on the plurality of sets of feature extraction modes;
and an offline map construction module, configured to construct an offline map of the scene using the pose information corresponding to the scene image and the plurality of types of feature information of the scene image.
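For illustration only, the offline map such an apparatus might persist could be as simple as a scene identifier plus key frames that each carry a pose and the several types of extracted feature information; every field name below is an assumption:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class KeyFrame:
        pose: List[List[float]]  # 4x4 pose in offline map coordinates
        # e.g. {"mode_1": descriptors_1, "mode_2": descriptors_2}
        features_by_mode: Dict[str, object] = field(default_factory=dict)

    @dataclass
    class OfflineMap:
        scene_id: str  # matched against the identifier the terminal sends
        keyframes: List[KeyFrame] = field(default_factory=list)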
10. A virtual object display device, comprising:
an information acquisition module, configured to acquire a current frame image acquired by a target terminal, determine pose information of the current frame image as first pose information, determine a plurality of types of feature extraction algorithms and a plurality of feature descriptors, combine different feature extraction algorithms with the feature descriptors to determine a plurality of sets of feature extraction modes, and extract a plurality of types of feature information from the current frame image based on the plurality of sets of feature extraction modes;
a pose determining module, configured to determine pose information of the current frame image relative to an offline map coordinate system based on the plurality of types of feature information, as second pose information;
a positional relationship determining module, configured to determine a relative positional relationship between the target terminal and a map building terminal according to the first pose information and the second pose information, the map building terminal being the terminal that built the offline map;
and a virtual object display module, configured to display a virtual object configured in the offline map on the target terminal based on the relative positional relationship between the target terminal and the map building terminal.
11. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the offline map processing method of any one of claims 1 to 3, or the virtual object display method of any one of claims 4 to 8.
12. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the offline map processing method of any one of claims 1 to 3, or the virtual object display method of any one of claims 4 to 8, via execution of the executable instructions.
CN201910693978.3A 2019-07-30 2019-07-30 Offline map processing and virtual object display method, device, medium and equipment Active CN110457414B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910693978.3A CN110457414B (en) 2019-07-30 2019-07-30 Offline map processing and virtual object display method, device, medium and equipment

Publications (2)

Publication Number Publication Date
CN110457414A CN110457414A (en) 2019-11-15
CN110457414B (en) 2023-06-09

Family

ID=68483959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910693978.3A Active CN110457414B (en) 2019-07-30 2019-07-30 Offline map processing and virtual object display method, device, medium and equipment

Country Status (1)

Country Link
CN (1) CN110457414B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969682B (en) * 2019-11-27 2021-03-02 深圳追一科技有限公司 Virtual image switching method and device, electronic equipment and storage medium
CN111078003B (en) * 2019-11-27 2021-10-22 Oppo广东移动通信有限公司 Data processing method and device, electronic equipment and storage medium
CN110910432A (en) * 2019-12-09 2020-03-24 珠海大横琴科技发展有限公司 Remote sensing image matching method and device, electronic equipment and readable storage medium
CN111639662A (en) * 2019-12-23 2020-09-08 珠海大横琴科技发展有限公司 Remote sensing image bidirectional matching method and device, electronic equipment and storage medium
CN111640142A (en) * 2019-12-25 2020-09-08 珠海大横琴科技发展有限公司 Remote sensing image multi-feature matching method and device and electronic equipment
CN111177167B (en) * 2019-12-25 2024-01-19 Oppo广东移动通信有限公司 Augmented reality map updating method, device, system, storage and equipment
CN111583312A (en) * 2019-12-26 2020-08-25 珠海大横琴科技发展有限公司 Method and device for accurately matching remote sensing images, electronic equipment and storage medium
CN111612828A (en) * 2019-12-27 2020-09-01 珠海大横琴科技发展有限公司 Remote sensing image correction matching method and device, electronic equipment and storage medium
CN111311756B (en) * 2020-02-11 2023-04-11 Oppo广东移动通信有限公司 Augmented reality AR display method and related device
CN115023743A (en) * 2020-02-13 2022-09-06 Oppo广东移动通信有限公司 Surface detection and tracking in augmented reality sessions based on sparse representations
CN111325842B (en) * 2020-03-04 2023-07-28 Oppo广东移动通信有限公司 Map construction method, repositioning method and device, storage medium and electronic equipment
CN111698646B (en) * 2020-06-08 2022-10-18 浙江商汤科技开发有限公司 Positioning method and device
CN112598732A (en) * 2020-12-10 2021-04-02 Oppo广东移动通信有限公司 Target equipment positioning method, map construction method and device, medium and equipment
WO2023070441A1 (en) * 2021-10-28 2023-05-04 深圳市大疆创新科技有限公司 Movable platform positioning method and apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107025662B (en) * 2016-01-29 2020-06-09 成都理想境界科技有限公司 Method, server, terminal and system for realizing augmented reality
CN107741234B (en) * 2017-10-11 2021-10-19 深圳勇艺达机器人有限公司 Off-line map construction and positioning method based on vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant