CN111623782A - Navigation route display method and three-dimensional scene model generation method and device - Google Patents


Info

Publication number
CN111623782A
Authority
CN
China
Prior art keywords
floor
model
navigation route
destination
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010525784.5A
Other languages
Chinese (zh)
Inventor
揭志伟
武明飞
符修源
陈凯彬
李炳泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Shangtang Technology Development Co Ltd
Zhejiang Sensetime Technology Development Co Ltd
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd filed Critical Zhejiang Shangtang Technology Development Co Ltd
Priority to CN202010525784.5A priority Critical patent/CN111623782A/en
Publication of CN111623782A publication Critical patent/CN111623782A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • G01C21/206Instruments for performing navigational calculations specially adapted for indoor navigation

Abstract

The present disclosure provides a navigation route display method, a three-dimensional scene model generation method, and corresponding devices. The display method includes: in response to a navigation request for a target real scene, acquiring a pre-established multi-floor three-dimensional scene model corresponding to the target real scene; displaying the multi-floor three-dimensional scene model and acquiring destination information selected by a user based on it; and generating a navigation route based on the destination information and displaying the navigation route in the multi-floor three-dimensional scene model.

Description

Navigation route display method and three-dimensional scene model generation method and device
Technical Field
The present disclosure relates to the field of navigation technologies, and in particular, to a display method of a navigation route, a generation method and apparatus of a three-dimensional scene model, an electronic device, and a storage medium.
Background
With economic development, a large number of exhibition halls, such as science and technology, art, and history exhibition halls, have been built to meet users' cultural needs. For a large exhibition hall spanning multiple floors, a visitor may be unable to quickly find a destination.
Therefore, it is desirable to provide an effective navigation method for large indoor locations.
Disclosure of Invention
The embodiment of the disclosure at least provides a navigation path display scheme.
In a first aspect, an embodiment of the present disclosure provides a display method of a navigation route, where the display method includes:
responding to a navigation request aiming at a target real scene, and acquiring a pre-established three-dimensional scene model corresponding to the target real scene; the three-dimensional scene model comprises floor models corresponding to a plurality of floors respectively;
acquiring a destination selected by the user for the target real scene, and generating a navigation route based on the user's current location and the destination;
and displaying the navigation route based on at least one floor model associated with the navigation route.
In the embodiment of the disclosure, the navigation route indicating how the user reaches the destination from the current location can be vividly displayed through at least one floor model in the three-dimensional scene model, thereby increasing the intuitiveness of the navigation guidance.
In one possible embodiment, the three-dimensional scene model is generated according to the following steps:
acquiring a plurality of real scene images corresponding to each floor in the target real scene;
building a floor model corresponding to each floor based on a plurality of real scene images corresponding to each floor;
and generating the three-dimensional scene model based on the constructed floor model corresponding to each floor and the real height information of each floor in the multiple floors corresponding to the target real scene.
In a possible implementation manner, the building a floor model corresponding to each floor based on a plurality of real scene images corresponding to the floor includes:
extracting a plurality of feature points from each of a plurality of real scene images corresponding to each floor;
generating a floor model corresponding to the floor based on the extracted multiple feature points corresponding to the floor and a prestored three-dimensional sample graph matched with the floor; the three-dimensional sample graph is a pre-stored three-dimensional graph representing the floor topography.
In a possible implementation manner, the generating the three-dimensional scene model based on the constructed floor model corresponding to each floor and the real height information of each floor in the multiple floors corresponding to the target real scene includes:
determining real height difference information between every two adjacent floors in the plurality of floors based on the real height information of each floor;
and generating the three-dimensional scene model based on the real height difference information between every two adjacent floors in the plurality of floors and the floor model corresponding to each floor.
In the embodiment of the disclosure, the height between the floor models can be adjusted by determining the real height difference information between every two adjacent floors in the multiple floors, so that a three-dimensional scene model with a high matching degree with a target real scene is generated.
In a possible implementation, after generating the three-dimensional scene model, the presentation method further includes:
determining the position coordinates of the feature points for representing each preset navigation destination in the three-dimensional scene model;
and associating and storing the three-dimensional scene model with the feature point position coordinates of each preset navigation destination in the three-dimensional scene model.
In the embodiment of the disclosure, the position coordinates of the feature points representing each preset navigation destination in the three-dimensional scene model can be predetermined and stored, so that the navigation route can be determined based on the position coordinates in the later period.
In one possible embodiment, the generating a navigation route based on the current location of the user and the destination includes:
based on the current location, searching a departure place position coordinate corresponding to the current location in the three-dimensional scene model, and based on the destination, searching a destination position coordinate corresponding to the destination in the three-dimensional scene model;
and determining the navigation route based on the departure place position coordinate, the destination position coordinate and the obstacle position area contained in the floor model corresponding to the departure place position coordinate and the destination position coordinate.
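The route determination described above can be illustrated with a minimal grid search: obstacle position areas are marked as blocked cells, and a shortest walkable path is found between the departure-place and destination coordinates. The grid encoding and the A*-style search below are illustrative assumptions; the patent does not specify a particular path-planning algorithm.

```python
import heapq

def find_route(grid, start, goal):
    """A*-style search on a 2D occupancy grid (1 = obstacle cell, 0 = free).

    Returns a list of (row, col) cells from start to goal, or None if the
    destination is unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic toward the destination
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]
    seen = set()
    while open_heap:
        _, cost, cell, path = heapq.heappop(open_heap)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            # Expand only in-bounds, obstacle-free neighbors.
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                if nxt not in seen:
                    heapq.heappush(
                        open_heap,
                        (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]),
                    )
    return None
```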
In one possible embodiment, the generating a navigation route based on the current location of the user and the destination includes:
and generating a cross-floor navigation route based on the current location and the destination in the case that the current location and the destination are on different floors.
The displaying the navigation route based on the at least one floor model associated with the navigation route comprises:
displaying the cross-floor navigation route on a plurality of floor models associated with the navigation route;
In the embodiment of the disclosure, when the current location and the destination are on different floors, a cross-floor navigation route can be provided, making the navigation more intuitive.
In a second aspect, an embodiment of the present disclosure provides a method for generating a three-dimensional scene model, where the method includes:
acquiring a plurality of real scene images corresponding to each floor in a target real scene;
building a floor model corresponding to each floor based on a plurality of real scene images corresponding to each floor;
generating a three-dimensional scene model representing the target reality scene based on the constructed floor model corresponding to each floor and the real height information of each floor in the multiple floors corresponding to the target reality scene; the three-dimensional scene model is used for displaying the navigation route according to the first aspect.
In a third aspect, an embodiment of the present disclosure provides a display device for a navigation route, where the display device includes:
the model acquisition module is used for responding to a navigation request aiming at a target real scene and acquiring a pre-established three-dimensional scene model corresponding to the target real scene; the three-dimensional scene model comprises floor models corresponding to a plurality of floors respectively;
the route generation module is used for acquiring a destination selected by a user aiming at the target reality scene and generating a navigation route based on the current location of the user and the destination;
and the route display module is used for displaying the navigation route based on at least one floor model associated with the navigation route.
In a possible implementation, the presentation apparatus further comprises a model generation module configured to generate the three-dimensional scene model according to the following steps:
acquiring a plurality of real scene images corresponding to each floor in the target real scene;
building a floor model corresponding to each floor based on a plurality of real scene images corresponding to each floor;
and generating the three-dimensional scene model based on the constructed floor model corresponding to each floor and the real height information of each floor in the multiple floors corresponding to the target real scene.
In a possible implementation manner, the model generating module is configured to, when constructing the floor model corresponding to each floor based on a plurality of real scene images corresponding to the floor, include:
extracting a plurality of feature points from each of a plurality of real scene images corresponding to each floor;
generating a floor model corresponding to the floor based on the extracted multiple feature points corresponding to the floor and a prestored three-dimensional sample graph matched with the floor; the three-dimensional sample graph is a pre-stored three-dimensional graph representing the floor topography.
In a possible implementation manner, the model generating module is configured to, when generating the three-dimensional scene model based on the constructed floor model corresponding to each floor and the real height information of each floor in the multiple floors corresponding to the target real scene, include:
determining real height difference information between every two adjacent floors in the plurality of floors based on the real height information of each floor;
and generating the three-dimensional scene model based on the real height difference information between every two adjacent floors in the plurality of floors and the floor model corresponding to each floor.
In a possible implementation, after generating the three-dimensional scene model, the model generation module is further configured to:
determining the position coordinates of the feature points for representing each preset navigation destination in the three-dimensional scene model;
and associating and storing the three-dimensional scene model with the feature point position coordinates of each preset navigation destination in the three-dimensional scene model.
In one possible embodiment, the route generation module, when configured to generate a navigation route based on the current location of the user and the destination, comprises:
based on the current location, searching a departure place position coordinate corresponding to the current location in the three-dimensional scene model, and based on the destination, searching a destination position coordinate corresponding to the destination in the three-dimensional scene model;
and determining the navigation route based on the departure place position coordinate, the destination position coordinate and the obstacle position area contained in the floor model corresponding to the departure place position coordinate and the destination position coordinate.
In one possible embodiment, the route generation module, when configured to generate a navigation route based on the current location of the user and the destination, comprises:
under the condition that the current location and the destination are located on different floors, generating a cross-floor navigation route based on the current location and the destination;
the route presentation module, when configured to present the navigation route based on the at least one floor model associated with the navigation route, comprises:
displaying the cross-floor navigation route on a plurality of floor models associated with the navigation route.
In a fourth aspect, an embodiment of the present disclosure provides an apparatus for generating a three-dimensional scene model, where the apparatus includes:
the image acquisition module is used for acquiring a plurality of real scene images corresponding to each floor in a target real scene;
the first generation module is used for constructing a floor model corresponding to each floor based on a plurality of real scene images corresponding to each floor;
the second generation module is used for generating a three-dimensional scene model representing the target reality scene based on the constructed floor model corresponding to each floor and the real height information of each floor in the multiple floors corresponding to the target reality scene; the three-dimensional scene model is used for displaying the navigation route according to the first aspect.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the presentation method as described in the first aspect or the generation method as described in the second aspect.
In a sixth aspect, the disclosed embodiments provide a computer-readable storage medium having stored thereon a computer program, which, when executed by a processor, performs the steps of the presentation method according to the first aspect or performs the steps of the generation method according to the second aspect.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the embodiments are briefly described below. The drawings, incorporated in and forming part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
Fig. 1 is a flowchart illustrating a method for displaying a navigation route according to an embodiment of the present disclosure;
FIG. 2a is an interface diagram for displaying navigation prompts provided by an embodiment of the present disclosure;
FIG. 2b illustrates a display interface diagram of a floor model provided by an embodiment of the present disclosure;
FIG. 2c illustrates a display interface diagram of another floor model provided by an embodiment of the present disclosure;
FIG. 2d is a diagram illustrating a display interface of a navigation route provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating a method for generating a three-dimensional scene model by a client according to an embodiment of the present disclosure;
fig. 4 is a flowchart illustrating a method for generating a floor model according to an embodiment of the disclosure;
FIG. 5 is a flowchart illustrating a specific generation method of a three-dimensional scene model according to an embodiment of the present disclosure;
FIG. 6 is a flowchart illustrating a method for storing a three-dimensional scene model and position coordinates according to an embodiment of the present disclosure;
FIG. 7 illustrates a flowchart of a method of determining a navigation route provided by an embodiment of the present disclosure;
FIG. 8 is a flowchart illustrating a method for generating a three-dimensional scene model by a server according to an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of a display device for a navigation route provided by an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram illustrating an apparatus for generating a three-dimensional scene model according to an embodiment of the present disclosure;
fig. 11 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure;
fig. 12 shows a schematic structural diagram of another electronic device provided in the embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, C" may mean including any one or more elements selected from the group consisting of A, B, and C.
When visiting some large exhibition halls, a user may be unable to find a destination quickly. To address this, a plane map can be displayed in the exhibition hall to show the user's current location and the positions of all target sights in the hall. However, the user may still not know how to reach the destination, and existing navigation devices cannot achieve accurate indoor navigation. Providing a way to help the user determine how to reach the destination is the subject of the present disclosure.
Based on the above research, the present disclosure provides a method for displaying a navigation route. Upon receiving a navigation request triggered by a user, a pre-built multi-floor three-dimensional scene model corresponding to the target real scene is obtained. When the user selects a destination, a navigation route is generated from the user's current location to the selected destination, and the route is then presented on at least one floor model associated with it. For example, if the current location and the destination are on different floors (say the current location is on the first floor of the target real scene and the destination is on the second floor), the navigation route is associated with both floors, so a route indicating how to reach the second-floor destination from the first-floor location can be presented. In this way, the navigation route can be visually presented to the user.
To facilitate understanding of the present embodiment, the method for displaying a navigation route disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of this method is generally a computer device with certain computing capability, for example a terminal device, a server, or another processing device, where the terminal device may be user equipment (UE), a mobile device, a handheld device, a computing device, or a wearable device. In some possible implementations, the method may be implemented by a processor calling computer-readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a method for displaying a navigation route according to an embodiment of the present disclosure is shown, where the method for displaying a navigation route includes steps S101 to S103.
S101, responding to a navigation request aiming at a target real scene, and acquiring a pre-established three-dimensional scene model corresponding to the target real scene; the three-dimensional scene model comprises floor models corresponding to a plurality of floors.
The terminal device may be, for example and without limitation, a mobile phone, a tablet, a computer device, or an intelligent display screen provided in a target reality scene.
Illustratively, the target real scene may be a large indoor scene such as a large exhibition hall. The user may trigger a navigation request for the target real scene at the entrance of the exhibition hall; for example, a request to view the venue route map may be triggered on an intelligent display screen placed at the entrance, thereby generating the navigation request for the exhibition hall.
For some large exhibition halls, such as an intelligent exhibition hall supporting AR experiences, a terminal device may be provided at the hall entrance. As shown in Fig. 2a, the terminal device may display a prompt inviting the user to view the exhibition hall route. When the user triggers a request to view the route, the terminal device detects a navigation request for the target real scene and, upon detection, acquires the pre-established three-dimensional scene model corresponding to that scene.
Illustratively, the three-dimensional scene model may be pre-built and stored locally on the terminal device, or it may be built by a background server; in the latter case, when the terminal device detects a navigation request for the target real scene, it requests the corresponding three-dimensional scene model from the background server.
For example, when the target real scene includes a plurality of floors, the three-dimensional scene model corresponding to the target real scene also includes floor models corresponding to the plurality of floors, and the floor model corresponding to each floor is also a three-dimensional model.
S102, obtaining a destination selected by the user aiming at the target real scene, and generating a navigation route based on the current location and the destination of the user.
Exemplarily, when the terminal device obtains the pre-established three-dimensional scene model corresponding to the target real scene, it may by default display the initial floor model of the floor where the user is currently located. If the user is on the first floor of the target real scene, the default initial floor model is the floor model corresponding to the first floor. Fig. 2b shows the default initial floor model and the corresponding candidate sights when the user is on the first floor of the exhibition hall.
Exemplarily, when the floor model of the floor corresponding to the current location is displayed, the candidate sights on that floor can be displayed at the same time; after the user selects any candidate sight, its position can be used as the destination selected by the user for the target real scene.
Exemplarily, when the floor corresponding to the current location does not include the sight the user wants to visit, the user can select a target floor to display. The terminal device can then display the floor model corresponding to the target floor, the initial floor model corresponding to the user's current location, and any floor models between them. For example, as shown in Fig. 2c, when the user selects the second floor of the exhibition hall, the target floor model corresponding to the second floor and the candidate sights it contains may be shown.
Further, when the user selects the destination, a navigation route from the current location to the destination may be generated. When the current location and the destination are on the same floor, the navigation route is displayed on that floor. When the current location A and the destination B are on different floors, a cross-floor navigation route is generated instead; for example, when the current location is on the first floor and the destination is on the second floor, the cross-floor navigation route shown in Fig. 2d may be displayed.
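The cross-floor case can be sketched as a search over a walkway graph in which stairs or elevators link the floors. The node naming scheme and graph encoding below are assumptions for illustration, not a structure defined by the disclosure.

```python
from collections import deque

def cross_floor_route(edges, start, goal):
    """Breadth-first search over a multi-floor walkway graph.

    edges: iterable of undirected pairs ((floor, place), (floor, place));
           stairs or elevators appear as edges between floors.
    Returns the node sequence from start to goal, or None.
    """
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)

    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None
```

When the returned path crosses floor boundaries, each floor's segment can be drawn on its own floor model, matching the cross-floor display of Fig. 2d.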
S103, displaying the navigation route based on at least one floor model associated with the navigation route.
Illustratively, the navigation route is associated with one floor model when the current location and the destination are on the same floor, and is associated with a plurality of floor models when the current location and the destination are on different floors.
In the embodiment of the disclosure, the navigation route for indicating how the user arrives at the destination from the current location can be vividly displayed through at least one floor model in the three-dimensional scene model, and the intuitiveness of the navigation guidance is increased.
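The overall flow of steps S101 to S103 can be sketched as follows. Here `pick_destination`, `plan_route`, and `render` stand in for the user-selection, route-generation, and display steps; these names are hypothetical placeholders, not APIs from the disclosure.

```python
def show_navigation_route(scene_models, current_location, pick_destination,
                          plan_route, render):
    """Sketch of S101-S103: fetch the pre-built multi-floor model, let the
    user pick a destination, plan a route, and render the route on the
    floor model(s) it crosses. Route points are (floor, position) pairs."""
    model = scene_models["target_scene"]            # S101: pre-built model
    destination = pick_destination(model)           # S102: user selection
    route = plan_route(current_location, destination)
    floors = sorted({floor for floor, _ in route})  # floors the route touches
    render(model, route, floors)                    # S103: display
    return route, floors
```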
The above-mentioned S101 to S103 will be specifically described with reference to specific embodiments.
In one embodiment, as shown in fig. 3, generating a three-dimensional scene model according to the following steps specifically comprises S301 to S303:
s301, acquiring a plurality of real scene images corresponding to each floor in a target real scene;
s302, building a floor model corresponding to each floor based on a plurality of real scene images corresponding to each floor;
and S303, generating a three-dimensional scene model based on the constructed floor model corresponding to each floor and the real height information of each floor in the multiple floors corresponding to the target real scene.
For S301, for example, shooting may be performed on each floor in the target real scene in advance through the image capturing device, so as to obtain multiple real scene images corresponding to each floor.
As shown in fig. 4, constructing a floor model corresponding to each floor based on a plurality of real scene images corresponding to each floor in S302 may include the following S3021 to S3022:
s3021, extracting a plurality of feature points from each of a plurality of acquired real scene images corresponding to each floor;
s3022, generating a floor model corresponding to the floor based on the extracted multiple feature points corresponding to the floor and a prestored three-dimensional sample map matched with the floor; the three-dimensional sample graph is a pre-stored three-dimensional graph representing the floor topography.
Specifically, the feature points extracted from each real scene image may be points that capture key information of the real scene; for example, for a real scene image containing a building, the feature points may represent the building's outline information.
Illustratively, the pre-stored three-dimensional sample graph matched with the floor may be a preset, dimensioned three-dimensional graph characterizing the floor's topography, such as a Computer Aided Design (CAD) three-dimensional drawing of the floor.
For each floor, when sufficient feature points have been extracted, the feature point cloud they form constitutes a three-dimensional model representing the floor. Because the feature points in this point cloud are unitless, the resulting model is also unitless; the point cloud is therefore aligned with the dimensioned three-dimensional graph representing the floor's appearance features, yielding the floor model corresponding to that floor.
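The alignment step can be illustrated by scaling the unitless point cloud using one known real-world distance taken from the dimensioned drawing. A full alignment would also solve for rotation and translation (a complete similarity transform), so this is a deliberately reduced sketch under that assumption.

```python
import math

def align_point_cloud(points, ref_pair, real_distance):
    """Scale a unitless feature point cloud to real-world units.

    points: list of (x, y, z) coordinates from image reconstruction (no unit).
    ref_pair: indices of two points whose true separation is known from the
              dimensioned (e.g. CAD) floor drawing.
    real_distance: that separation in metres.
    """
    i, j = ref_pair
    dx, dy, dz = (points[i][k] - points[j][k] for k in range(3))
    model_distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    scale = real_distance / model_distance  # unitless -> metres
    return [(x * scale, y * scale, z * scale) for x, y, z in points]
```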
For the above S303, generating the three-dimensional scene model based on the constructed floor model corresponding to each floor and the real height information of each of the multiple floors corresponding to the target real scene may, as shown in fig. 5, include the following S3031 to S3032:
S3031, determining real height difference information between every two adjacent floors in the multiple floors based on the real height information of each floor;
S3032, generating the three-dimensional scene model based on the real height difference information between every two adjacent floors in the multiple floors and the floor model corresponding to each floor.
When the three-dimensional scene model is generated, the height difference between the floor models of every two adjacent floors can be adjusted according to the real height difference information between those floors, so as to obtain a three-dimensional scene model with a high degree of matching with the target real scene: presented in the same coordinate system, the three-dimensional scene model and the target real scene are at a 1:1 scale, i.e. the three-dimensional scene model can completely coincide with the target real scene.
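S3031 to S3032 can be sketched as follows, under the assumption that each floor's real height information is given as an absolute elevation; the function names are hypothetical, and this only illustrates the idea of stacking floor models by their real height differences, not the disclosed implementation.

```python
def height_differences(floor_elevations):
    """S3031: real height difference between every two adjacent floors."""
    return [upper - lower
            for lower, upper in zip(floor_elevations, floor_elevations[1:])]

def stack_floor_models(floor_clouds, floor_elevations):
    """S3032: offset each floor model along the z-axis by the cumulative
    real height difference so the assembled scene matches the building."""
    offsets = [0.0]
    for diff in height_differences(floor_elevations):
        offsets.append(offsets[-1] + diff)
    return [[(x, y, z + off) for (x, y, z) in cloud]
            for cloud, off in zip(floor_clouds, offsets)]
```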
In a possible implementation manner, after the three-dimensional scene model is generated, as shown in fig. 6, the display method provided by the embodiment of the present disclosure further includes S601 to S602:
S601, determining the position coordinates of the feature points representing each preset navigation destination in the three-dimensional scene model;
S602, associating the three-dimensional scene model with the feature point position coordinates of each preset navigation destination in the three-dimensional scene model, and storing them.
For example, the positions in the target real scene that can serve as destinations may be collected in advance; taking an exhibition hall as an example, the positions of the exhibits in the hall that can serve as destinations may be collected in advance. The position coordinates of the feature points representing each preset navigation destination are then located in the three-dimensional scene model, and the three-dimensional scene model is stored in association with those feature point position coordinates.
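The associated storage of S601 to S602 might look like the following sketch, where the scene model is kept alongside a name-to-coordinate table of preset destinations; the record layout and function names are assumptions made for illustration only.

```python
def associate_destinations(scene_model_id, destination_coords):
    """S602: bundle the scene model with the feature-point position
    coordinates of each preset navigation destination for storage."""
    return {"model": scene_model_id,
            "destinations": dict(destination_coords)}

def lookup_destination(record, name):
    """Return the stored position coordinates of a preset destination,
    or None if the name is not a preset destination."""
    return record["destinations"].get(name)
```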
After this associated storage, generating the navigation route based on the user's current location and the destination, as mentioned in S102 above, may include the following S1021 to S1022, as shown in fig. 7:
S1021, searching, based on the current location, for the departure place position coordinates corresponding to the current location in the three-dimensional scene model, and searching, based on the destination, for the destination position coordinates corresponding to the destination in the three-dimensional scene model;
S1022, determining the navigation route based on the departure place position coordinates, the destination position coordinates, and the obstacle position areas contained in the floor models corresponding to the departure place position coordinates and the destination position coordinates.
For example, the navigation route may be determined based on the coordinates of the departure location of the current location in the preset coordinate system, the coordinates of the destination location of the destination in the preset coordinate system, the location area of the obstacle corresponding to the floor where the current location is located in the preset coordinate system, and the location area of the obstacle corresponding to the floor where the destination is located in the preset coordinate system.
Illustratively, multiple navigation routes may be determined for selection by the user.
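One common way to realize the obstacle-aware route determination of S1022 is a grid search such as A*; the sketch below is an assumption for illustration, not the disclosed algorithm, and it treats the obstacle position areas of a floor model as blocked cells in an occupancy grid.

```python
from heapq import heappush, heappop

def find_route(grid, start, goal):
    """A* search on a per-floor occupancy grid: cells marked 1 are obstacle
    areas taken from the floor model; returns a list of (row, col) cells."""
    def h(cell):  # Manhattan-distance heuristic toward the goal cell
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path so far)
    seen = set()
    while open_set:
        _, cost, cell, path = heappop(open_set)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            # only step into in-bounds cells that are not obstacle areas
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heappush(open_set, (cost + 1 + h((nr, nc)), cost + 1,
                                    (nr, nc), path + [(nr, nc)]))
    return None  # no obstacle-free route exists
```

Running the search with different cost weights or heuristics would naturally yield the multiple candidate routes mentioned above.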
In one possible embodiment, when generating the navigation route based on the current location and destination of the user, the method includes:
under the condition that the current location and the destination are located on different floors, generating a cross-floor navigation route based on the current location and the destination;
presenting a navigation route based on at least one floor model associated with the navigation route, comprising:
the cross-floor navigation route is presented on a plurality of floor models associated with the navigation route.
For example, when the current location is on the first floor and the destination is on the second floor, the generated navigation route includes a segment of the route on the first floor and a segment on the second floor, which more intuitively indicates to the user that the destination is on a different floor from the current location.
In the case where the current location and the destination are on different floors, a cross-floor navigation route can thus be provided, making the navigation more intuitive.
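Splitting a cross-floor navigation route into per-floor segments could be sketched as below, under the assumption that floors are joined by known connector points such as stairs or elevators (a detail the disclosure does not specify); all names are hypothetical.

```python
def cross_floor_route(start, dest, start_floor, dest_floor, connectors):
    """Split a route into per-floor segments. `connectors` holds the (x, y)
    positions of stairs/elevators assumed to exist on both floors."""
    if start_floor == dest_floor:
        return [(start_floor, [start, dest])]
    # walk to the connector nearest the departure point, then continue
    # from that connector on the destination floor
    connector = min(connectors,
                    key=lambda c: abs(c[0] - start[0]) + abs(c[1] - start[1]))
    return [(start_floor, [start, connector]),
            (dest_floor, [connector, dest])]
```

Each returned segment can then be rendered on the floor model of its own floor, which is how the route ends up displayed across multiple floor models.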
As shown in fig. 8, a method for generating a three-dimensional scene model according to an embodiment of the present disclosure includes the following S801 to S803:
S801, acquiring a plurality of real scene images corresponding to each floor in a target real scene;
S802, building a floor model corresponding to each floor based on the plurality of real scene images corresponding to the floor;
S803, generating a three-dimensional scene model representing the target real scene based on the constructed floor model corresponding to each floor and the real height information of each of the multiple floors corresponding to the target real scene; the three-dimensional scene model is used for displaying a navigation route corresponding to the target real scene.
The specific process of determining the three-dimensional scene model corresponding to the target real scene is detailed above and is not repeated here. When a terminal device requests the three-dimensional scene model corresponding to the target real scene from the server, the server may send the pre-generated three-dimensional scene model to the requesting terminal device.
Based on the same inventive concept, the embodiment of the present disclosure further provides a display apparatus of a navigation route corresponding to the display method of the navigation route, and since the principle of the apparatus in the embodiment of the present disclosure for solving the problem is similar to the display method in the embodiment of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 9, a schematic diagram of a display apparatus 900 for a navigation route provided in an embodiment of the present disclosure is shown, the display apparatus including:
a model obtaining module 901, configured to obtain a pre-established three-dimensional scene model corresponding to a target real scene in response to a navigation request for the target real scene; the three-dimensional scene model comprises floor models corresponding to a plurality of floors respectively;
a route generation module 902, configured to obtain a destination selected by a user for a target real scene, and generate a navigation route based on a current location and the destination of the user;
a route display module 903 for displaying the navigation route based on the at least one floor model associated with the navigation route.
In a possible implementation, the display apparatus further comprises a model generation module 904, the model generation module 904 being configured to generate the three-dimensional scene model according to the following steps:
acquiring a plurality of real scene images corresponding to each floor in a target real scene;
building a floor model corresponding to each floor based on a plurality of real scene images corresponding to each floor;
and generating a three-dimensional scene model based on the constructed floor model corresponding to each floor and the real height information of each floor in the multiple floors corresponding to the target real scene.
In a possible implementation manner, when the model generating module 904 is configured to construct the floor model corresponding to each floor based on the plurality of real scene images corresponding to the floor, the model generating module 904 is configured to perform:
extracting a plurality of feature points from each of a plurality of real scene images corresponding to each floor;
generating a floor model corresponding to the floor based on the extracted multiple feature points corresponding to the floor and a prestored three-dimensional sample graph matched with the floor; the three-dimensional sample graph is a pre-stored three-dimensional graph representing the floor topography.
In a possible implementation manner, when the model generating module 904 is configured to generate the three-dimensional scene model based on the constructed floor model corresponding to each floor and the real height information of each of the multiple floors corresponding to the target real scene, the model generating module 904 is configured to perform:
determining real height difference information between every two adjacent floors in the multiple floors based on the real height information of each floor;
and generating a three-dimensional scene model based on the real height difference information between every two adjacent floors in the plurality of floors and the floor model corresponding to each floor.
In a possible implementation, after generating the three-dimensional scene model, the model generation module 904 is further configured to:
determining the position coordinates of the feature points representing each preset navigation destination in the three-dimensional scene model;
and associating and storing the three-dimensional scene model with the position coordinates of the feature points of each preset navigation destination in the three-dimensional scene model.
In one possible implementation, when the route generation module 902 is configured to generate the navigation route based on the current location and the destination of the user, the route generation module 902 is configured to perform:
based on the current location, searching a departure place position coordinate corresponding to the current location in the three-dimensional scene model, and based on the destination, searching a destination position coordinate corresponding to the destination in the three-dimensional scene model;
and determining a navigation route based on the departure place position coordinate, the destination position coordinate and the obstacle position area contained in the floor model corresponding to the departure place position coordinate and the destination position coordinate.
In one possible implementation, when the route generation module 902 is configured to generate the navigation route based on the current location and the destination of the user, the route generation module 902 is configured to perform:
in the case where the current location and the destination are on different floors, a cross-floor navigation route is generated based on the current location and the destination.
The route display module 903, when configured to display the navigation route based on the at least one floor model associated with the navigation route, is configured to perform:
the cross-floor navigation route is presented on a plurality of floor models associated with the navigation route.
Referring to fig. 10, there is shown a schematic diagram of a generating apparatus 1000 for a three-dimensional scene model according to an embodiment of the present disclosure, the generating apparatus includes:
the image acquisition module 1001 is used for acquiring a plurality of real scene images corresponding to each floor in a target real scene;
the first generating module 1002 is configured to construct a floor model corresponding to each floor based on a plurality of real scene images corresponding to each floor;
a second generating module 1003, configured to generate a three-dimensional scene model representing a target reality scene based on the constructed floor model corresponding to each floor and the real height information of each floor in the multiple floors corresponding to the target reality scene; the three-dimensional scene model is used for displaying a navigation route corresponding to the target real scene.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Corresponding to the display method of the navigation route in fig. 1, an embodiment of the present disclosure further provides an electronic device 1100, as shown in fig. 11, a schematic structural diagram of the electronic device 1100 provided in the embodiment of the present disclosure includes:
a processor 111, a memory 112, and a bus 113; the memory 112 is used for storing execution instructions and includes an internal memory 1121 and an external memory 1122; the internal memory 1121 is used for temporarily storing operation data in the processor 111 and data exchanged with the external memory 1122 such as a hard disk, and the processor 111 exchanges data with the external memory 1122 via the internal memory 1121. When the electronic device 1100 runs, the processor 111 communicates with the memory 112 via the bus 113, so that the processor 111 executes the following instructions: responding to a navigation request aiming at a target real scene, and acquiring a pre-established three-dimensional scene model corresponding to the target real scene; the three-dimensional scene model comprises floor models corresponding to a plurality of floors respectively; acquiring a destination selected by a user aiming at the target real scene, and generating a navigation route based on the current location and the destination of the user; and displaying the navigation route based on the at least one floor model associated with the navigation route.
Corresponding to the generation method of the three-dimensional scene model in fig. 8, an embodiment of the present disclosure further provides an electronic device 1200, as shown in fig. 12, a schematic structural diagram of the electronic device 1200 provided in the embodiment of the present disclosure includes:
a processor 121, a memory 122, and a bus 123; the memory 122 is used for storing execution instructions and includes an internal memory 1221 and an external memory 1222; the internal memory 1221 is used for temporarily storing operation data in the processor 121 and data exchanged with the external memory 1222 such as a hard disk, and the processor 121 exchanges data with the external memory 1222 via the internal memory 1221. When the electronic device 1200 runs, the processor 121 communicates with the memory 122 via the bus 123, so that the processor 121 executes the following instructions: acquiring a plurality of real scene images corresponding to each floor in a target real scene; building a floor model corresponding to each floor based on the plurality of real scene images corresponding to the floor; generating a three-dimensional scene model representing the target real scene based on the constructed floor model corresponding to each floor and the real height information of each of the multiple floors corresponding to the target real scene; the three-dimensional scene model is used for displaying a navigation route corresponding to the target real scene.
The embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the presentation method or the generation method described in the above method embodiment. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the display method or the generation method provided by the embodiment of the present disclosure includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the steps of the display method or the generation method described in the above method embodiment, which may be referred to in the above method embodiment specifically, and are not described herein again.
The embodiments of the present disclosure also provide a computer program, which when executed by a processor implements any one of the methods of the foregoing embodiments. The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A display method of a navigation route is characterized by comprising the following steps:
responding to a navigation request aiming at a target real scene, and acquiring a pre-established three-dimensional scene model corresponding to the target real scene; the three-dimensional scene model comprises floor models corresponding to a plurality of floors respectively;
acquiring a destination selected by a user aiming at the target reality scene, and generating a navigation route based on the current location of the user and the destination;
and displaying the navigation route based on at least one floor model associated with the navigation route.
2. The presentation method of claim 1, wherein the three-dimensional scene model is generated according to the following steps:
acquiring a plurality of real scene images corresponding to each floor in the target real scene;
building a floor model corresponding to each floor based on a plurality of real scene images corresponding to each floor;
and generating the three-dimensional scene model based on the constructed floor model corresponding to each floor and the real height information of each floor in the multiple floors corresponding to the target real scene.
3. The display method according to claim 2, wherein the building a floor model corresponding to each floor based on the plurality of real scene images corresponding to the floor comprises:
extracting a plurality of feature points from each of a plurality of real scene images corresponding to each floor;
generating a floor model corresponding to the floor based on the extracted multiple feature points corresponding to the floor and a prestored three-dimensional sample graph matched with the floor; the three-dimensional sample graph is a pre-stored three-dimensional graph representing the floor topography.
4. The display method according to claim 2 or 3, wherein the generating the three-dimensional scene model based on the constructed floor model corresponding to each floor and the real height information of each floor in the multiple floors corresponding to the target reality scene comprises:
determining real height difference information between every two adjacent floors in the plurality of floors based on the real height information of each floor;
and generating the three-dimensional scene model based on the real height difference information between every two adjacent floors in the plurality of floors and the floor model corresponding to each floor.
5. The presentation method according to any one of claims 2 to 4, wherein after generating the three-dimensional scene model, the presentation method further comprises:
determining the position coordinates of the feature points for representing each preset navigation destination in the three-dimensional scene model;
and associating and storing the three-dimensional scene model with the feature point position coordinates of each preset navigation destination in the three-dimensional scene model.
6. The presentation method according to claim 5, wherein the generating a navigation route based on the current location of the user and the destination comprises:
based on the current location, searching a departure place position coordinate corresponding to the current location in the three-dimensional scene model, and based on the destination, searching a destination position coordinate corresponding to the destination in the three-dimensional scene model;
and determining the navigation route based on the departure place position coordinate, the destination position coordinate and the obstacle position area contained in the floor model corresponding to the departure place position coordinate and the destination position coordinate.
7. The presentation method according to any one of claims 1 to 6, wherein the generating a navigation route based on the current location of the user and the destination comprises:
under the condition that the current location and the destination are located on different floors, generating a cross-floor navigation route based on the current location and the destination;
the displaying the navigation route based on the at least one floor model associated with the navigation route comprises:
displaying the cross-floor navigation route on a plurality of floor models associated with the navigation route.
8. A method for generating a three-dimensional scene model, the method comprising:
acquiring a plurality of real scene images corresponding to each floor in a target real scene;
building a floor model corresponding to each floor based on a plurality of real scene images corresponding to each floor;
generating a three-dimensional scene model representing the target reality scene based on the constructed floor model corresponding to each floor and the real height information of each floor in the multiple floors corresponding to the target reality scene; the three-dimensional scene model is used for displaying the navigation route according to any one of claims 1 to 7.
9. A presentation device of a navigation route, characterized in that the presentation device comprises:
the model acquisition module is used for responding to a navigation request aiming at a target real scene and acquiring a pre-established three-dimensional scene model corresponding to the target real scene; the three-dimensional scene model comprises floor models corresponding to a plurality of floors respectively;
the route generation module is used for acquiring a destination selected by a user aiming at the target reality scene and generating a navigation route based on the current location of the user and the destination;
and the route display module is used for displaying the navigation route based on at least one floor model associated with the navigation route.
10. A generation device of a three-dimensional scene model, characterized in that the generation device comprises:
the image acquisition module is used for acquiring a plurality of real scene images corresponding to each floor in a target real scene;
the first generation module is used for constructing a floor model corresponding to each floor based on a plurality of real scene images corresponding to each floor;
the second generation module is used for generating a three-dimensional scene model representing the target reality scene based on the constructed floor model corresponding to each floor and the real height information of each floor in the multiple floors corresponding to the target reality scene; the three-dimensional scene model is used for displaying the navigation route according to any one of claims 1 to 7.
11. An electronic device, comprising: processor, memory and bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the presentation method of any one of claims 1 to 7 or performing the steps of the generation method of claim 8.
12. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, performs the steps of the presentation method as claimed in one of the claims 1 to 7 or the steps of the generation method as claimed in claim 8.
CN202010525784.5A 2020-06-10 2020-06-10 Navigation route display method and three-dimensional scene model generation method and device Pending CN111623782A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010525784.5A CN111623782A (en) 2020-06-10 2020-06-10 Navigation route display method and three-dimensional scene model generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010525784.5A CN111623782A (en) 2020-06-10 2020-06-10 Navigation route display method and three-dimensional scene model generation method and device

Publications (1)

Publication Number Publication Date
CN111623782A true CN111623782A (en) 2020-09-04

Family

ID=72257399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010525784.5A Pending CN111623782A (en) 2020-06-10 2020-06-10 Navigation route display method and three-dimensional scene model generation method and device

Country Status (1)

Country Link
CN (1) CN111623782A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950790A (en) * 2021-02-05 2021-06-11 深圳市慧鲤科技有限公司 Route navigation method, device, electronic equipment and storage medium
CN112965652A (en) * 2021-03-26 2021-06-15 深圳市慧鲤科技有限公司 Information display method and device, electronic equipment and storage medium
CN113739801A (en) * 2021-08-23 2021-12-03 上海明略人工智能(集团)有限公司 Navigation route acquisition method, system, medium and electronic device for sidebar
CN113758486A (en) * 2021-08-20 2021-12-07 阿里巴巴新加坡控股有限公司 Path display method, device and computer program product
CN114061593A (en) * 2020-12-31 2022-02-18 万翼科技有限公司 Navigation method based on building information model and related device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763656A (en) * 2010-01-28 2010-06-30 北京航空航天大学 Construction and display control method for floor and house division model of three-dimensional urban building
CN107665503A (en) * 2017-08-28 2018-02-06 汕头大学 A kind of method for building more floor three-dimensional maps
CN108090959A (en) * 2017-12-07 2018-05-29 中煤航测遥感集团有限公司 Indoor and outdoor one modeling method and device
CN109840944A (en) * 2017-11-24 2019-06-04 财团法人工业技术研究院 3 D model construction method and its system
CN110672089A (en) * 2019-09-23 2020-01-10 上海功存智能科技有限公司 Method and device for navigation in indoor environment
CN110738737A (en) * 2019-10-15 2020-01-31 北京市商汤科技开发有限公司 AR scene image processing method and device, electronic equipment and storage medium


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114061593A (en) * 2020-12-31 2022-02-18 万翼科技有限公司 Navigation method based on building information model and related device
CN114061593B (en) * 2020-12-31 2024-03-12 深圳市万翼数字技术有限公司 Navigation method and related device based on building information model
CN112950790A (en) * 2021-02-05 2021-06-11 深圳市慧鲤科技有限公司 Route navigation method, device, electronic equipment and storage medium
CN112965652A (en) * 2021-03-26 2021-06-15 深圳市慧鲤科技有限公司 Information display method and device, electronic equipment and storage medium
CN113758486A (en) * 2021-08-20 2021-12-07 阿里巴巴新加坡控股有限公司 Path display method, device and computer program product
CN113739801A (en) * 2021-08-23 2021-12-03 上海明略人工智能(集团)有限公司 Navigation route acquisition method, system, medium and electronic device for sidebar

Similar Documents

Publication Publication Date Title
CN111551188B (en) Navigation route generation method and device
CN111623782A (en) Navigation route display method and three-dimensional scene model generation method and device
US10937249B2 (en) Systems and methods for anchoring virtual objects to physical locations
US9443353B2 (en) Methods and systems for capturing and moving 3D models and true-scale metadata of real world objects
JP5776201B2 (en) Information processing apparatus, information sharing method, program, and terminal apparatus
CN105493154B (en) System and method for determining the range of the plane in augmented reality environment
CN107430686A (en) Mass-rent for the zone profiles of positioning of mobile equipment creates and renewal
KR101533320B1 (en) Apparatus for acquiring 3 dimension object information without pointer
EP3190581B1 (en) Interior map establishment device and method using point cloud
CN110136200A (en) Electronic equipment positioning based on image
KR101181967B1 (en) 3D street view system using identification information.
KR101867020B1 (en) Method and apparatus for implementing augmented reality for museum
CN112950790A (en) Route navigation method, device, electronic equipment and storage medium
CN113178006A (en) Navigation map generation method and device, computer equipment and storage medium
JP5469764B1 (en) Building display device, building display system, building display method, and building display program
CN112882576A (en) AR interaction method and device, electronic equipment and storage medium
CN111651052A (en) Virtual sand table display method and device, electronic equipment and storage medium
CN111640235A (en) Queuing information display method and device
JP2014203175A (en) Information processing device, information processing method, and program
CN112288881B (en) Image display method and device, computer equipment and storage medium
CN113345108A (en) Augmented reality data display method and device, electronic equipment and storage medium
CN111639975A (en) Information pushing method and device
KR101317869B1 (en) Device for creating mesh-data, method thereof, server for guide service and smart device
TW202119228A (en) Interactive method and system based on optical communication device
CN108235764B (en) Information processing method and device, cloud processing equipment and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200904