CN110686666A - Navigation method and terminal equipment

Info

Publication number: CN110686666A
Authority: CN (China)
Prior art keywords: information, route, target, user, voice
Legal status: Pending
Application number: CN201910936847.3A
Other languages: Chinese (zh)
Inventor: 龚烜
Current Assignee: Vivo Mobile Communication Co Ltd
Original Assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/005 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00, with correlation of navigation data from several sources, e.g. map or contour matching
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system, the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42 - Determining position
    • G01S 19/48 - Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 - Sound input; Sound output
    • G06F 3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Abstract

The embodiment of the invention provides a navigation method and a terminal device, relates to the field of communication technology, and aims to solve the problem that existing terminal devices cannot navigate accurately in scenes inside buildings. The method comprises the following steps: generating at least one piece of route information according to first information and second information, wherein the first information comprises internal passage information of a target building, the second information is used for indicating a starting position and a destination position located inside the target building, and each piece of route information in the at least one piece of route information is used for indicating a route from the starting position to the destination position; and guiding the user from the starting position to the destination position based on target voice information, wherein the target voice information is voice information output by the terminal device according to target route information, and the target route information is one piece of the at least one piece of route information. The method can be applied to navigation scenes of the terminal device.

Description

Navigation method and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a navigation method and terminal equipment.
Background
With the continuous development of terminal technology, terminal devices are used ever more widely. The navigation function in a terminal device makes travel more convenient for the user.
At present, a user can use map software or navigation software on a terminal device to conveniently achieve purposes such as positioning and navigation. Specifically, the terminal device may determine its real-time position through the Global Positioning System (GPS) and support route inquiry and real-time navigation for the user in walking, public transportation, or driving scenarios.
However, in sheltered environments such as shopping malls, office buildings, and underground garages, GPS positioning is not accurate enough, so the navigation of the terminal device in scenes inside buildings is also not accurate enough.
Disclosure of Invention
The embodiment of the invention provides a navigation method and terminal equipment, and aims to solve the problem that the navigation of the existing terminal equipment in a scene inside a building is not accurate enough.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present invention provides a navigation method applied to a terminal device. The method includes: generating at least one piece of route information according to first information and second information, wherein the first information comprises internal passage information of a target building, the second information is used for indicating a starting position and a destination position located inside the target building, and each piece of route information in the at least one piece of route information is used for indicating a route from the starting position to the destination position; and guiding the user from the starting position to the destination position based on target voice information, wherein the target voice information is voice information output by the terminal device according to target route information, and the target route information is one piece of the at least one piece of route information.
In a second aspect, an embodiment of the present invention provides a terminal device that includes a processing module and a navigation module. The processing module is configured to generate at least one piece of route information according to first information and second information, wherein the first information comprises internal passage information of a target building, the second information is used for indicating a starting position and a destination position located inside the target building, and each piece of route information in the at least one piece of route information is used for indicating a route from the starting position to the destination position. The navigation module is configured to guide the user from the starting position to the destination position based on target voice information; the target voice information is voice information output by the terminal device according to target route information, and the target route information is one of the at least one piece of route information generated by the processing module.
In a third aspect, an embodiment of the present invention provides a terminal device, where the terminal device includes a processor, a memory, and a computer program stored on the memory and executable on the processor, and the computer program, when executed by the processor, implements the steps of the navigation method in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the navigation method in the first aspect.
In an embodiment of the present invention, at least one piece of route information may be generated based on first information and second information, the first information including inside passage information of a target building, the second information indicating a start position and a destination position located inside the target building, each of the at least one piece of route information indicating a route from the start position to the destination position; guiding the user to reach the target position from the initial position based on the target voice information; the target voice information is voice information output by the terminal device according to target route information, and the target route information is one route information in the at least one route information. By the scheme, when navigation is performed in building scenes such as shopping malls, residential buildings, basements and the like with low positioning accuracy, a navigation route generated aiming at a building internal channel, a user starting position and a destination position in the building can be adopted, and the user is guided to reach the destination position from the starting position through voice interaction. The navigation route is an internal navigation route generated aiming at a special scene, so that a user can be guided to accurately find a target position according to the navigation route, the navigation accuracy of the terminal equipment can be improved, and the problem that a specific destination is difficult to find due to inaccurate positioning in a sheltered scene and a poor signal scene in the conventional walking navigation can be solved.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a navigation method according to an embodiment of the present invention;
FIG. 3 is a second schematic diagram of a navigation method according to an embodiment of the present invention;
fig. 4 is one of schematic interfaces of an application of the navigation method according to the embodiment of the present invention;
FIG. 5 is a second schematic interface diagram of an application of the navigation method according to the embodiment of the present invention;
FIG. 6 is a third schematic diagram illustrating a navigation method according to an embodiment of the present invention;
FIG. 7 is a fourth schematic diagram illustrating a navigation method according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 9 is a hardware schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The term "and/or" herein is an association relationship describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. The symbol "/" herein denotes a relationship in which the associated object is or, for example, a/B denotes a or B.
The terms "first" and "second," and the like, in the description and in the claims herein are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first information, the second information, and the like are for distinguishing different information, not for describing a specific order of information.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred over or more advantageous than other embodiments or designs. Rather, the use of "exemplary" or "for example" is intended to present a concept in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of processing units means two or more processing units, or the like; plural elements means two or more elements, and the like.
The embodiment of the invention provides a navigation method and terminal equipment, which can generate at least one piece of route information according to first information and second information, wherein the first information comprises internal channel information of a target building, the second information is used for indicating a starting position and a destination position located inside the target building, and each piece of route information in the at least one piece of route information is used for indicating a route from the starting position to the destination position; guiding the user to reach the target position from the initial position based on the target voice information; the target voice information is voice information output by the terminal device according to target route information, and the target route information is one route information in the at least one route information. By the scheme, when navigation is performed in building scenes such as shopping malls, residential buildings, basements and the like with low positioning accuracy, a navigation route generated aiming at a building internal channel, a user starting position and a destination position in the building can be adopted, and the user is guided to reach the destination position from the starting position through voice interaction. The navigation route is an internal navigation route generated aiming at a special scene, so that a user can be guided to accurately find a target position according to the navigation route, the navigation accuracy of the terminal equipment can be improved, and the problem that a specific destination is difficult to find due to inaccurate positioning in a sheltered scene and a poor signal scene in the conventional walking navigation can be solved.
The terminal device in the embodiment of the present invention may be a terminal device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
The following describes a software environment to which the navigation method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the navigation method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the navigation method may operate based on the android operating system shown in fig. 1. Namely, the processor or the terminal device can implement the navigation method provided by the embodiment of the invention by running the software program in the android operating system.
The terminal equipment in the embodiment of the invention can be a mobile terminal or a non-mobile terminal. For example, the mobile terminal may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile terminal may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiment of the present invention is not limited in particular.
The execution subject of the navigation method provided by the embodiment of the present invention may be the terminal device, or may also be a functional module and/or a functional entity capable of implementing the navigation method in the terminal device, which may be specifically determined according to actual use requirements, and the embodiment of the present invention is not limited. The following takes a terminal device as an example to exemplarily explain the navigation method provided by the embodiment of the present invention.
The following describes an exemplary navigation method provided by an embodiment of the present invention with reference to the drawings.
As shown in fig. 2, an embodiment of the present invention provides a navigation method, which may include steps 200 to 201 described below.
200, generating at least one piece of route information by the terminal equipment according to the first information and the second information; the first information includes interior passage information of a target building, and the second information indicates a start position and a destination position located inside the target building.
Wherein each of the at least one piece of route information may be used to indicate a route from the starting location to the destination location.
In the embodiment of the invention, when the user is in a scene with a poor signal (for example, a sheltered scene such as a shopping mall, an office building, or an underground garage, referred to below as a special scene), GPS positioning in that scene is subject to deviation, so the positioning of the user's terminal device is inaccurate and the navigation effect is poor. To solve the problem that navigation by the terminal device is not accurate enough in such sheltered scenes, when navigating in building scenes with low positioning accuracy such as shopping malls, residential buildings, and basements, the terminal device can generate a navigation route based on the internal passages of the building and the user's starting position and destination position inside the building, and then guide the user from the starting position to the destination position.
Optionally, in a possible implementation manner, the terminal device may generate at least one piece of route information in advance according to the first information and the second information, and store the at least one piece of route information. Then, when the user needs to navigate, the terminal device can call out at least one piece of route information corresponding to a route from the starting position to the destination position from the pre-stored route information according to the starting position and the destination position as long as the starting position and the destination position are determined.
In the embodiment of the present invention, the terminal device may store at least one piece of route information in advance for a special scene, and each piece of route information may be used to indicate a navigation route between two particular locations in that scene (e.g. an entrance position and a shop position). For example, each piece of the at least one piece of route information may be stored in the form "entrance position - destination position: movement route information", where the movement route information may include a plurality of pieces of sub-route information.
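As a minimal sketch (not taken from the patent; the entrance names, destination names, and route texts below are illustrative assumptions), such a pre-stored route table could be keyed by the entrance position and destination position, with the movement route information held as a list of sub-route instructions:

# Minimal sketch of a pre-stored route table (illustrative names, not from the patent).
# Each entry is keyed by (entrance position, destination position) and holds the
# movement route information as a list of sub-route instructions.
ROUTE_TABLE = {
    ("south door", "shop A"): [
        "From the south door, walk straight to the central escalator.",
        "Take the escalator up to the fourth floor.",
        "Turn left at the fourth-floor landing; shop A is the third storefront.",
    ],
    ("west door", "shop A"): [
        "From the west door, take the second corridor on the right.",
        "Take elevator 2 up to the fourth floor.",
        "Shop A is opposite the elevator exit.",
    ],
}

def lookup_route(entrance: str, destination: str) -> list[str] | None:
    """Return the stored movement route information for an entrance/destination pair."""
    return ROUTE_TABLE.get((entrance, destination))

Once the entrance position has been located and the user has entered the destination, a single lookup of this kind returns the stored movement route information that the voice interaction then walks through.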
For example, after the user enters the first floor of a mall, the terminal device determines through the navigation system that the user has entered the mall and invokes the voice assistant. After the user chooses to turn on the assist mode, the terminal device locates the user's approximate position near the south door of the mall (i.e. the entrance position) through the navigation system. The terminal device may then retrieve the corresponding "entrance position - destination position: movement route information" from the stored at least one piece of navigation route information according to the entrance position and the destination (i.e. the destination position) input by the user.
Therefore, once the user is in a shielded special scene and starts the auxiliary navigation function, the terminal equipment can call out target route information from at least one piece of prestored route information and provides navigation prompts for the user through voice interaction, and therefore the navigation accuracy of the terminal equipment can be improved.
Optionally, in another possible implementation manner, when the user needs to navigate, the terminal device may first obtain the first information and the second information (i.e., the starting location information and the destination location information), and then generate at least one piece of route information according to the first information and the second information.
Alternatively, in the embodiment of the present invention, the starting position (i.e. the position where the terminal device is currently located) may be inside the target building, or may be near the target building with the user about to enter it. This can be determined according to actual use requirements, and the embodiment of the invention is not limited in this respect.
Optionally, in the embodiment of the present invention, the start position information may be determined after a user inputs the start position on the terminal device (for example, selects the start position on an electronic map, or inputs a start position name in an input box), or may be determined after positioning by a GPS, and may specifically be determined according to an actual use requirement, which is not limited in the embodiment of the present invention.
Optionally, in the embodiment of the present invention, the destination location information may be determined after the user selects a destination location on an electronic map of the terminal device, or may be determined after the user inputs a destination location name in an input box, and may specifically be determined according to an actual use requirement, which is not limited in the embodiment of the present invention.
Step 201, guiding a user to reach a target position from an initial position by terminal equipment based on target voice information; the target voice information is voice information output by the terminal device according to target route information, and the target route information is one route information in at least one route information.
In the embodiment of the present invention, after the terminal device generates at least one piece of route information, a corresponding target voice message may be generated based on one piece of route information (i.e., target route information) in the at least one piece of route information, and the target voice message may be output; further, the terminal device may guide the user from the start position to the destination position based on the target voice information.
Optionally, in an embodiment of the present invention, the target route indicated by the target route information may include a plurality of sub-routes, i.e. a plurality of consecutive sub-routes, and each sub-route may include a start node and a destination node. It will be appreciated that, for any two consecutive sub-routes, the destination node of the previous sub-route may be the start node of the next sub-route. The target voice information includes at least one piece of voice information, each piece of voice information indicating one sub-route of the plurality of sub-routes of the target route. Based on this, the above step 201 can be implemented by the following step 201a.
Step 201a, the terminal device guides the user to reach the first destination node from the first starting node based on the first voice information.
The first voice message may be one of the at least one voice message, and the first voice message is used to indicate a first sub-route, where the first sub-route includes a first start node and a first destination node.
In the embodiment of the present invention, since the target route includes a plurality of sub-routes, the target voice information includes a plurality of pieces of voice information corresponding to those sub-routes. Based on this voice information, the terminal device may guide the user from the start node of one sub-route to its destination node, and then from the start node of the next sub-route to its destination node, and in this way guide the user from the starting position to the destination position.
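The following is a hedged sketch of how the target route and its consecutive sub-routes might be represented; the type and field names are assumptions for illustration, not structures defined by the patent:

from dataclasses import dataclass

@dataclass
class SubRoute:
    start_node: str          # location node where this sub-route begins
    destination_node: str    # next adjacent location node on the target route
    voice_hint: str          # voice information output for this sub-route

@dataclass
class TargetRoute:
    sub_routes: list[SubRoute]

    def is_continuous(self) -> bool:
        # consecutive sub-routes share a node: previous destination = next start
        return all(
            a.destination_node == b.start_node
            for a, b in zip(self.sub_routes, self.sub_routes[1:])
        )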
Optionally, in the embodiment of the present invention, the step 201a may be implemented by the following steps 201b to 201 d.
And step 201b, the terminal equipment receives second voice information input by the user.
The second voice information is the information input by voice under the condition that the user is at the first starting node.
Step 201c, the terminal device determines a first sub-route from the plurality of sub-routes of the target route in response to the second voice information.
Wherein the first sub-route is a route from the first start node to the first destination node.
Step 201d, the terminal device outputs the first voice message corresponding to the first sub-route, and guides the user to reach the first destination node from the first start node.
Specifically, after the user enters the above special scene (a target mall is taken as an example below), the terminal device may enable the auxiliary navigation function of the navigation application and perform the auxiliary navigation operation. That is, during voice interaction between the terminal device and the user, after the user inputs the second voice information (indicating the first start node) by voice, the terminal device receives the second voice information and may determine the first sub-route from the plurality of sub-routes of the retrieved target route according to the second voice information.
It should be noted that, in the case of enabling the above-mentioned auxiliary navigation function, navigation is started through voice interaction, and at this time, the GPS positioning function and the navigation function may be temporarily disabled.
Optionally, in this embodiment of the present invention, after receiving the second voice information input by the user through voice, the terminal device may first determine whether the location key information (for example, the location keyword) in the second voice information is included in the preset key information, and if the terminal device recognizes that the location key information in the second voice information is included in the preset key information, the terminal device may determine the first sub-route from the multiple sub-routes of the target route according to the location key information. The preset key information may include a plurality of location keywords, which are respectively used for indicating each location node in the target route.
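As an illustration of the keyword check described above (the preset key information and keyword values below are assumed for the example, not taken from the patent), the terminal might only act on a location keyword that appears in the preset set:

# Illustrative preset key information: location keywords for the nodes of the target route.
PRESET_KEYWORDS = {"south door", "central escalator", "fourth floor", "shop a"}

def extract_location_keyword(utterance: str) -> str | None:
    """Return the first preset location keyword contained in the utterance, if any."""
    text = utterance.lower()
    for keyword in PRESET_KEYWORDS:
        if keyword in text:
            return keyword
    return None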
Optionally, in the embodiment of the present invention, the step 201c may be specifically implemented by the following steps 201e to 201 g.
Step 201e, the terminal device determines a first start node from the target route according to the second voice information.
Wherein, assuming that the second voice information is used to indicate the first start node, the terminal device may determine, from the target route, a location node indicated by the second voice information (i.e., the first start node).
Optionally, in this embodiment of the present invention, the second speech information may be information that is input by a speech when the user is at the first start node.
Step 201f, the terminal device determines a first destination node from the target route according to the first starting node.
The first destination node may be a next location node adjacent to the first start node in the target route.
Step 201g, the terminal device determines a first sub-route from the plurality of sub-routes of the target route according to the first starting node and the first destination node.
It will be appreciated that the first sub-route described above is a route from the first origin node to the first destination node.
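Steps 201e to 201g can be sketched as follows, reusing the SubRoute/TargetRoute structures assumed earlier: locate the sub-route whose start node matches the node named in the user's voice input; its destination node is then the next adjacent location node:

def find_sub_route(route: TargetRoute, spoken_start_node: str) -> SubRoute | None:
    """Sketch of steps 201e-201g: find the sub-route that starts at the node the user named."""
    for sub_route in route.sub_routes:
        if sub_route.start_node == spoken_start_node:
            # sub_route.destination_node is the next adjacent location node (step 201f),
            # so this sub-route is the route from the start node to that node (step 201g)
            return sub_route
    return None  # the spoken node is not a start node on this route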
It should be noted that, the above is described by taking an example of determining one sub-route from multiple sub-routes through one voice interaction between a user and a terminal device, and it can be understood that, for multiple sub-routes of a target route, in the embodiment of the present invention, each sub-route in the multiple sub-routes may be sequentially determined through multiple voice interactions between the user and the terminal device, so as to guide the user to reach the target location from a starting location.
In the embodiment of the present invention, while the navigation application is running, the user may perform voice interaction with the terminal device. After the terminal device receives the second voice information input by the user by voice, the terminal device may determine the first sub-route from the plurality of sub-routes of the target route according to the second voice information, and may then output, by voice, information (i.e., the second information in the example below) indicating the first sub-route. In this way, the user may follow the first sub-route. Therefore, according to the embodiment of the invention, each of the plurality of sub-routes can be determined in sequence through multiple voice interactions between the user and the terminal device, so as to guide the user from the starting position to the destination position.
The following describes, by way of example, a possible implementation manner of the navigation method provided by the embodiment of the present invention.
Illustratively, assume that the target route includes three location nodes: the first location node (corresponding to the first location, i.e. the start location), the second location node (corresponding to the second location), and the third location node (corresponding to the third location, i.e. the destination location), the route from the first location node to the second location node is the first sub-route, and the route from the second location node to the third location node is the second sub-route.
Then, when the user is at the first location, if the user inputs, by voice, information (i.e. the first information) indicating the first location node, the terminal device may determine the first location node from the target route according to the first information; then determine, from the target route, the location node next to the first location node (i.e. the second location node); and determine a sub-route from the plurality of sub-routes according to the first location node and the second location node (in this case, the determined sub-route is the first sub-route). The terminal device then outputs, by voice, information indicating that sub-route (i.e. the above-described second information). In this manner, the user may travel along the first sub-route to reach the second location.
Further, after the user arrives at the second location, if the user inputs, by voice, information (i.e. the first information) indicating the second location node, the terminal device may determine the second location node from the target route according to the first information; then determine, from the target route, the location node next to the second location node (i.e. the third location node); and determine a sub-route from the plurality of sub-routes according to the second location node and the third location node (in this case, the determined sub-route is the second sub-route). The terminal device then outputs, by voice, information indicating that sub-route (i.e. the above-described second information). In this way, the user may travel along the second sub-route to reach the third location.
In the embodiment of the invention, each time the user arrives at a location, the user feeds the location information back to the terminal device; the terminal device determines a sub-route from the target route according to the location information and outputs that sub-route by voice, and the user can then travel along it. In this way, the embodiment of the invention determines the first sub-route and the second sub-route of the target route in turn through multiple voice interactions between the user and the terminal device, thereby guiding the user from the starting position to the destination position.
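Putting the pieces together, the repeated voice interaction described above can be condensed into a simple loop (reusing the assumed helpers from the earlier sketches; speech_in and speech_out stand in for the device's voice input and output and are not real APIs):

def guide_user(route: TargetRoute, destination_node: str, speech_in, speech_out) -> None:
    """Repeat the voice interaction until the user has been guided to the destination node."""
    while True:
        utterance = speech_in()                    # e.g. "I have reached the central escalator"
        node = extract_location_keyword(utterance)
        sub_route = find_sub_route(route, node) if node else None
        if sub_route is None:
            speech_out("Please tell me which location on the route you have reached.")
            continue
        speech_out(sub_route.voice_hint)           # guide the user along this sub-route
        if sub_route.destination_node == destination_node:
            break                                  # the next stop is the destination position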
In the embodiment of the present invention, while the navigation application is running, navigation can be started after the user inputs the destination location information on the running interface of the navigation application (so that the terminal device obtains the destination location information). When the electronic map and/or the camera detects that the user has entered a scene such as a shopping mall, residential building, or office building and has not yet arrived at the destination position, the artificial-intelligence voice assistant can be invoked. At this point the user may choose to turn on the auxiliary navigation function (or the terminal device may turn it on automatically), and the terminal device may locate the user's current position (or receive starting position information input by the user), so that the terminal device obtains the starting position information. The terminal device may then obtain the target route information from the at least one piece of pre-stored route information according to the starting position information and the destination position information. The user can then interact with the voice assistant by speech, and the terminal device can output navigation information by voice based on the target route information, so that the user can find the destination position with the help of the artificial-intelligence voice assistant.
According to the navigation method provided by the embodiment of the invention, at least one piece of route information can be generated according to first information and second information, wherein the first information comprises internal passage information of a target building, the second information is used for indicating a starting position and a destination position located inside the target building, and each piece of route information in the at least one piece of route information is used for indicating a route from the starting position to the destination position; guiding the user to reach the target position from the initial position based on the target voice information; the target voice information is voice information output by the terminal device according to target route information, and the target route information is one route information in the at least one route information. By the scheme, when navigation is performed in building scenes such as shopping malls, residential buildings, basements and the like with low positioning accuracy, a navigation route generated aiming at internal channels of the building can be adopted to guide a user to reach a target position from an initial position through voice interaction. The navigation route is an accurate navigation route generated according to the building internal passage, the user starting position and the target position in the building, so that the user can be guided to accurately find the target position according to the navigation route, and the navigation accuracy of the terminal equipment can be improved.
It should be noted that the navigation method provided by the embodiment of the present invention may be applicable to a walking navigation scenario, a driving navigation scenario, and any other scenario meeting actual use requirements, and may be specifically determined according to actual use requirements, which is not limited in the embodiment of the present invention.
Optionally, in this embodiment of the present invention, the first information may further include a target name and target floor information. The target name may be the name of the target building or the name of a merchant within the target building. The target floor information is used to indicate a target floor in the target building.
illustratively, in conjunction with fig. 2, as shown in fig. 3, the above step 200 may be specifically defined by the following steps 200a to 200 d. It should be noted that the user described below may be a merchant user, or may be a general user different from the merchant user.
Step 200a, while the navigation application is running, the terminal device receives a first input of the user.
The first input may be used to input the target name and the target floor information.
Optionally, in this embodiment of the present invention, the first input of the user may be an input (for example, a text input or a number input) by a merchant user on an operation interface of the navigation application (for example, the interface shown after the navigation marking function is turned on), or any other input for calling up a map interface of a target floor of the target building; it may be determined according to actual use requirements, and this embodiment of the present invention is not limited in this respect.
Step 200b, the terminal device responds to the first input and displays a target map interface containing the target floor corresponding to the target floor information.
The target map interface comprises a target floor corresponding to the target floor information and other floors of a target building, and an initial position and a target position can be input on the target map interface.
Step 200c, the terminal device receives a second input of the user on the target map interface.
Wherein the second input may be used to select a starting location and a destination location.
Optionally, in this embodiment of the present invention, the second input may be a click input (for example, a single click input or a double click input) on the target map interface, or may be a long-press input, or may also be any other input meeting an actual use requirement, which may be determined specifically according to the actual use requirement, which is not limited in this embodiment of the present invention.
Step 200d, the terminal device responds to the second input and generates at least one piece of route information according to the starting position information, the destination position information, and the internal passage information of the target building.
The start position information may be used to indicate a start position, and the destination position information may be used to indicate a destination position.
Specifically, each piece of route information may include entry position information indicating a start position of the navigation route, destination position information indicating a destination position of the navigation route, and route information from the start position to the destination position.
In the embodiment of the invention, in a shopping-mall scene, a merchant user can call up the map interface of a floor by inputting the building name and floor information and select a target position on that map interface; accordingly, the terminal device may generate at least one piece of navigation route information indicating a navigation route from each entrance to the destination position. By supporting merchant-marked positioning together with artificial-intelligence voice interaction, the embodiment of the invention addresses the inaccurate positioning of current walking navigation when the electronic map is not updated in time or the environment is sheltered or has a poor signal.
The specific implementation of the above steps 200 a-200 d is described in detail below.
(1): when the merchant user judges that the positioning result of the shop by the navigation system is not accurate, the merchant user can start the marking function of the navigation system and select the name of the building where the merchant user is located and the corresponding floor (for example, the fourth floor).
(2): the terminal device searches a floor map (such as a four-floor map) corresponding to the building according to the selection input of the merchant user. If the floor map in the database is not accurate, the merchant can also select to upload a self-defined floor map.
(3): the merchant user marks the location of the merchant on the target map interface, such as marker a, as shown in fig. 4.
(4): the terminal equipment generates a plurality of pieces of moving route information capable of reaching the target position according to the floor map layout and the target position selected by the merchant user, and stores the entrance position-the target position: movement route information ".
The terminal device can respectively find the entrance positions and the optimal moving route of each entrance position to the destination position by using an image recognition algorithm and a path optimization algorithm.
As shown in fig. 5, first, the entrance positions (e.g., south door, west door) that can reach the fourth floor are locked, then the destination positions marked by the user (e.g., position a) are determined, the shortest route of each entrance position reaching the destination position is calculated, and at least one piece of navigation route information is generated.
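The patent names an image recognition algorithm and a path optimization algorithm without specifying them; purely as an illustration, under the assumption that the floor map has been reduced to a grid of walkable cells, a breadth-first search yields the shortest route from each entrance position to the marked destination position:

from collections import deque

def shortest_route(walkable, entrance, destination):
    """Breadth-first search over walkable (row, col) cells; returns the cell path or None."""
    queue = deque([entrance])
    came_from = {entrance: None}
    while queue:
        cell = queue.popleft()
        if cell == destination:
            path = []
            while cell is not None:      # walk the predecessor chain back to the entrance
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        row, col = cell
        for nxt in ((row + 1, col), (row - 1, col), (row, col + 1), (row, col - 1)):
            if nxt in walkable and nxt not in came_from:
                came_from[nxt] = cell
                queue.append(nxt)
    return None

def routes_from_entrances(walkable, entrances, destination):
    """One "entrance position - destination position: route" entry per entrance."""
    return {(e, destination): shortest_route(walkable, e, destination) for e in entrances}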
In the embodiment of the invention, the merchant user interacts with the navigation system to generate, for the different shops on each floor of a building, the navigation routes from each entrance to each shop, so that an ordinary user can call up a navigation route after entering the building and be given accurate navigation information by this navigation method.
The navigation method provided by the embodiment of the invention enables voice interaction between the merchant user and the navigation system and between the ordinary user and the navigation system, allows position information to be custom-marked, and promptly resolves the inaccurate navigation caused by an electronic map that is not updated in time. The following describes, by way of example, a possible implementation manner of the navigation method provided by the embodiment of the present invention.
TABLE 1 (reproduced as an image in the original publication: example user speech and corresponding voice-assistant speech during the voice interaction)
The embodiment of the invention performs navigation through voice interaction and involves the concepts of slots and general dictionaries from voice interaction technology, specifically as follows.
Slot 1: {location}:
a. Necessity: required
b. Dictionary: room xx, elevator, stairs, the x-th crossing, the x-th door, building x, and specific location nouns (obtained from context)
c. Utterance handling: disambiguation of intent
General dictionary: turn on assisted navigation, arrived, {location}
Table 1 shows exemplary representations of user speech and voice assisted speech during voice interaction.
It is understood that the slot content, the general dictionary, the user speech, and the voice-assistant speech listed above are only examples; they may be determined according to actual use requirements, and the embodiment of the present invention is not limited in this respect.
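As a hedged sketch of the slot and general-dictionary idea (the dictionaries below are partial and illustrative, not the patent's full definitions), an utterance can be matched against general-dictionary phrases while the {location} slot is filled from a location dictionary:

import re

# Partial, illustrative dictionaries (the patent only lists examples).
LOCATION_DICTIONARY = ["room 402", "elevator", "stairs", "the second crossing", "south door"]

def fill_location_slot(utterance: str) -> str | None:
    """Fill the {location} slot with the first dictionary entry found in the utterance."""
    for location in LOCATION_DICTIONARY:
        if re.search(re.escape(location), utterance, flags=re.IGNORECASE):
            return location
    return None

def parse_utterance(utterance: str):
    """Return (intent, location); intent is 'open_assist', 'arrived', or None."""
    text = utterance.lower()
    location = fill_location_slot(text)
    if "turn on assisted navigation" in text:
        return "open_assist", location
    if "arrived" in text or "reached" in text:
        return "arrived", location
    return None, location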
In the embodiment of the invention, the current scene can be identified through the electronic map and the camera of the navigation system; when the scene is judged to be one in which positioning accuracy is affected, such as a shopping mall, a residential building, or a basement, the artificial-intelligence voice assistant is invoked, and interaction with it helps the user successfully find the destination, so the navigation accuracy of the terminal device can be improved.
In the embodiment of the invention, when navigation is carried out in building scenes such as shopping malls, residential buildings, basements and the like with lower positioning accuracy, a navigation route generated aiming at an internal passage of the building, a user starting position and a target position in the building can be adopted to guide the user to reach the target position from the starting position through voice interaction. The navigation route is an internal navigation route generated aiming at a special scene, so that a user can be guided to accurately find a target position according to the navigation route, the navigation accuracy of the terminal equipment can be improved, and the problem that a specific destination is difficult to find due to inaccurate positioning in a sheltered scene and a poor signal scene in the conventional walking navigation can be solved.
Optionally, in the embodiment of the present invention, as shown in fig. 6 in combination with fig. 2, after the step 200 and before the step 201, the navigation method provided in the embodiment of the present invention may further include the following step 202.
Step 202, the terminal device determines route information corresponding to a route with the shortest distance between the starting position and the destination position in at least one piece of route information as target route information.
Optionally, in the embodiment of the present invention, as shown in fig. 7 with reference to fig. 2, after the step 200 and before the step 201, the navigation method provided in the embodiment of the present invention may further include the following step 203.
In step 203, in the case that a third input of the user for the at least one piece of route information is received, the terminal device determines route information corresponding to the third input as the target route information.
Optionally, in the embodiment of the present invention, the third input for the at least one piece of route information may be a voice input of the user, or may also be a selection input of the user on the terminal device, which may be specifically determined according to an actual use requirement, and the embodiment of the present invention is not limited.
It should be noted that, in the embodiment of the present invention, the terminal device may alternatively perform step 202 and step 203, that is, the terminal device may perform step 202, or the terminal device may perform step 203. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
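A small sketch of the two alternative selection rules (step 202 or step 203), assuming each candidate route carries a precomputed length; the function and parameter names are illustrative:

def select_target_route(candidate_routes, user_choice_index=None):
    """candidate_routes: list of (route_info, length_in_metres) pairs."""
    if user_choice_index is not None:
        return candidate_routes[user_choice_index][0]        # step 203: user's third input
    return min(candidate_routes, key=lambda rc: rc[1])[0]    # step 202: shortest route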
In the embodiment of the invention, when a user is in a special shielded scene, an internal navigation route aiming at the special scene can be obtained firstly, and the internal navigation route is adopted to provide the navigation route for the user through voice interaction.
As shown in fig. 8, an embodiment of the present invention provides a terminal device 700, where the terminal device 700 may include a processing module 701 and a navigation module 702.
A processing module 701, configured to generate at least one piece of route information according to first information and second information, where the first information includes internal passage information of a target building, the second information is used to indicate a starting location and a destination location located inside the target building, and each piece of route information in the at least one piece of route information is used to indicate a route from the starting location to the destination location.
The navigation module 702 is configured to guide the user from the starting position to the destination position based on the target voice information.
The target voice information is voice information output by the terminal device according to target route information, and the target route information is one of at least one piece of route information generated by the processing module 701.
Optionally, in this embodiment of the present invention, the target voice information may include at least one voice message, each voice message is used to indicate one sub-route of multiple sub-routes of the target route, the target route is the route indicated by the target route information, and each sub-route includes a start node and a destination node.
The navigation module is specifically used for guiding a user to arrive at a first destination node from a first starting node based on the first voice information; the first voice message is one of the at least one voice message, and the first voice message is used for indicating a first sub-route, and the first sub-route comprises a first starting node and a first destination node.
Optionally, in this embodiment of the present invention, the navigation module 702 may specifically include a first receiving sub-module 7021, a first processing sub-module 7022, and an output sub-module 7023.
The first receiving submodule 7021 is configured to receive second voice information input by the user, where the second voice information is information input by the user when the user is located at the first start node;
the first processing submodule 7022 is configured to determine, in response to the second voice message received by the first receiving submodule, a first sub-route from the plurality of sub-routes of the target route, where the first sub-route is a route from the first start node to the first destination node.
And the output sub-module 7023 is configured to output the first voice message corresponding to the first sub-route determined by the processing sub-module, and guide the user to reach the first destination node from the first start node.
Optionally, in this embodiment of the present invention, the first information may further include a target name and target floor information, where the target name is a name of a target building, and the target floor information is used to indicate a target floor in the target building. The processing module 701 may specifically include a second receiving sub-module 7011, a display sub-module 7012, and a second processing sub-module 7013.
A second receiving submodule 7011, configured to receive a first input of the user when the navigation application is running, where the first input is an input for a target name and target floor information;
the display sub-module 7012 is configured to, in response to the first input received by the second receiving sub-module, display a target map interface including a target floor corresponding to the target floor information.
The second receiving sub-module 7011 is further configured to receive a second input from the user on the target map interface displayed by the display sub-module, where the second input is used to select the starting location and the destination location.
The second processing sub-module 7013 is configured to, in response to a second input received by the second receiving sub-module, generate the at least one piece of route information according to the start position information, the destination position information, and the internal channel information of the target building.
Wherein the start position information is used for indicating the start position, and the destination position information is used for indicating the destination position.
Optionally, in this embodiment of the present invention, after generating at least one piece of route information, before the navigation module 702 guides the user to reach the destination location from the start location based on the target voice information, the processing module 701 is further configured to determine, as the target route information, route information corresponding to a route in which a distance between the start location and the destination location is shortest in the at least one piece of route information;
optionally, in this embodiment of the present invention, after the at least one piece of route information is generated, before the navigation module 702 guides the user to reach the destination location from the start location based on the target voice information, and in a case that a third input of the user for the at least one piece of route information is received, the processing module 701 is further configured to determine, as the target route information, the route information corresponding to the third input.
The terminal device provided by the embodiment of the present invention can implement each process implemented by the terminal device in the above method embodiments, and is not described here again to avoid repetition.
The terminal device provided by the embodiment of the invention can generate at least one piece of route information according to first information and second information, wherein the first information comprises internal passage information of a target building, the second information is used for indicating a starting position and a destination position located inside the target building, and each piece of route information in the at least one piece of route information is used for indicating a route from the starting position to the destination position; and can guide the user from the starting position to the destination position based on target voice information, the target voice information being voice information output by the terminal device according to target route information, and the target route information being one piece of route information in the at least one piece of route information. By this scheme, when navigation is performed in building scenes such as shopping malls, residential buildings, and basements with low positioning accuracy, a navigation route generated for the internal passages of the building and the user's starting position and destination position in the building can be adopted, and the user is guided from the starting position to the destination position through voice interaction. Since the navigation route is an internal navigation route generated for a special scene, the user can be guided to accurately find the destination position according to the navigation route, the navigation accuracy of the terminal device can be improved, and the problem that a specific destination is difficult to find in conventional walking navigation due to inaccurate positioning in sheltered and poor-signal scenes can be solved.
Fig. 9 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention. As shown in fig. 9, the terminal device 800 includes but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810, and a power supply 811. Those skilled in the art will appreciate that the terminal device structure shown in fig. 9 does not constitute a limitation of the terminal device, and the terminal device may include more or fewer components than shown, combine certain components, or arrange the components differently. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 810 is configured to generate at least one piece of route information according to first information and second information, where the first information includes internal passage information of a target building, the second information indicates a starting location and a destination location inside the target building, and each piece of route information indicates a route from the starting location to the destination location. The audio output unit 803 is configured to guide a user from the starting location to the destination location based on target voice information; the target voice information is voice information output by the terminal device according to target route information, and the target route information is one piece of the at least one piece of route information generated by the processor 810.
The embodiment of the invention provides a terminal device which, when navigating in building scenarios with low positioning accuracy such as shopping malls, residential buildings, and basements, can use a navigation route generated from the building's internal passages, the user's starting position, and the destination position inside the building, and guide the user from the starting position to the destination position through voice interaction. Because the navigation route is an internal navigation route generated for this special scenario, the user can be guided to accurately find the destination position, the navigation accuracy of the terminal device is improved, and the problem in conventional walking navigation that a specific destination is hard to find due to inaccurate positioning in sheltered or poor-signal scenarios can be solved.
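For illustration only, the sketch below walks a generated route segment by segment, announcing each sub-route by voice and waiting for a spoken confirmation before moving on; the speak() and listen() helpers are console stand-ins for the audio output unit 803 and the microphone 8042, not APIs defined by the embodiment.

# Illustrative sketch of segment-by-segment voice guidance along a target route.
# speak() and listen() stand in for text-to-speech and speech recognition.
from typing import List

def speak(text: str) -> None:
    print(f"[voice out] {text}")

def listen() -> str:
    return input("[voice in ] ")

def guide_along(route_nodes: List[str]) -> None:
    """Guide the user through each sub-route (consecutive node pair) in turn."""
    for start_node, dest_node in zip(route_nodes, route_nodes[1:]):
        speak(f"From {start_node}, head to {dest_node}.")
        while "arrived" not in listen().lower():
            speak(f"Keep going toward {dest_node}.")
    speak("You have reached the destination position.")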
It should be understood that, in the embodiment of the present invention, the radio frequency unit 801 may be used for receiving and sending signals during a message sending and receiving process or a call process; specifically, it receives downlink data from a base station and sends the downlink data to the processor 810 for processing, and it also sends uplink data to the base station. In general, the radio frequency unit 801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Furthermore, the radio frequency unit 801 can also communicate with a network and other devices through a wireless communication system.
The terminal device 800 provides the user with wireless broadband internet access through the network module 802, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 803 may convert audio data received by the radio frequency unit 801 or the network module 802 or stored in the memory 809 into an audio signal and output as sound. Also, the audio output unit 803 may also provide audio output related to a specific function performed by the terminal apparatus 800 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 803 includes a speaker, a buzzer, a receiver, and the like.
The input unit 804 is used for receiving an audio or video signal. The input unit 804 may include a graphics processing unit (GPU) 8041 and a microphone 8042, and the graphics processor 8041 processes image data of a still picture or video obtained by an image capturing apparatus (for example, a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 806. The image frames processed by the graphics processor 8041 may be stored in the memory 809 (or another storage medium) or transmitted via the radio frequency unit 801 or the network module 802. The microphone 8042 can receive sound and process such sound into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 801 and then output.
The terminal device 800 also includes at least one sensor 805, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 8061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 8061 and/or the backlight when the terminal device 800 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 805 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 806 is used to display information input by the user or information provided to the user. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 807 is operable to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may collect touch operations by a user on or near it (for example, operations performed by the user on or near the touch panel 8071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 8071 may include two portions: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 810, and receives and executes commands sent by the processor 810. In addition, the touch panel 8071 may be implemented as a resistive, capacitive, infrared, or surface acoustic wave type. Besides the touch panel 8071, the user input unit 807 may include other input devices 8072. In particular, the other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 8071 can be overlaid on the display panel 8061, and when the touch panel 8071 detects a touch operation on or near the touch panel 8071, the touch operation is transmitted to the processor 810 to determine the type of the touch event, and then the processor 810 provides a corresponding visual output on the display panel 8061 according to the type of the touch event. Although in fig. 9, the touch panel 8071 and the display panel 8061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 8071 and the display panel 8061 may be integrated to implement the input and output functions of the terminal device, and this is not limited herein.
The interface unit 808 is an interface for connecting an external device to the terminal apparatus 800. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 808 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 800 or may be used to transmit data between the terminal apparatus 800 and an external device.
The memory 809 may be used to store software programs as well as various data. The memory 809 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the cellular phone (such as audio data and a phonebook), and the like. Further, the memory 809 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 810 is a control center of the terminal device, connects various parts of the whole terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 809 and calling data stored in the memory 809, thereby performing overall monitoring of the terminal device. Processor 810 may include one or more processing units; optionally, the processor 810 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 810.
The terminal device 800 may also include a power supply 811 (for example, a battery) for supplying power to the components. Optionally, the power supply 811 may be logically coupled to the processor 810 via a power management system, so as to manage charging, discharging, and power consumption through the power management system.
In addition, the terminal device 800 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides a terminal device, which includes the processor 810 shown in fig. 9, a memory 809, and a computer program stored in the memory 809 and capable of running on the processor 810, where the computer program, when executed by the processor 810, implements each process of the navigation method embodiment, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the navigation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may include a read-only memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method disclosed in the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A method of navigation, the method comprising:
generating at least one piece of route information according to first information and second information, wherein the first information comprises internal passage information of a target building, the second information is used for indicating a starting position and a destination position located inside the target building, and each piece of route information in the at least one piece of route information is used for indicating a route from the starting position to the destination position;
guiding a user to arrive at the destination position from the starting position based on target voice information;
the target voice information is voice information output by the terminal device according to target route information, and the target route information is one of the at least one piece of route information.
2. The method according to claim 1, wherein the target voice information includes at least one piece of voice information, each piece of voice information indicating one of a plurality of sub-routes of a target route, the target route being the route indicated by the target route information, and each sub-route including a starting node and a destination node;
the guiding the user to reach the destination location from the starting location based on the target voice information comprises:
guiding the user to reach the first destination node from the first starting node based on the first voice information;
wherein the first voice information is one piece of the at least one piece of voice information, the first voice information is used for indicating a first sub-route, and the first sub-route includes the first starting node and the first destination node.
3. The method of claim 2, wherein the guiding the user to reach the first destination node from the first starting node based on the first voice information comprises:
receiving second voice information input by a user, wherein the second voice information is information input by voice when the user is at the first starting node;
determining the first sub-route from a plurality of sub-routes of the target route in response to the second voice information, the first sub-route being a route from the first starting node to the first destination node;
and outputting first voice information corresponding to the first sub-route to guide a user to reach the first destination node from the first starting node.
4. The method of claim 1, wherein the first information further comprises a target name and target floor information, the target name being a name of the target building, the target floor information indicating a target floor within the target building;
generating at least one piece of route information according to the first information and the second information, including:
receiving a first input of a user while a navigation application program is running, wherein the first input is an input for the target name and the target floor information;
displaying, in response to the first input, a target map interface containing a target floor corresponding to the target floor information;
receiving a second input of the user on the target map interface, the second input being used for selecting the starting position and the destination position;
generating the at least one piece of route information according to start position information, destination position information, and internal passage information of the target building in response to the second input;
wherein the start position information is used for indicating the start position, and the destination position information is used for indicating the destination position.
5. The method according to any one of claims 1 to 4, wherein after the generating at least one piece of route information, before the guiding a user from the starting location to the destination location based on the target voice information, the method further comprises:
determining route information corresponding to a route with the shortest distance between the starting position and the destination position in the at least one piece of route information as the target route information;
or,
in a case where a third input of the user for the at least one piece of route information is received, determining the route information corresponding to the third input as the target route information.
6. A terminal device, characterized by comprising a processing module and a navigation module;
the processing module is used for generating at least one piece of route information according to first information and second information, wherein the first information comprises internal passage information of a target building, the second information is used for indicating a starting position and a destination position located inside the target building, and each piece of route information in the at least one piece of route information is used for indicating a route from the starting position to the destination position;
the navigation module is used for guiding a user to arrive at the destination position from the starting position based on target voice information;
the target voice information is voice information output by the terminal device according to target route information, and the target route information is one of the at least one piece of route information generated by the processing module.
7. The terminal device according to claim 6, wherein the target voice information includes at least one piece of voice information, each piece of voice information indicating one of a plurality of sub-routes of a target route, the target route being the route indicated by the target route information, and each sub-route including a starting node and a destination node;
The navigation module is specifically configured to guide a user to reach a first destination node from a first starting node based on first voice information; wherein the first voice information is one piece of the at least one piece of voice information, the first voice information is used for indicating a first sub-route, and the first sub-route includes the first starting node and the first destination node.
8. The terminal device according to claim 7, wherein the navigation module specifically includes a first receiving sub-module, a first processing sub-module, and an output sub-module;
the first receiving submodule is used for receiving second voice information input by a user, and the second voice information is the voice input information under the condition that the user is at a first starting node;
the processing submodule is configured to determine, in response to the second voice information received by the first receiving submodule, the first sub-route from a plurality of sub-routes of the target route, where the first sub-route is a route from the first start node to the first destination node;
the first output sub-module is configured to output the first voice message corresponding to the first sub-route determined by the processing sub-module, and guide a user to reach the first destination node from the first start node.
9. The terminal device according to claim 6, wherein the first information further includes a target name and target floor information, the target name being a name of the target building, the target floor information indicating a target floor within the target building; the processing module specifically comprises a second receiving submodule, a display submodule and a second processing submodule;
the second receiving submodule is used for receiving a first input of a user under the condition that a navigation application program runs, wherein the first input is input aiming at a target name and target floor information;
the display submodule is used for responding to the first input received by the second receiving submodule and displaying a target map interface containing a target floor corresponding to the target floor information;
the second receiving submodule is further used for receiving a second input of the user on the target map interface displayed by the display submodule, wherein the second input is used for selecting the starting position and the destination position;
the second processing submodule is configured to generate, in response to the second input received by the second receiving submodule, the at least one piece of route information according to start position information, destination position information, and internal channel information of the target building;
wherein the start position information is used for indicating the start position, and the destination position information is used for indicating the destination position.
10. The terminal device according to any one of claims 6 to 9, wherein the processing module is further configured to, after the at least one piece of route information is generated and before the navigation module guides a user to reach the destination location from the start location based on target voice information, determine, as the target route information, the route information corresponding to the route with the shortest distance between the start location and the destination location in the at least one piece of route information;
or,
the processing module is further configured to, after the at least one piece of route information is generated and before the navigation module guides the user to reach the destination location from the start location based on the target voice information, determine, in a case that a third input of the user for the at least one piece of route information is received, the route information corresponding to the third input as the target route information.
11. A terminal device, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the navigation method according to any one of claims 1 to 5.
CN201910936847.3A 2019-09-29 2019-09-29 Navigation method and terminal equipment Pending CN110686666A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910936847.3A CN110686666A (en) 2019-09-29 2019-09-29 Navigation method and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910936847.3A CN110686666A (en) 2019-09-29 2019-09-29 Navigation method and terminal equipment

Publications (1)

Publication Number Publication Date
CN110686666A true CN110686666A (en) 2020-01-14

Family

ID=69111164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910936847.3A Pending CN110686666A (en) 2019-09-29 2019-09-29 Navigation method and terminal equipment

Country Status (1)

Country Link
CN (1) CN110686666A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112857371A (en) * 2020-12-29 2021-05-28 上海企树网络科技有限公司 Navigation two-dimensional code generation method, park navigation method and park navigation device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110137549A1 (en) * 2009-12-09 2011-06-09 Qualcomm Incorporated Method and apparatus for reducing instructions in an indoor navigation environment
CN103822626A (en) * 2014-02-17 2014-05-28 惠州Tcl移动通信有限公司 Mobile terminal, digital map generation method or navigation method thereof and devices
CN105222785A (en) * 2015-09-07 2016-01-06 广东欧珀移动通信有限公司 A kind of route recommendation method in indoor place and user terminal
CN106052696A (en) * 2016-08-17 2016-10-26 西安理工大学 Museum real-time route guidance method based on mobile terminals
CN106169247A (en) * 2016-08-04 2016-11-30 上海交通大学 The garage parking indoor positioning of view-based access control model and map and micro navigation system and method
CN106918334A (en) * 2015-12-25 2017-07-04 高德信息技术有限公司 Indoor navigation method and device

Similar Documents

Publication Publication Date Title
JP6312716B2 (en) Method, apparatus and medium for determining a position of a mobile device within an indoor environment
CN108519080B (en) Navigation route planning method and terminal
CN110221737B (en) Icon display method and terminal equipment
CN111142723B (en) Icon moving method and electronic equipment
WO2020253340A1 (en) Navigation method and mobile terminal
CN107846518B (en) Navigation state switching method, mobile terminal and computer readable storage medium
CN108917766B (en) Navigation method and mobile terminal
WO2020063165A1 (en) Navigation method and terminal device
WO2020215991A1 (en) Display control method and terminal device
CN108986528B (en) Driving guide method, information indication method, terminal device and server
CN111026350A (en) Display control method and electronic equipment
WO2019154360A1 (en) Interface switching method and mobile terminal
CN111126995A (en) Payment method and electronic equipment
CN111124245A (en) Control method and electronic equipment
CN111459361B (en) Application icon display method and device and electronic equipment
CN111090489B (en) Information control method and electronic equipment
CN110940339A (en) Navigation method and electronic equipment
CN109067975B (en) Contact person information management method and terminal equipment
CN111158556B (en) Display control method and electronic equipment
CN110686666A (en) Navigation method and terminal equipment
CN110035379B (en) Positioning method and terminal equipment
CN110470293B (en) Navigation method and mobile terminal
CN109388471B (en) Navigation method and device
KR20100050322A (en) Navigation apparatus and method thereof
CN111256678A (en) Navigation method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200114