CN109084750B - Navigation method and electronic equipment - Google Patents

Navigation method and electronic equipment

Info

Publication number
CN109084750B
CN109084750B
Authority
CN
China
Prior art keywords
navigation
application
picture
picture content
information
Prior art date
Legal status
Active
Application number
CN201811109880.0A
Other languages
Chinese (zh)
Other versions
CN109084750A (en)
Inventor
蔡明祥
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201811109880.0A
Publication of CN109084750A
Application granted
Publication of CN109084750B


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00

Abstract

The application provides a navigation method and an electronic device. A navigation instruction for a target object is obtained in a first application (distinct from a second application used for navigation). In response to the instruction, the method obtains first position information of the target object, starts and runs the second application, and then uses the second application to perform a navigation operation whose destination is the position indicated by the first position information. A user can therefore trigger navigation for a target object from any application other than the navigation application and obtain automatic navigation, without manually opening navigation software or entering a destination. Navigation is thus started automatically across applications; the operation is simple and flexible, and the degree of intelligence of navigation and the convenience and flexibility of navigation applications are effectively improved.

Description

Navigation method and electronic equipment
Technical Field
The invention belongs to the technical field of intelligent navigation, and particularly relates to a navigation method and electronic equipment.
Background
With the development and popularization of smart phones, tablet computers, and various other small portable/wearable smart terminals, the applications of terminal devices have penetrated into various aspects of people's lives, such as communication, work, entertainment, games, shopping, navigation, and so on.
As one aspect of terminal applications, navigation (e.g., mobile phone navigation) is increasingly used and brings convenience to people's daily travel. At present, when a terminal device performs navigation, the user generally has to open the navigation software manually and enter destination information in the corresponding input box of the software interface in order to navigate from the user's current position to the destination. This navigation mode clearly suffers from a low degree of intelligence and poor convenience and flexibility.
Disclosure of Invention
In view of this, the present invention provides a navigation method and an electronic device for improving the degree of intelligence of navigation and the convenience and flexibility of navigation applications.
Therefore, the invention discloses the following technical scheme:
a navigation method, comprising:
obtaining a navigation instruction for a target object in a first application;
obtaining first position information of the target object;
launching and running a second application for navigation, the first application distinct from the second application;
performing, with the second application, a navigation operation with a navigation destination being the location indicated by the first location information.
In the above method, preferably, obtaining the navigation instruction for the target object in the first application includes:
detecting operation information generated when an operating body performs a predetermined operation on the target object in the first application, the predetermined operation being used to trigger navigation;
and generating the navigation instruction based on the operation information.
In the above method, preferably, the target object is any one of, or a combination of, text, a picture, or a video in the first application;
if the target object is a target picture in the first application, the obtaining of the first position information of the target object includes:
recognizing and extracting first picture content with position attributes and/or second picture content carrying position data in the target picture based on an Optical Character Recognition (OCR) technology;
and determining the first picture content and/or the second picture content as first position information of the target picture.
In the above method, preferably, performing, with the second application, the navigation operation whose navigation destination is the location indicated by the first location information includes:
submitting the first location information to the second application;
obtaining second position information of a navigation starting position based on a positioning technology in the second application;
and navigating in the second application based on the second position information of the navigation starting position and the first position information used for indicating a navigation destination.
In the above method, preferably, navigating in the second application based on the second location information of the navigation start location and the first location information indicating the navigation destination includes:
if the first position information only comprises the first picture content, determining a first position attribute value of the first picture content, which is matched with second position information of the navigation starting position, wherein the first position attribute value is position information; navigating in the second application based on the second position information of the navigation starting position and the first position attribute value of the first picture content;
and if the first position information comprises the second picture content, navigating in the second application based on the second position information of the navigation starting position and the position data carried in the second picture content.
In the above method, preferably, navigating in the second application based on the second location information of the navigation start location and the first location information indicating the navigation destination includes:
if the first position information comprises the first picture content and the second picture content, judging whether position data carried by the first picture content and the second picture content are matched;
if they match, navigating in the second application based on the second position information of the navigation start position and the position data carried in the second picture content;
if they do not match, determining a second position attribute value of the first picture content that matches the second position information of the navigation start position or the position data carried by the second picture content, the second position attribute value being position information; and navigating in the second application based on the second position information of the navigation start position and the second position attribute value of the first picture content.
An electronic device comprising at least two applications capable of running on the electronic device, further comprising:
a memory for storing at least one set of instructions;
a processor for invoking and executing the set of instructions in the memory, by executing the set of instructions:
obtaining a navigation instruction for a target object in a first application;
obtaining first position information of the target object;
launching and running a second application for navigation, the first application distinct from the second application;
performing, with the second application, a navigation operation with a navigation destination being the location indicated by the first location information.
In the above electronic device, preferably, the target object is any one of, or a combination of, text, a picture, or a video in the first application;
if the target object is a target picture in the first application, the processor obtains first position information of the target object, and specifically includes:
identifying and extracting first picture content with position attributes and/or second picture content carrying position data in the target picture based on an OCR technology;
and determining the first picture content and/or the second picture content as first position information of the target picture.
In the above electronic device, preferably, the processor performs, with the second application, a navigation operation whose navigation destination is the location indicated by the first location information, which specifically includes:
submitting the first location information to the second application;
obtaining second position information of a navigation starting position based on a positioning technology in the second application;
and navigating in the second application based on the second position information of the navigation starting position and the first position information used for indicating a navigation destination.
An electronic device comprising at least two applications capable of running on the electronic device, further comprising:
a first acquisition unit for acquiring a navigation instruction for a target object in a first application;
a second acquisition unit configured to acquire first position information of the target object;
the starting unit is used for starting and running a second application for navigation, and the first application is different from the second application;
and a navigation unit, configured to perform, with the second application, a navigation operation whose navigation destination is the position indicated by the first position information.
According to the above scheme, with the navigation method and electronic device provided by the application, a navigation instruction for a target object can be obtained in a first application (distinct from a second application used for navigation). In response to the instruction, the first position information of the target object is obtained, the second application for navigation is started and run, and the second application then performs a navigation operation whose destination is the position indicated by the first position information, thereby navigating for the target object in the first application. By applying this scheme, a user can trigger a navigation instruction for a target object in applications other than the navigation application on the electronic device and obtain automatic navigation, without manually opening navigation software or entering a destination. Navigation is thus started automatically across applications with a simple and flexible operation, which effectively improves the degree of intelligence of navigation and the convenience and flexibility of navigation applications.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description show only embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flowchart of a first embodiment of a navigation method provided in the present application;
FIG. 2 is a flowchart of a second embodiment of a navigation method provided in the present application;
FIG. 3 is a flowchart of a third embodiment of a navigation method provided in the present application;
FIG. 4 is a flowchart of a fourth embodiment of a navigation method provided in the present application;
FIG. 5 is a schematic structural diagram of a fifth embodiment of an electronic device provided in the present application;
FIG. 6 is a schematic structural diagram of an electronic device according to a ninth embodiment provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
To improve the degree of intelligence of navigation and the convenience and flexibility of navigation applications, the present application provides a navigation method and an electronic device, which are described below through several embodiments.
Referring to fig. 1, a flowchart of a first embodiment of a navigation method provided in the present application is shown, where the navigation method is applicable to an electronic device.
The electronic device may be, but is not limited to, a portable mobile terminal such as a smart phone, a tablet computer, or a personal digital assistant (PDA), or a wearable smart terminal such as a smart bracelet or a smart watch. The electronic device has a navigation function and includes at least two applications that can run on it: one application provides the navigation function, and the other application(s) may be any one or more of, but not limited to, an application providing a communication function (e.g., WeChat, QQ, e-mail), an application providing a photographing/shooting function (e.g., a camera), an application providing image/video storage and management (e.g., a photo album), an application providing schedule and memo functions (e.g., a notepad), an application providing web functions (e.g., a browser), and so on.
As shown in fig. 1, the navigation method of the present embodiment includes the following steps:
Step 101, obtaining a navigation instruction for a target object in a first application.
The first application is different from the second application for navigation, that is, the first application is another application installed on the electronic device, such as but not limited to the application for providing a communication function, the application for providing a photographing/shooting function, or the application for providing an image/video storage and management function, etc. as described above.
The target object may be any one of, or a combination of, text, a picture, or a video in the first application, for example: a picture, short video, or text shared by friends in a communication application such as WeChat or QQ; a picture, short video, or text from the network; text in an e-mail or notepad; a picture or short video in an album; or a picture or short video taken by the user in real time with the camera.
In practical applications, the navigation instruction for the target object in the first application of the electronic device can be triggered by the user performing a corresponding operation on the first application.
Step 102, obtaining first position information of the target object.
The main objective of the present application is to start navigation automatically and directly from the target object in the first application (unlike the prior art, in which the user must manually open the navigation software and enter destination information in an input box of the software interface). The target object therefore needs to carry corresponding location information.
The target object may carry location information in two ways. First, it may carry data content that directly embodies its location information: for example, a picture or short video (shared by a friend, or from the user's album) may directly carry the address at which the friend/user was located when it was taken, or a text may contain content describing an address. Second, it may carry data content that indirectly represents location information: for example, a picture or short video may contain a merchant logo pattern or a restaurant name, which can be matched to corresponding location information based on map information, so such content can serve as data that indirectly represents location information.
Of course, in practical applications the target object may carry both data content that directly represents its position information and data content that indirectly represents it; for example, an image in the album may include both a merchant logo pattern and the position located when the user took the photo.
In view of this, in step 102 the first location information of the target object can be determined by identifying the data content that directly represents its location and/or the data content that indirectly represents it. For example, the address information carried in a picture (the address located when the picture was taken) can be identified and extracted as the picture's first location information, or a merchant logo pattern or restaurant name in the picture can be identified and matched with location information based on map information.
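As a hedged illustration of step 102, the sketch below models the two cases described above: a geotag carried directly by the target object, and named content (a logo or venue name) matched indirectly against map data. The field names and the toy POI index are assumptions for illustration, not part of the patent.

```python
# Minimal sketch of step 102: deriving first position information from a
# target object. The field names ("geotag", "detected_names") and the
# toy POI index are illustrative assumptions.
TOY_POI_INDEX = {
    "XX Restaurant": (39.9042, 116.4074),  # hypothetical map data
}

def first_position_info(target):
    """Prefer a directly carried geotag; otherwise match indirectly
    location-bearing content (logos, venue names) against map data."""
    if target.get("geotag"):                       # direct: coords embedded in the object
        return {"kind": "direct", "value": target["geotag"]}
    for name in target.get("detected_names", []):  # indirect: match via map information
        if name in TOY_POI_INDEX:
            return {"kind": "indirect", "value": TOY_POI_INDEX[name]}
    return None
```

A target carrying both kinds of content resolves to the direct geotag first, matching the preference expressed later in the embodiments.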
Step 103, starting and running a second application for navigation.
On the basis of the above steps, in this step, the electronic device automatically starts and runs the second application for navigation in response to the navigation instruction triggered by the target object in the first application.
Step 104, performing, with the second application, a navigation operation whose navigation destination is the position indicated by the first position information.
And in the second application which is automatically started and operated, the position indicated by the first position information of the target object is used as a navigation destination to navigate based on the map information.
For example, suppose a WeChat friend of the user shares a restaurant picture that carries a located address. After the user performs the predetermined operation in WeChat to trigger a navigation instruction for the picture, the electronic device can identify and extract the address information in the picture, automatically start and run the second application for navigation, and then navigate the user with the extracted address as the destination. From the user's point of view, automatic navigation is achieved simply by performing one navigation operation on the target object in the current application, without manually opening the navigation software or entering a destination; navigation is started automatically across applications, and the operation is simple and flexible.
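The cross-application flow of steps 101 to 104 could be orchestrated roughly as follows. This is a sketch under stated assumptions: `extract_address`, `launch_nav_app`, and `navigate_to` are hypothetical stand-ins for the OCR step, the application launch, and the navigation API, none of which the patent names concretely.

```python
def extract_address(picture):
    """Stand-in for OCR extraction of the embedded address in a picture."""
    return picture.get("embedded_address")

def run_cross_app_navigation(picture, launch_nav_app, navigate_to):
    """Steps 101-104: extract the destination, start the second
    application, and navigate to the extracted address."""
    address = extract_address(picture)      # step 102: first position information
    if address is None:
        return None                          # nothing to navigate to
    nav_app = launch_nav_app()               # step 103: start the second application
    return navigate_to(nav_app, address)     # step 104: destination = extracted address
```

Note that the navigation application is only launched once a usable destination has been extracted; launching it without a destination would just drop the user at an empty map.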
According to the above, the navigation method provided in this embodiment obtains a navigation instruction for a target object in a first application (distinct from a second application used for navigation). In response to the instruction, it obtains the first location information of the target object, starts and runs the second application, and then uses the second application to perform a navigation operation whose destination is the location indicated by the first location information, thereby navigating for the target object in the first application. By applying this scheme, a user can trigger a navigation instruction for a target object in applications other than the navigation application on the electronic device and obtain automatic navigation, without manually opening navigation software or entering a destination. Navigation is thus started automatically across applications with a simple and flexible operation, which effectively improves the degree of intelligence of navigation and the convenience and flexibility of navigation applications.
Referring to FIG. 2, a flowchart of a second embodiment of the navigation method provided in the present application is shown. The second embodiment details the navigation method further; as shown in FIG. 2, the method can be implemented by the following process:
Step 201, detecting operation information generated when an operating body performs a predetermined operation on the target object in the first application; the predetermined operation is used to trigger navigation.
In practical applications, the user can trigger the navigation instruction for the target object in the first application of the electronic device by performing a corresponding operation on the first application with an operating body such as a finger or a stylus.
The predetermined operation (used to trigger the navigation instruction for the target object) that the user performs on the first application with the operating body may include, but is not limited to, the following. The user may perform a predetermined gesture on the target object (text, picture, or video) displayed on the application interface of the first application that directly triggers navigation of the target object, such as a one-finger or two-finger press whose duration or pressure exceeds a preset threshold, a triple click on the target object, a sliding gesture on the target object, or even an air gesture with predetermined gesture characteristics that does not touch the target object displayed on the interface. Alternatively, an operation-item selection menu for the target object may be called up by a certain operation; the menu provides selectable operation items for the target object, among which a navigation item is preset (for example, menu options of copy, cut, send, navigate, and add emoticon), so that after the menu is called up, the navigation instruction for the target object can be triggered by selecting the navigation item.
When the user performs one of the above operations on the target object in the first application with the operating body, the electronic device can detect the operation information with the relevant sensing device and a corresponding detection technology, for the purpose of generating a navigation instruction. For example, a pressure sensor can detect the pressure with which the user presses the target object; a touch sensor combined with screen-positioning technology can detect that the user has called up the operation-item selection menu and selected a certain item (such as "navigate"); and a camera can collect gesture information when the user performs an air gesture on the target object displayed on the screen.
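A minimal sketch of how the detected operation information might be classified as navigation-triggering or not. The event fields and the numeric thresholds are illustrative assumptions; the patent only requires that some predetermined operation be recognized.

```python
# Sketch of step 201: deciding whether detected operation information
# should trigger navigation. Thresholds and event fields are assumed.
PRESSURE_THRESHOLD = 0.6     # normalized press pressure, assumed 0-1 scale
DURATION_THRESHOLD_MS = 800  # assumed long-press duration threshold

def triggers_navigation(event):
    """Return True if the operation information indicates navigation."""
    if event.get("type") == "press":
        return (event.get("pressure", 0) > PRESSURE_THRESHOLD
                or event.get("duration_ms", 0) > DURATION_THRESHOLD_MS)
    if event.get("type") == "menu_select":
        return event.get("item") == "navigate"
    if event.get("type") == "air_gesture":
        return event.get("matched_gesture") == "navigation"
    return False
```

A positive result here would correspond to step 202 below: the device then generates the navigation instruction from the detected operation information.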
Step 202, generating a navigation instruction based on the operation information.
After detecting the operation information of the predetermined operation performed by the user's operating body on the target object in the first application, the electronic device identifies whether the operation information is intended to trigger navigation: for example, the pressure pressing the target object exceeds the preset threshold, the user's air gesture matches a predetermined navigation gesture based on gesture-matching technology, or the user selects the "navigate" item in the called-up operation menu. If so, the electronic device generates a navigation instruction based on the detected operation information.
Step 203, if the target object is a target picture in the first application, identifying and extracting a first picture content with a position attribute and/or a second picture content with position data in the target picture based on an Optical Character Recognition (OCR) technology; and determining the first picture content and/or the second picture content as first position information of the target picture.
The main objective of the present application is to start navigation automatically and directly from the target object in the first application (unlike the prior art, in which the user must manually open the navigation software and enter destination information in an input box of the software interface). The target object therefore needs to carry corresponding location information.
As mentioned above, the target object is any one or combination of text, picture or video in the first application. If the target object is a target picture in the first application, such as a picture shared by friends in communication applications such as WeChat and QQ, or a picture in a user album, the first picture content with the position attribute in the target picture and/or the second picture content with the position data can be identified and extracted based on an Optical Character Recognition (OCR) technology.
The first picture content with a location attribute may be, for example, a mall logo pattern, a restaurant name, or a brand identification pattern in the picture. Such information does not record address data directly, but it has a location attribute: after it is identified, corresponding location information can be matched for the target picture based on map information during subsequent navigation. The location of the target picture is thus represented indirectly, so content such as a mall logo pattern, restaurant name, or brand identification pattern can serve as the first location information of the target picture.
The second picture content carrying position data may be, but is not limited to, the address located when the picture was taken. For example, a picture shared by a friend, or a picture in the user's album, may have the shooting address located and embedded in the picture when it was taken (generally in text form), so the text describing the address can be identified directly with OCR technology and used as the first position information of the target picture.
For other types of target objects, such as text, the address description in the text can be identified and extracted directly to determine the first position information. For a short video, the video can be split into frames; for each frame, the first picture content with a position attribute and/or the second picture content carrying position data can be identified with the optical character recognition technology, and the corresponding first position information determined.
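The frame-by-frame treatment of a short video described above can be sketched as follows. The `ocr` callable and the toy address pattern are stand-ins for a real OCR engine and address parser, which the patent does not specify.

```python
# Sketch: split a short video into frames and scan each with an OCR
# function until address-like text is found. The regex is a toy
# English-style address pattern for illustration only.
import re

ADDRESS_PATTERN = re.compile(r"\d+\s+\w+\s+(Street|Road|Ave)")

def find_position_in_frames(frames, ocr):
    """Return the first address-like OCR hit across the video's frames."""
    for frame in frames:
        text = ocr(frame)                     # stand-in for a real OCR engine
        match = ADDRESS_PATTERN.search(text)
        if match:
            return match.group(0)
    return None
```

Stopping at the first hit keeps the scan cheap; a fuller implementation might collect hits from all frames and pick the most frequent one.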
Step 204, starting and running a second application for navigation, the first application being different from the second application.
On the basis of the above steps, in this step, the electronic device automatically starts and runs the second application for navigation in response to the navigation instruction triggered by the target object in the first application.
Step 205, submitting the first location information to the second application.
After the second application is started and running, the first position information of the target picture is submitted to it. The first position information is the first picture content with a position attribute (such as a merchant logo or restaurant name) and/or the position data carried by the second picture content (such as the address located when the photo was taken). The second application can determine the navigation destination from this information: for example, it can match location information to the merchant logo or restaurant name in the picture based on map information and use that as the destination, or directly use the address located when the photo was taken.
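On the receiving side, the second application's choice between carried position data and named content could look like the minimal sketch below; the map lookup table is a hypothetical placeholder for real map information.

```python
# Sketch of the navigation app's side of step 205: determine the
# destination from the submitted first position information. The map
# index is an assumed placeholder for real map data.
MAP_INDEX = {"Coffee House": (40.0, 116.3)}

def resolve_destination(first_position_info):
    """Prefer carried position data; otherwise match named content
    (logo / restaurant name) against map information."""
    if first_position_info.get("position_data"):     # e.g. geotag from the photo
        return first_position_info["position_data"]
    name = first_position_info.get("named_content")  # e.g. restaurant name
    return MAP_INDEX.get(name)
```

Preferring carried position data mirrors the embodiments below, where the directly carried address takes priority over indirectly matched content.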
Step 206, obtaining second position information of the navigation start position in the second application based on the positioning technology.
Since navigation requires both a start position and a destination, the electronic device also obtains second position information of the navigation start position in the second application based on a positioning technology.
Step 207, in the second application, navigating based on the second position information of the navigation start position and the first position information for indicating a navigation destination.
On the basis of obtaining second position information of the navigation starting position and determining a navigation destination based on the first position information of the target object, the electronic equipment can utilize a second application to navigate the user so as to enable the user to be navigated to the position indicated by the first position information of the target object.
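Once the start position and destination are known, the hand-off to the navigation application can be as simple as building a navigation URI. The sketch below uses the `google.navigation:` scheme recognized by Google Maps on Android purely as an illustration; the patent does not name any particular navigation application or URI scheme.

```python
def navigation_uri(dest_lat, dest_lng):
    """Build a navigation URI for the destination coordinates.
    The google.navigation scheme is one example used by Google Maps
    on Android; other navigation apps use different schemes."""
    return f"google.navigation:q={dest_lat:.6f},{dest_lng:.6f}"
```

On Android such a URI would typically be wrapped in a VIEW intent; the navigation application then uses the device's own positioning for the start of the route, matching step 206 above.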
By applying this scheme, a user can trigger a navigation instruction for a target object in applications other than the navigation application on the electronic device, such as WeChat, QQ, the photo album, mail, or the browser, and automatic navigation is carried out without manually opening the navigation software or entering a navigation destination. Navigation is thus started automatically across applications; the operation is simple and flexible, the degree of navigation intelligence is effectively improved, and the convenience and flexibility of navigation applications are improved.
Referring to fig. 3, which is a flowchart of a third embodiment of the navigation method provided in the present application. The third embodiment provides one possible implementation of step 207 (navigating, in the second application, based on the second position information of the navigation start position and the first position information indicating the navigation destination). As shown in fig. 3, step 207 may be implemented by the following processing procedure:
Step 301, if the first location information includes only the first picture content, determining a first position attribute value of the first picture content that matches the second position information of the navigation start position, where the first position attribute value is position information; and navigating in the second application based on the second position information of the navigation start position and the first position attribute value of the first picture content.
If the first location information of the target picture includes only first picture content with a position attribute, for example a picture shared by a friend, a picture in the user's album, or a picture from the network contains only a merchant logo, brand identifier, or restaurant name but carries no position data such as the address located when the picture was taken, then a first position attribute value of the first picture content that matches the second position information of the navigation start position can be determined, where the first position attribute value is position information.
As an example, the first picture content, such as a merchant logo, brand identifier, or restaurant name in the target picture, is preferentially matched to the position information closest to the second position information of the navigation start position; for example, the merchant location closest to the user's current position that carries the merchant logo in the target picture is found and used as the first position attribute value of the target picture. As another example, several merchant locations around the user's current position that carry the merchant logo in the target picture may be found and recommended to the user, and position information (i.e., the first position attribute value) is matched to the target picture based on the user's selection. These situations are merely exemplary descriptions provided by the present application; the specific implementation is not limited thereto.
On the basis, navigation can be performed in the second application based on the second position information of the navigation starting position and the first position attribute value of the first picture content.
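The nearest-match rule of step 301 can be sketched in a few lines, assuming a candidate list of merchant locations has already been retrieved from map data for the recognized logo or name; the function name and the use of straight-line distance are illustrative assumptions.

```python
# Minimal sketch of step 301's preferential matching: choose the merchant
# location closest to the navigation start position as the first position
# attribute value. Straight-line distance stands in for a real map-routing
# distance.
import math

def nearest_match(start, candidates):
    """start: (lat, lon); candidates: list of (lat, lon) merchant locations
    that carry the logo / name recognized in the target picture."""
    return min(candidates, key=lambda p: math.dist(start, p))
```

The alternative described above, recommending several nearby candidates and letting the user pick, would simply sort `candidates` by the same key and present the top few instead of returning the minimum.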
Step 302, if the first location information includes the second picture content, navigating in the second application based on the second location information of the navigation start location and the location data carried in the second picture content.
On the contrary, if the first location information of the target picture includes the second picture content carrying position data, for example it includes only second picture content carrying position data (such as the address located when the picture was taken) and no first picture content with a position attribute (such as a merchant logo), or it includes both the second picture content carrying position data and first picture content with a position attribute, then the position data carried in the second picture content can be used directly as the position information indicating the navigation destination. Navigation can then be performed in the second application based on the second position information of the navigation start position and the position data carried in the second picture content.
By applying the scheme of this embodiment, a user can trigger a navigation instruction for a target object and obtain automatic navigation in applications other than the navigation application on the electronic device, such as WeChat, QQ, the photo album, mail, or the browser, without manually opening the navigation software or entering a navigation destination. Navigation is thus started automatically across applications; the operation is simple and flexible, the degree of navigation intelligence is effectively improved, and the convenience and flexibility of navigation applications are improved.
Referring to fig. 4, which is a flowchart of a fourth embodiment of the navigation method provided in the present application. The fourth embodiment provides another possible implementation of step 207 (navigating, in the second application, based on the second position information of the navigation start position and the first position information indicating the navigation destination). As shown in fig. 4, step 207 may be implemented by the following processing procedure:
Step 401, if the first position information includes both the first picture content and the second picture content, determining whether the first picture content matches the position data carried by the second picture content.
In practical applications, the first picture content with a position attribute and the position data carried by the second picture content may no longer match in reality, due to the passage of time or inaccurate positioning when the picture was taken. For example, with the passage of time, a restaurant shown in a picture in the user's album that carries a location address may no longer exist or may have moved, so that no such restaurant is found at the position indicated by the address. To improve navigation accuracy in this situation, when the first position information includes both the first picture content and the second picture content, this embodiment first determines whether the first picture content with a position attribute matches the position data carried by the second picture content.
Specifically, it may be determined whether the object indicated by the first picture content (such as a merchant or a restaurant) actually exists at the position indicated by the position data carried by the second picture content.
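This consistency check can be illustrated as follows. The `places_at` map-information lookup and the search radius are assumptions; a real implementation would query whatever map service the second application uses.

```python
# Illustrative check for step 401: does the object recognized in the picture
# (e.g. a merchant name) still exist at the position carried by the picture?
# `places_at` stands in for a map-information lookup returning the names of
# places near a position.

def carried_position_matches(picture_object, carried_position, places_at,
                             radius_m=100):
    """True if a place matching `picture_object` is found within `radius_m`
    of `carried_position` in the map data."""
    return any(name == picture_object
               for name in places_at(carried_position, radius_m))
```

If this returns `True`, the carried position data is taken as the destination (step 402); otherwise the fallbacks of step 403 apply.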
Step 402, if they match, navigating in the second application based on the second position information of the navigation start position and the position data carried in the second picture content.
A match indicates that the position data carried in the second picture content of the target picture is accurate; that is, the object indicated by the first picture content (such as a merchant or a restaurant) really exists at the position indicated by that position data. In this case, the position data carried in the second picture content can be used directly as the information indicating the navigation destination. The electronic device can therefore use the second application to navigate based on the second position information of the navigation start position and the position data carried in the second picture content, and the user is finally navigated from the current position (i.e., the navigation start position) to the position indicated by the position data carried in the second picture content, which is in essence the location of the object (e.g., merchant or restaurant) indicated by the first picture content of the target picture.
Step 403, if they do not match, determining a second position attribute value of the first picture content, where the second position attribute value is position information; and navigating in the second application based on the second position information of the navigation start position and the second position attribute value of the first picture content.
On the contrary, a mismatch indicates that the position data carried in the second picture content of the target picture is inaccurate; that is, the object indicated by the first picture content (such as a merchant or a restaurant) does not actually exist at the position indicated by that position data.
In this case, one possible implementation is to disregard the position data carried in the target picture and determine a second position attribute value of the first picture content that matches the second position information of the navigation start position, where the second position attribute value is position information. Navigation is then performed in the second application based on the second position information of the navigation start position and the second position attribute value of the first picture content.
In this embodiment, as an example, the first picture content, such as a merchant logo, brand identifier, or restaurant name in the target picture, may be preferentially matched to the position information closest to the second position information of the navigation start position; for example, the merchant location closest to the user's current position that carries the merchant logo in the target picture is found and used as the second position attribute value of the target picture. Of course, as another example, several merchant locations around the user's current position that carry the merchant logo in the target picture may be found and recommended to the user, and position information is matched to the target picture based on the user's selection and used as the second position attribute value. Navigation is then performed in the second application based on the second position information of the navigation start position and the second position attribute value of the first picture content.
Another possible implementation is to determine a second position attribute value of the first picture content that matches the position data carried by the second picture content, where the second position attribute value is position information, and then navigate in the second application based on the second position information of the navigation start position and the second position attribute value of the first picture content.
In this embodiment, as an example, the first picture content, such as a merchant logo, brand identifier, or restaurant name in the target picture, may be preferentially matched to the position information closest to the position indicated by the position data carried by the second picture content; for example, the merchant location closest to that position that carries the merchant logo in the target picture is found and used as the second position attribute value of the target picture. (In practice, inaccurate positioning during photographing may cause a deviation between the merchant logo in the target picture and the carried position; in that case, the address closest to the carried address information that matches the picture content can be preferentially selected as the navigation destination.) Of course, as another example, several merchant locations around the position indicated by the carried position data that carry the merchant logo in the target picture may be found and recommended to the user, and position information is matched to the target picture based on the user's selection and used as the second position attribute value. Navigation is then performed in the second application based on the second position information of the navigation start position and the second position attribute value of the first picture content.
Still taking a target picture that includes a merchant logo and carries position data as an example: in practical applications, if the two do not match, the merchant locations around the user's current position that carry the merchant logo and the merchant locations around the position indicated by the carried position data can be found at the same time and recommended to the user together, giving the user more choices. Position information is then matched to the target picture based on the user's selection and used as the second position attribute value, and navigation is finally performed in the second application based on the second position information of the navigation start position and the second position attribute value of the first picture content.
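The combined recommendation just described can be sketched as below. `find_nearby` is a hypothetical map query that returns candidate places around a position; merging and de-duplication are assumptions about how the two candidate lists would be presented.

```python
# Sketch of the combined recommendation: gather merchant locations carrying
# the recognized logo both around the user's current position and around the
# position carried in the picture, de-duplicate, and let the user's selection
# fix the second position attribute value.

def recommend_candidates(current_pos, carried_pos, find_nearby):
    around_user = find_nearby(current_pos)
    around_carried = find_nearby(carried_pos)
    seen, merged = set(), []
    for place in around_user + around_carried:
        if place not in seen:          # keep the first occurrence only
            seen.add(place)
            merged.append(place)
    return merged                       # presented to the user to choose from
```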
For the situation in which the target picture includes both first picture content with a position attribute and second picture content carrying position data, this embodiment first determines whether the two match and then determines the position information for navigation based on the result, instead of navigating directly according to the position data carried in the target picture. This effectively avoids erroneous navigation when the carried position data is inaccurate, and further improves navigation accuracy.
Corresponding to the above navigation method, the present application further provides an electronic device for navigation. The electronic device may be, but is not limited to, a portable mobile terminal such as a smart phone, a tablet computer, or a personal digital assistant, or a wearable smart terminal such as a smart band or a smart watch. The electronic device has a navigation function and includes at least two applications capable of running on it: one application provides the navigation function, and at least one other application may be, but is not limited to, any one or more of an application providing a communication function (e.g., WeChat, QQ, or a mailbox), an application providing a photographing/shooting function (e.g., a camera), an application providing image/video storage and management (e.g., an album), an application providing schedule and memo functions (e.g., a notepad), or an application providing internet access (e.g., a browser), and the like.
Referring to fig. 5, a schematic structural diagram of a fifth embodiment of an electronic device provided in the present application is shown in fig. 5, where the electronic device includes:
a memory 501 for storing at least one set of instructions.
A processor 502 for invoking and executing the set of instructions in the memory, by executing the set of instructions:
obtaining a navigation instruction for a target object in a first application;
obtaining first position information of the target object;
launching and running a second application for navigation, the first application distinct from the second application;
performing, with the second application, a navigation operation with a navigation destination being the location indicated by the first location information.
The first application is different from the second application for navigation, that is, the first application is another application installed on the electronic device, such as but not limited to the application for providing a communication function, the application for providing a photographing/shooting function, or the application for providing an image/video storage and management function, etc. as described above.
The target object may be any one or a combination of text, a picture, or a video in the first application: for example, a picture, short video, or text shared by a friend in a communication application such as WeChat or QQ, a picture, short video, or text from the network, text in a mail or notepad, a picture or short video in the album, or a picture or short video taken by the user in real time with the camera.
In practical applications, a navigation instruction for a target object required in a first application of the electronic device may be triggered by a user by performing a corresponding operation on the first application.
The main objective of the present application is to start navigation automatically and directly based on the target object in the first application (unlike the prior art, in which the user must manually open the navigation software and enter destination information in the corresponding input box of the software interface), and therefore the target object needs to carry corresponding location information.
The target object carrying corresponding location information may mean that the target object carries data content that directly embodies its location information: for example, a picture or short video (shared by a friend or from the user's album) directly carries the address information located by the friend/user when it was taken, or a text contains content describing address information. It may also mean that the target object carries data content that indirectly represents location information: for example, a picture or short video contains a merchant logo pattern or a restaurant name, which can be matched to corresponding location information based on map information and therefore serves as data content that indirectly represents location information.
Certainly, in practical applications, the target object may carry both kinds of content; for example, an image in the album may contain a merchant logo pattern and also carry the location information located when the user took it.
In view of this, the first location information corresponding to the target object may be determined by identifying the data content that directly represents its location information and/or the data content that indirectly represents it. For example, the address information carried in a picture (the address located when the picture was taken) is identified and extracted as the first location information of the picture, or a merchant logo pattern or restaurant name contained in the picture is identified and matched to location information based on map information.
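The direct/indirect distinction above can be sketched as follows. The two helper functions are stand-ins for real OCR and image-recognition components, and the dict-based "picture" is a test fixture, not an actual image format.

```python
# Hedged sketch of determining the first location information: collect data
# that directly embodies a location (an embedded shooting address) and data
# that indirectly represents one (a logo / restaurant name to be matched via
# map information later). Both helpers are assumed stand-ins.

def extract_address_text(picture):
    # Stand-in for OCR of an embedded geotag; returns None if absent.
    return picture.get("geotag")

def recognize_landmarks(picture):
    # Stand-in for logo / restaurant-name recognition in the picture.
    return picture.get("landmarks", [])

def first_location_info(picture):
    info = {}
    address = extract_address_text(picture)      # direct location content
    landmarks = recognize_landmarks(picture)     # indirect location content
    if address:
        info["carried_position"] = address
    if landmarks:
        info["picture_content"] = landmarks
    return info
```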
On the basis of the processing, the electronic equipment responds to a navigation instruction triggered by a target object in the first application, and automatically starts and runs the second application for navigation.
And in the second application which is automatically started and operated, the position indicated by the first position information of the target object is used as a navigation destination to navigate based on the map information.
For example, if a WeChat friend of the user shares a restaurant picture with a located address, then after the user performs the predetermined operation in WeChat to trigger a navigation instruction for the picture, the electronic device can identify and extract the address information in the picture, automatically start and run the second application for navigation, and then use the second application to navigate the user with the extracted address as the destination. From the user's perspective, automatic navigation is achieved by directly performing a navigation operation on the target object in the current application, without manually opening the navigation software or entering destination information; navigation is started automatically across applications, and the operation is simple and flexible.
According to the above solutions, the electronic device provided in this embodiment can obtain a navigation instruction for a target object in a first application (different from the second application used for navigation), and can respond to the instruction by obtaining the first location information of the target object, starting and running the second application, and then using the second application to perform a navigation operation with the location indicated by the first location information as the navigation destination. A user can therefore trigger a navigation instruction for a target object in applications other than the navigation application on the electronic device and obtain automatic navigation, without manually opening the navigation software or entering a navigation destination. Navigation is started automatically across applications, the operation is simple and flexible, the degree of navigation intelligence is effectively improved, and the convenience and flexibility of the navigation application are improved.
In the following sixth embodiment, the processing functions of the processor 502 in the electronic device are described in further detail. In this embodiment, the processor 502 may implement navigation by executing the following processes:
detecting operation information when an operation body performs a predetermined operation on a target object in the first application; the predetermined operation is used for triggering navigation;
generating a navigation instruction based on the operation information;
if the target object is a target picture in the first application, identifying and extracting first picture content with position attributes and/or second picture content carrying position data in the target picture based on an Optical Character Recognition (OCR) technology; determining the first picture content and/or the second picture content as first position information of the target picture;
launching and running a second application for navigation, the first application distinct from the second application;
submitting the first location information to the second application;
obtaining second position information of a navigation starting position based on a positioning technology in the second application;
and navigating in the second application based on the second position information of the navigation starting position and the first position information used for indicating a navigation destination.
In practical applications, a user may trigger a navigation instruction for a target object required in a first application of the electronic device by performing a corresponding operation on the first application using an operation body such as a finger or a stylus pen of the user.
The predetermined operation (for triggering a navigation instruction for the target object) performed by the user on the first application with the operation body may include, but is not limited to, performing a predetermined gesture on the target object (such as text, a picture, or a video) displayed on the application interface of the first application, by which navigation for the target object is triggered directly. Examples are a single-finger or two-finger press on the target object whose duration or pressure exceeds a preset threshold, a triple-click on the target object, a slide gesture on the target object, or even an air gesture with a predetermined gesture characteristic that does not touch the target object displayed on the interface. Alternatively, an operation-item selection menu for the target object can be called up by a certain operation. The menu provides various selectable operation items for the target object, among which a navigation item is preset; for example, the menu may contain the options copy, cut, send, navigate, and add emoticon, so that after the menu is called up, a navigation instruction for the target object can be triggered by selecting the navigation item.
When the user performs one of the above operations on the target object in the first application with the operation body, the electronic device can detect the operation information with the relevant sensing device and a corresponding detection technology, for the generation of a navigation instruction. For example, a pressure sensor detects the pressing pressure when the user presses the target object; a touch sensor combined with a screen positioning technology detects that the user has called up the operation-item selection menu for the target object and selected a certain item (such as "navigate"); and a camera collects gesture information when the user performs an air gesture toward the target object displayed on the screen.
After detecting the operation information of the predetermined operation performed on the target object in the first application, the electronic device determines whether the operation information is intended to trigger navigation: for example, the pressure on the target object exceeds the preset threshold, the user's air gesture is recognized as the predetermined navigation gesture based on a gesture matching technology, or the user is recognized to have selected the "navigate" item in the called-up operation menu. If so, the electronic device generates a navigation instruction based on the detected operation information.
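The trigger decision just described can be illustrated with a small dispatcher. The thresholds, field names, and gesture label are all assumptions for illustration; real sensing APIs would supply the fields of `op`.

```python
# Illustrative decision logic for turning detected operation information into
# a navigation instruction. Thresholds and field names are assumed values.

PRESS_PRESSURE_THRESHOLD = 3.0   # assumed pressure units
PRESS_DURATION_THRESHOLD = 0.8   # seconds

def should_trigger_navigation(op):
    """op: dict of detected operation information from the sensing devices."""
    if op.get("type") == "press":
        # Long press or hard press on the target object triggers navigation.
        return (op.get("pressure", 0) > PRESS_PRESSURE_THRESHOLD
                or op.get("duration", 0) > PRESS_DURATION_THRESHOLD)
    if op.get("type") == "air_gesture":
        # Gesture matching has already labeled the recognized gesture.
        return op.get("matched_gesture") == "navigate"
    if op.get("type") == "menu_select":
        # User picked the preset navigation item from the operation menu.
        return op.get("item") == "navigate"
    return False
```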
The main objective of the present application is to start navigation automatically and directly based on a target object in a first application (unlike the prior art, in which the user must manually open the navigation software and enter destination information in the corresponding input box of the software interface), and therefore the target object needs to carry corresponding location information.
As mentioned above, the target object is any one or combination of text, picture or video in the first application. If the target object is a target picture in the first application, such as a picture shared by friends in communication applications such as WeChat and QQ, or a picture in a user album, the first picture content with the position attribute in the target picture and/or the second picture content with the position data can be identified and extracted based on an Optical Character Recognition (OCR) technology.
The first picture content with a position attribute in the target picture may be, for example, a mall logo pattern, a restaurant name, or a brand identification pattern in the picture. After such information is identified, corresponding position information can be matched to the target picture based on map information during subsequent navigation. This information does not record address data directly, but it has a position attribute through which the position information corresponding to the target picture can be represented indirectly; information such as a mall logo pattern, restaurant name, or brand identification pattern in the target picture can therefore be used as the first position information of the target picture.
The second picture content carrying position data in the target picture may be, but is not limited to, the address information located when the picture was taken. For example, a picture shared by a friend or a picture in the user's album may have had the shooting address located and embedded when it was taken (generally in text form), so the text describing the address information can be identified directly with optical character recognition (OCR) and used as the first position information of the target picture.
For other types of target object, such as text, the address description information in the text can be identified and extracted directly to determine the corresponding first position information. For a short video, the video can be split into frames; for each frame, the first picture content with a position attribute and/or the second picture content carrying position data can be identified with the optical character recognition technology, and the corresponding first position information is then determined.
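The frame-by-frame handling of a short video can be sketched as follows. `ocr_frame` is a stand-in for a real OCR call, and the dict-based frames are test fixtures; the early-exit-on-first-hit strategy is an illustrative assumption, not the patent's stated behavior.

```python
# Sketch of per-frame handling for a short video: split into frames and run
# the same OCR-based extraction on each frame until location content is found.

def ocr_frame(frame):
    # Stand-in: pretend OCR returns any text baked into the frame fixture.
    return frame.get("text", "")

def location_info_from_video(frames):
    for frame in frames:               # scan frame by frame
        text = ocr_frame(frame)
        if text:                       # first frame with recognizable content
            return text
    return None                        # no location content in any frame
```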
On the basis of the processing, the electronic equipment responds to a navigation instruction triggered by a target object in the first application, and automatically starts and runs the second application for navigation.
After the second application is started and running, the first position information of the target picture is submitted to it. The first position information is the first picture content with a position attribute in the target picture (such as a merchant logo or a restaurant name) and/or the position data carried by the second picture content of the target picture (such as the address information located when the picture was taken). The second application can then determine a navigation destination from this information: for example, it can match position information to the merchant logo or restaurant name in the picture based on map information and use that as the destination position, or it can determine the destination position directly from the address information located when the picture was taken.
Since navigation requires both a start position and a destination position, the electronic device also needs to obtain second position information of the navigation start position based on a positioning technology in the second application.
Having obtained the second position information of the navigation start position and determined the navigation destination from the first position information of the target object, the electronic device can use the second application to navigate the user to the position indicated by the first position information of the target object.
By applying this scheme, a user can trigger a navigation instruction for a target object in an application other than the navigation application on the electronic device, such as WeChat, QQ, the photo album, mail, or the browser, and obtain automatic navigation without manually opening the navigation software and entering a navigation destination. Navigation is thus started automatically across applications; the operation is simple and flexible, the degree of navigation intelligence is effectively improved, and the convenience and flexibility of the navigation application are improved.
In another embodiment, embodiment seven, a possible implementation is provided for the operation in which the processor 502 "navigates in the second application based on the second position information of the navigation start position and the first position information indicating the navigation destination". In this embodiment, the processor may implement the operation through the following processing:
if the first position information only comprises the first picture content, determining a first position attribute value of the first picture content that matches the second position information of the navigation start position, wherein the first position attribute value is position information; and navigating in the second application based on the second position information of the navigation start position and the first position attribute value of the first picture content;
and if the first position information comprises the second picture content, navigating in the second application based on the second position information of the navigation start position and the position data carried in the second picture content.
If the first location information of the target picture only includes first picture content with a location attribute (for example, a picture shared by a friend, a picture in the user's album, or a picture from the network contains a merchant logo, a brand mark, or a restaurant name, but carries no location data such as an address located when it was taken), a first location attribute value of the first picture content that matches the second location information of the navigation start location can be determined, the first location attribute value being location information.
As an example, the first picture content, such as the merchant logo, brand mark, or restaurant name in the target picture, may preferentially be matched with the piece of position information that is closest to the second position information of the navigation start position and matches the first picture content; for instance, the merchant location closest to the user's current position that bears the merchant logo in the target picture is found and used as the first position attribute value of the target picture. As another example, several merchant locations around the user's current position that bear the merchant logo in the target picture may be found and recommended to the user, and a piece of position information (i.e., the first position attribute value) matched to the target picture based on the user's selection. These cases are only exemplary descriptions provided by the present application, and the specific implementation is not limited thereto.
On this basis, navigation can be performed in the second application based on the second position information of the navigation start position and the first position attribute value of the first picture content.
Conversely, if the first location information of the target picture includes second picture content carrying location data (whether it includes only the second picture content, such as the address located when shooting, or includes both the second picture content and first picture content with a location attribute such as a merchant logo), the location data carried in the second picture content can be used directly as the information indicating the navigation destination, and navigation can then be performed in the second application based on the second location information of the navigation start location and the location data carried in the second picture content.
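The two branches of embodiment seven can be sketched as follows. This is a simplified illustration: the dictionary keys and the `nearest_match` map-lookup callback are assumptions, not the patent's API.

```python
def resolve_destination(first_info, start_pos, nearest_match):
    """Pick the navigation destination per embodiment seven.

    first_info: may contain 'content' (first picture content such as a merchant
    logo) and/or 'geo' (position data carried by second picture content).
    nearest_match(content, ref): returns the matching location closest to ref,
    standing in for a map-information query.
    """
    if first_info.get("geo") is not None:
        # Second picture content present: its carried position data is used directly.
        return first_info["geo"]
    # Only first picture content: match the location nearest the start position.
    return nearest_match(first_info["content"], start_pos)
```

For example, with a map lookup that returns the closest of several "CoffeeCo" branches, a picture carrying a geo-tag navigates to the tag, while a logo-only picture navigates to the nearest branch.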
By applying the scheme of this embodiment, a user can trigger a navigation instruction for a target object in an application other than the navigation application on the electronic device, such as WeChat, QQ, the photo album, mail, or the browser, and obtain automatic navigation without manually opening the navigation software and entering a navigation destination. Navigation is thus started automatically across applications; the operation is simple and flexible, the degree of navigation intelligence is effectively improved, and the convenience and flexibility of the navigation application are improved.
In yet another embodiment, embodiment eight, another possible implementation is provided for the operation in which the processor 502 "navigates in the second application based on the second position information of the navigation start position and the first position information indicating the navigation destination". In this embodiment, the processor may implement the operation through the following processing:
if the first position information comprises both the first picture content and the second picture content, judging whether the first picture content matches the position data carried by the second picture content;
if they match, navigating in the second application based on the second position information of the navigation start position and the position data carried in the second picture content;
if they do not match, determining a second position attribute value of the first picture content that matches the second position information of the navigation start position or the position data carried by the second picture content, wherein the second position attribute value is position information; and navigating in the second application based on the second position information of the navigation start position and the second position attribute value of the first picture content.
In practical applications, the first picture content with a location attribute included in a picture and the location data carried by its second picture content may fail to match in reality, because of the passage of time or inaccurate positioning at the time of shooting. For example, over time a restaurant in a restaurant picture in the user's album that carries a location address may have closed or moved, so that no such restaurant exists at the location indicated by that address. To improve navigation accuracy in this situation, in this embodiment, when the first location information includes both the first picture content and the second picture content, it is first judged whether the first picture content with a location attribute of the target picture matches the location data carried by its second picture content.
Specifically, it may be judged whether the object indicated by the first picture content (such as a merchant or a restaurant) actually exists at the position indicated by the position data carried by the second picture content.
If they match, the position data carried in the second picture content of the target picture is accurate; that is, the object indicated by the first picture content (such as a merchant or a restaurant) really exists at the position indicated by that position data. In this case, the position data carried in the second picture content can be used directly as the information indicating the navigation destination. The electronic device can then use the second application to navigate based on the second position information of the navigation start position and the position data carried in the second picture content, finally guiding the user from the current position (i.e., the navigation start position) to the position indicated by the position data carried in the second picture content of the target picture, which is essentially the location of the object (e.g., the merchant or restaurant) indicated by the first picture content of the target picture.
Conversely, if they do not match, the position data carried in the second picture content of the target picture is inaccurate; that is, the object indicated by the first picture content (such as a merchant or a restaurant) does not actually exist at the position indicated by the position data carried in the second picture content of the target picture.
In this case, one possible implementation is to disregard the position data carried in the target picture and determine a second position attribute value of the first picture content that matches the second position information of the navigation start position, the second position attribute value being position information, and then navigate in the second application based on the second position information of the navigation start position and the second position attribute value of the first picture content.
In this embodiment, as an example, first picture content such as the merchant logo, brand mark, or restaurant name in the target picture may preferentially be matched with the piece of position information that is closest to the second position information of the navigation start position and matches the first picture content; for instance, the merchant location closest to the user's current position that bears the merchant logo in the target picture is found and used as the second position attribute value of the target picture. Of course, as another example, several merchant locations around the user's current position that bear the merchant logo in the target picture may also be found and recommended to the user, and a piece of position information matched to the target picture as the second position attribute value based on the user's selection; navigation is then performed in the second application based on the second position information of the navigation start position and the second position attribute value of the first picture content.
Another possible implementation is to determine a second position attribute value of the first picture content that matches the position data carried by the second picture content, the second position attribute value being position information, and then navigate in the second application based on the second position information of the navigation start position and the second position attribute value of the first picture content.
In this embodiment, as an example, first picture content such as the merchant logo, brand mark, or restaurant name in the target picture may preferentially be matched with the piece of location information that is closest to the location indicated by the location data carried by the second picture content and matches the first picture content; for instance, the merchant location closest to the location indicated by the carried location data that bears the merchant logo in the target picture is found and used as the second location attribute value of the target picture. (In practice, the merchant logo in the target picture and the carried location may deviate because of inaccurate positioning at the time of shooting; in that case, the address closest to the carried address information that matches the picture content may preferentially be selected as the navigation destination.) Of course, as another example, several merchant locations around the location indicated by the carried location data that bear the merchant logo in the target picture may also be found and recommended to the user, and a piece of location information matched to the target picture as the second location attribute value based on the user's selection; navigation is then performed in the second application based on the second location information of the navigation start location and the second location attribute value of the first picture content.
Still taking a target picture that includes a merchant logo and carries position data as an example, in practical applications, if the two do not match, the merchant locations around the user's current position that bear the logo and the merchant locations around the position indicated by the carried position data that bear the logo can both be found and recommended to the user at the same time, giving the user more options. A piece of position information is then matched to the target picture as the second position attribute value based on the user's selection, and navigation is finally performed in the second application based on the second position information of the navigation start position and the second position attribute value of the first picture content.
For the case in which the target picture includes both first picture content with a position attribute and second picture content carrying position data, this embodiment first judges whether the first picture content matches the position data carried by the second picture content, and only then determines the position information used for navigation from the judgment result, instead of navigating directly according to the position data carried in the target picture. Erroneous navigation when the carried position data is inaccurate can thus be effectively avoided, further improving navigation accuracy.
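The match check and fallback of embodiment eight might be sketched like this. The `exists_at` and `nearest_match` callbacks stand in for map-service queries and are assumptions, as is the choice of the start position as the matching reference (matching against the carried position data is the other variant described above).

```python
def destination_with_check(content, geo, start_pos, exists_at, nearest_match):
    """Embodiment eight: verify carried position data before trusting it.

    content: first picture content (e.g. a merchant logo); geo: position data
    carried by second picture content; exists_at(content, geo) checks whether
    the indicated object really exists at geo; nearest_match(content, ref)
    finds the closest matching location to a reference position.
    """
    if exists_at(content, geo):
        return geo  # carried position data is accurate: navigate straight to it
    # Mismatch: determine a second position attribute value matched against
    # the navigation start position instead.
    return nearest_match(content, start_pos)
```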
Corresponding to the above navigation method, the present application further provides another electronic device for navigation. The electronic device may be, but is not limited to, a portable mobile terminal such as a smartphone, a tablet computer, or a personal digital assistant, or a wearable smart terminal such as a smart band or a smart watch. The electronic device has a navigation function and includes at least two applications capable of running on it: one application provides the navigation function, and at least one other application may be, but is not limited to, any one or more of an application providing a communication function (e.g., WeChat, QQ, or a mailbox), an application providing a photographing/shooting function (e.g., a camera), an application providing image/video storage and management (e.g., an album), an application providing scheduling and memo functions (e.g., a notepad), an application providing internet access (e.g., a browser), and the like.
Referring to fig. 6, which is a schematic structural diagram of a ninth embodiment of the electronic device provided in the present application, as shown in fig. 6 the electronic device includes:
a first obtaining unit 601, configured to obtain a navigation instruction for a target object in a first application.
The first application is different from the second application used for navigation; that is, the first application is another application installed on the electronic device, such as, but not limited to, the application providing a communication function, the application providing a photographing/shooting function, or the application providing image/video storage and management described above.
The target object may be any one or combination of text, a picture, or a video in the first application, for example a picture, short video, or text shared by a friend in a communication application such as WeChat or QQ, a picture, short video, or text from the network, text in a mail or notepad, a picture or short video in the album, or a picture or short video taken by the user in real time with the camera.
In practical applications, the navigation instruction for the target object in the first application of the electronic device may be triggered by the user performing a corresponding operation on the first application.
A second obtaining unit 602, configured to obtain first position information of the target object.
The main objective of the present application is to start navigation automatically and directly from the target object in the first application (unlike the prior art, in which the user must manually open the navigation software and enter destination information into the corresponding input box of the software interface for navigation); the target object therefore needs to carry corresponding location information.
The target object carrying corresponding location information may specifically mean that the target object carries data content that directly embodies its location information: for example, a picture or short video (whether shared by a friend or from the user's album) directly carries the address information located by the friend/user when it was taken, or a text contains content describing address information. Alternatively, it may mean that the target object carries data content that indirectly represents its location information: for example, a picture or short video includes a merchant logo pattern or a restaurant name, which can be matched with corresponding location information based on map information and can therefore serve as data content that indirectly represents location information.
Of course, in practical applications the target object may carry both kinds of content; for example, an image in the album may include not only a merchant logo pattern but also the position information located when the user shot it.
In view of this, the first location information corresponding to the target object may be determined by recognizing the data content that directly represents its location and/or the data content that indirectly represents it. For example, the address information carried in the picture (the address located when it was taken) is recognized and extracted as the first location information of the picture, or the merchant logo pattern or restaurant name included in the picture is recognized and matched with location information based on map information.
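The direct/indirect distinction above can be sketched as follows. The field names and the `match_on_map` lookup are illustrative assumptions, not part of the patent.

```python
def first_position_info(picture, match_on_map):
    """Determine first position information for a picture.

    'geo_tag' models directly embodied location content (the address located
    at shooting time); 'landmark' models indirect content (a merchant logo or
    restaurant name) resolved through map information via match_on_map.
    """
    if picture.get("geo_tag"):
        return picture["geo_tag"]  # direct location content is used as-is
    if picture.get("landmark"):
        return match_on_map(picture["landmark"])  # indirect content via map lookup
    return None  # nothing usable as a navigation destination
```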
A starting unit 603 configured to start and run a second application for navigation, where the first application is different from the second application.
On the basis of the above processing, the electronic device responds to the navigation instruction triggered for the target object in the first application by automatically starting and running the second application for navigation.
A navigation unit 604, configured to perform a navigation operation with the second application, the navigation destination of which is the location indicated by the first location information.
In the automatically started and running second application, navigation is performed based on map information with the position indicated by the first position information of the target object as the navigation destination.
For example, if a WeChat friend of the user shares a restaurant picture that carries a positioning address, then after the user performs the predetermined operation in WeChat to trigger a navigation instruction for the picture, the electronic device can recognize and extract the address information in the picture, automatically start and run the second application for navigation, and then use the second application to navigate the user with the extracted address as the destination. From the user's perspective, automatic navigation is achieved by directly performing a navigation operation on a target object in the current application, without manually opening and entering navigation software or entering destination position information; navigation is started automatically across applications, and the operation is simple and flexible.
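The four units of fig. 6 can be wired together end to end as in the following sketch. The class, the stub navigation application, and all method names are assumptions for illustration, not the patent's implementation.

```python
class NavigationDevice:
    """Minimal sketch of the four units: instruction, location, launch, navigate."""

    def __init__(self, extract_location, nav_app):
        self.extract_location = extract_location  # plays the second obtaining unit
        self.nav_app = nav_app                    # the second application (navigation)

    def on_navigation_instruction(self, target_object):
        # The first obtaining unit has received the instruction for target_object.
        first_position = self.extract_location(target_object)  # second obtaining unit
        self.nav_app.launch()                                   # starting unit
        return self.nav_app.navigate_to(first_position)         # navigation unit

class StubNavApp:
    """Stand-in for a real navigation application."""
    def __init__(self):
        self.launched = False
    def launch(self):
        self.launched = True
    def navigate_to(self, destination):
        return ("navigating to", destination)
```

A triggered instruction on a geo-tagged picture then flows through all four units without the user ever opening the navigation application manually.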
According to the above, the electronic device provided in this embodiment can obtain a navigation instruction for a target object in a first application (different from the second application used for navigation) and respond to it by obtaining first location information of the target object, starting and running the second application for navigation, and then using the second application to perform a navigation operation with the position indicated by the first location information as the navigation destination, thereby navigating for the target object in the first application. By applying this solution, a user can trigger a navigation instruction for a target object in an application other than the navigation application on the electronic device and obtain automatic navigation without manually opening the navigation software and entering a navigation destination; navigation is started automatically across applications, the operation is simple and flexible, the degree of navigation intelligence is effectively improved, and the convenience and flexibility of the navigation application are improved.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
For convenience of description, the above system or apparatus has been described as divided into various modules or units by function. Of course, when implementing the present application, the functionality of the units may be implemented in one or more pieces of software and/or hardware.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be embodied, essentially or in part, in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method according to the embodiments, or parts of the embodiments, of the present application.
Finally, it is further noted that, herein, relational terms such as first, second, third, fourth, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (9)

1. A navigation method, comprising:
obtaining a navigation instruction for a target object in a first application;
obtaining first position information of the target object;
launching and running a second application for navigation, the first application distinct from the second application;
performing, with the second application, a navigation operation with a navigation destination being the location indicated by the first location information, including:
if the first position information comprises first picture content with position attributes in a target picture and second picture content carrying position data, judging whether the first picture content is matched with the position data carried by the second picture content; the judging whether the position data carried by the first picture content and the second picture content are matched comprises: judging whether an object indicated by the first picture content is provided at a position indicated by the position data carried by the second picture content;
if not, determining a second position attribute value of the first picture content that matches second position information of a navigation start position or position data carried by the second picture content, wherein the second position attribute value is position information; and navigating in the second application based on the second position information of the navigation start position and the second position attribute value of the first picture content.
2. The method of claim 1, wherein obtaining navigation instructions for a target object in a first application comprises:
detecting operation information when an operation body performs a predetermined operation on a target object in the first application; the predetermined operation is used for triggering navigation;
and generating a navigation instruction based on the operation information.
3. The method according to claim 1, wherein the target object is any one or combination of text, picture or video in the first application;
if the target object is a target picture in the first application, the obtaining of the first position information of the target object includes:
recognizing and extracting first picture content with position attributes and/or second picture content carrying position data in the target picture based on an Optical Character Recognition (OCR) technology;
and determining the first picture content and/or the second picture content as first position information of the target picture.
4. The method of claim 3, wherein performing, with the second application, a navigation operation with a navigation destination that is a location indicated by the first location information comprises:
submitting the first location information to the second application;
obtaining second position information of a navigation starting position based on a positioning technology in the second application;
and navigating in the second application based on the second position information of the navigation starting position and the first position information used for indicating a navigation destination.
5. The method of claim 4, further comprising:
and if the first picture content is matched with the position data carried by the second picture content, navigating in the second application based on the second position information of the navigation initial position and the position data carried by the second picture content.
6. An electronic device comprising at least two applications capable of running on the electronic device, further comprising:
a memory for storing at least one set of instructions;
a processor for invoking and executing the set of instructions in the memory, by executing the set of instructions:
obtaining a navigation instruction for a target object in a first application;
obtaining first position information of the target object;
launching and running a second application for navigation, the first application distinct from the second application;
performing, with the second application, a navigation operation whose navigation destination is the location indicated by the first location information;
wherein the processor performs a navigation operation with the second application, the navigation operation having a navigation destination that is a location indicated by the first location information, including:
if the first position information comprises first picture content with position attributes in a target picture and second picture content carrying position data, judging whether the first picture content is matched with the position data carried by the second picture content; the judging whether the position data carried by the first picture content and the second picture content are matched comprises: judging whether an object indicated by the first picture content is provided at a position indicated by the position data carried by the second picture content;
if not, determining a second position attribute value of the first picture content that matches second position information of a navigation start position or position data carried by the second picture content, wherein the second position attribute value is position information; and navigating in the second application based on the second position information of the navigation start position and the second position attribute value of the first picture content.
7. The electronic device of claim 6, wherein the target object is any one of, or a combination of, text, a picture, and a video in the first application;
if the target object is a target picture in the first application, the processor obtaining first position information of the target object specifically comprises:
identifying and extracting, based on OCR technology, first picture content having a position attribute and/or second picture content carrying position data in the target picture;
and determining the first picture content and/or the second picture content as the first position information of the target picture.
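The extraction step of claim 7 can be sketched as follows. Actual OCR (an on-device recognition engine) is out of scope here, so the sketch assumes `ocr_text` is the text already recognized from the target picture; the address and coordinate patterns are illustrative assumptions, not rules from the patent.

```python
import re

# "Second picture content" = explicit position data (address or coordinates);
# "first picture content" = residual text naming a locatable object.
ADDRESS_RE = re.compile(r"No\.\s*\d+[^,\n]*(?:St|Rd|Ave)")
COORD_RE = re.compile(r"(-?\d{1,3}\.\d+)\s*,\s*(-?\d{1,3}\.\d+)")

def extract_position_info(ocr_text):
    """Split OCR output into explicit position data and residual named content."""
    position_data = ADDRESS_RE.findall(ocr_text) + [
        f"{lat},{lon}" for lat, lon in COORD_RE.findall(ocr_text)
    ]
    # Whatever is not explicit position data is treated as first picture content.
    residual = ADDRESS_RE.sub("", COORD_RE.sub("", ocr_text)).strip(" ,\n")
    return {"first_content": residual, "second_content": position_data}

info = extract_position_info("Coffee House, No. 9 Haidian Rd")
```

Both kinds of content may be present at once, which is exactly the case claim 6's matching step handles.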
8. The electronic device of claim 7, wherein the processor performing, with the second application, the navigation operation whose navigation destination is the position indicated by the first position information specifically comprises:
submitting the first position information to the second application;
obtaining second position information of a navigation start position based on a positioning technology in the second application;
and navigating in the second application based on the second position information of the navigation start position and the first position information indicating the navigation destination.
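The hand-off in claim 8 (submitting the first position information to the second application) can be sketched with a standard geo URI. On Android this would typically be an ACTION_VIEW intent resolved by whichever navigation app handles the `geo:` scheme; the launch mechanism itself is platform-specific and assumed here, only the URI construction is shown.

```python
from urllib.parse import quote

def build_navigation_uri(destination, lat=None, lon=None):
    """Build a geo: URI the second (navigation) application can consume."""
    if lat is not None and lon is not None:
        return f"geo:{lat},{lon}?q={quote(destination)}"
    # No coordinates: let the navigation app geocode the free-text destination.
    return f"geo:0,0?q={quote(destination)}"

uri = build_navigation_uri("No. 9 Haidian Rd")
# The navigation app then pairs this destination with its own positioning fix
# (the navigation start position of claim 8) and begins routing.
```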
9. An electronic device comprising at least two applications capable of running on the electronic device, further comprising:
a first acquisition unit configured to acquire a navigation instruction for a target object in a first application;
a second acquisition unit configured to acquire first position information of the target object;
a starting unit configured to launch and run a second application for navigation, the first application being distinct from the second application;
a navigation unit configured to perform, with the second application, a navigation operation whose navigation destination is the position indicated by the first position information;
wherein the navigation unit performing, with the second application, the navigation operation whose navigation destination is the position indicated by the first position information comprises:
if the first position information comprises first picture content having a position attribute in a target picture and second picture content carrying position data, judging whether the first picture content matches the position data carried by the second picture content; wherein judging whether the first picture content matches the position data carried by the second picture content comprises: judging whether an object indicated by the first picture content exists at the position indicated by the position data carried by the second picture content;
if not, determining a second position attribute value of the first picture content that matches second position information of a navigation start position or the position data carried by the second picture content, the second position attribute value being position information; and navigating in the second application based on the second position information of the navigation start position and the second position attribute value of the first picture content.
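Claim 9's unit decomposition (first acquisition unit, second acquisition unit, starting unit, navigation unit) can be sketched as a pipeline of small objects. The unit names follow the claim; the wiring, event shape, and stub return values are assumptions for illustration only.

```python
class FirstAcquisitionUnit:
    def acquire_instruction(self, event):
        # e.g. a long-press gesture on a picture in a chat application
        return {"target": event["target"], "app": event["app"]}

class SecondAcquisitionUnit:
    def acquire_position(self, target):
        # placeholder: real code would run OCR / text parsing on the target
        return target.get("position", "unknown")

class StartingUnit:
    def launch(self, nav_app_id):
        # placeholder for launching and running the second application
        return f"launched:{nav_app_id}"

class NavigationUnit:
    def navigate(self, nav_app, destination):
        return f"{nav_app} -> routing to {destination}"

def handle_navigation_gesture(event, nav_app_id="maps"):
    """Run the four units in claim order: instruction, position, launch, navigate."""
    instr = FirstAcquisitionUnit().acquire_instruction(event)
    dest = SecondAcquisitionUnit().acquire_position(instr["target"])
    nav = StartingUnit().launch(nav_app_id)
    return NavigationUnit().navigate(nav, dest)
```

The point of the decomposition is that the first application never talks to the navigation app directly; each unit handles one stage of the cross-application hand-off.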
CN201811109880.0A 2018-09-21 2018-09-21 Navigation method and electronic equipment Active CN109084750B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811109880.0A CN109084750B (en) 2018-09-21 2018-09-21 Navigation method and electronic equipment


Publications (2)

Publication Number Publication Date
CN109084750A CN109084750A (en) 2018-12-25
CN109084750B true CN109084750B (en) 2021-07-16

Family

ID=64842418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811109880.0A Active CN109084750B (en) 2018-09-21 2018-09-21 Navigation method and electronic equipment

Country Status (1)

Country Link
CN (1) CN109084750B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112231042B (en) * 2020-12-17 2021-10-26 智道网联科技(北京)有限公司 Interaction method and device based on navigation information, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101319900A (en) * 2007-06-08 2008-12-10 杨爱国 Method for implementing photograph-based navigation on a mobile phone
CN101482420A (en) * 2008-12-19 2009-07-15 Shenzhen Coship Electronics Co., Ltd. Intelligent navigation apparatus, navigation terminal and information navigation method thereof
CN101782923A (en) * 2009-01-15 2010-07-21 Robert Bosch GmbH Location-based system utilizing geographical information from documents in natural language
CN101896952A (en) * 2007-12-13 2010-11-24 Garmin Ltd. Automatically identifying location information in text data
CN106679665A (en) * 2016-12-13 2017-05-17 Tencent Technology (Shenzhen) Co., Ltd. Route planning method and route planning device
CN108076437A (en) * 2018-01-01 2018-05-25 刘兴丹 Method and apparatus for map software combining pictures, location information and motion tracks

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101807251A (en) * 2009-02-12 2010-08-18 Inventec (Shanghai) Technology Co., Ltd. Handheld electronic device
CN108020225A (en) * 2016-10-28 2018-05-11 大辅科技(北京)有限公司 Map system and navigation method based on image recognition
CN108469266A (en) * 2018-03-26 2018-08-31 Lenovo (Beijing) Ltd. Navigation method, device and system



Similar Documents

Publication Publication Date Title
US11460983B2 (en) Method of processing content and electronic device thereof
US11582176B2 (en) Context sensitive avatar captions
CN109189879B (en) Electronic book display method and device
US9239961B1 (en) Text recognition near an edge
WO2019174398A1 (en) Method, apparatus, and terminal for simulating mouse operation by using gesture
CN117473127A (en) Computer-implemented method, system, and non-transitory computer storage medium
CN112236767A (en) Electronic device and method for providing information related to an image to an application through an input unit
CN111506758A (en) Method and device for determining article name, computer equipment and storage medium
CN105893613B (en) image identification information searching method and device
US9147109B2 (en) Method for adding business card information into contact list
CN108256071B (en) Method and device for generating screen recording file, terminal and storage medium
KR20180121273A (en) Method for outputting content corresponding to object and electronic device thereof
CN108898649A (en) Image processing method and device
CN109669710B (en) Note processing method and terminal
CN113869063A (en) Data recommendation method and device, electronic equipment and storage medium
CN109084750B (en) Navigation method and electronic equipment
US11544921B1 (en) Augmented reality items based on scan
US11250091B2 (en) System and method for extracting information and retrieving contact information using the same
EP4060521A1 (en) Method for providing tag, and electronic device for supporting same
CN111027353A (en) Search content extraction method and electronic equipment
KR20150097250A (en) Sketch retrieval system using tag information, user equipment, service equipment, service method and computer readable medium having computer program recorded therefor
CN109766052B (en) Dish picture uploading method and device, computer equipment and readable storage medium
CN111079727A (en) Point reading control method and electronic equipment
CN111652182B (en) Method and device for identifying suspension gesture, electronic equipment and storage medium
CN111418196A (en) Mobile communication terminal and communication method based on face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant