CN114090818A - Navigation method and device and electronic equipment - Google Patents

Navigation method and device and electronic equipment

Info

Publication number
CN114090818A
CN114090818A
Authority
CN
China
Prior art keywords
information
target
navigation
input
navigation path
Prior art date
Legal status
Pending
Application number
CN202111373030.3A
Other languages
Chinese (zh)
Inventor
脱召兵
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202111373030.3A
Publication of CN114090818A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 - Information retrieval of still image data
    • G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/587 - Retrieval of still image data using metadata with geographical or spatial information, e.g. location
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval of structured data, e.g. relational data
    • G06F 16/29 - Geographical information databases

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Remote Sensing (AREA)
  • Navigation (AREA)

Abstract

The application discloses a navigation method, a navigation device and an electronic device, and belongs to the technical field of communication. The method comprises the following steps: acquiring a first image, wherein the first image comprises a geographic location identifier; receiving a first input of a user for the geographic location identifier; in response to the first input, acquiring target end point information corresponding to the geographic location identifier; and generating and displaying navigation path information according to target start point information and the target end point information, wherein the target start point information is location information of the electronic device.

Description

Navigation method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a navigation method, a navigation device and electronic equipment.
Background
With the development of terminal technology, more and more functions can be realized on electronic equipment. For example, the electronic device may provide navigation functionality to the user during a daily trip.
In the related art, when using an electronic device for navigation, a user needs to open a navigation application and input a starting point and a destination to generate a corresponding navigation route, which is cumbersome to operate.
Disclosure of Invention
Embodiments of the application aim to provide a navigation method, a navigation device and an electronic device, which can solve the problem in the prior art that a user must first open a navigation application and manually input a starting point and a destination to generate a corresponding navigation route, making the operation cumbersome.
In a first aspect, an embodiment of the present application provides a navigation method, where the method includes:
acquiring a first image, wherein the first image comprises a geographic location identifier;
receiving a first input of a user for the geographic location identifier;
in response to the first input, acquiring target end point information corresponding to the geographic location identifier;
and generating and displaying navigation path information according to target start point information and the target end point information, wherein the target start point information is location information of the electronic device.
In a second aspect, an embodiment of the present application provides a navigation device, including:
the first acquisition module is used for acquiring a first image, wherein the first image comprises a geographic location identifier;
the first receiving module is used for receiving a first input of a user for the geographic location identifier;
the second acquisition module is used for acquiring, in response to the first input, target end point information corresponding to the geographic location identifier;
the generating module is used for generating navigation path information according to target start point information and the target end point information, wherein the target start point information is location information of the electronic device;
and the display module is used for displaying the navigation path information.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, a first image comprising a geographic location identifier is acquired, target end point information corresponding to the geographic location identifier is acquired in response to a first input of a user for the identifier, and navigation path information is generated and displayed according to target start point information and the target end point information. In this way, the target end point information is obtained by recognizing the geographic location identifier in the image, without exiting the current interface or searching for and opening a navigation application; the user's input cost is reduced, the operation is simple, and the navigation path information can be generated quickly.
Drawings
Fig. 1 is a flowchart of a navigation method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a display interface provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of another display interface provided by an embodiment of the application;
FIG. 4 is a schematic diagram of sending navigation information according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a navigation device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 7 is a hardware structure diagram of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar objects and not necessarily to describe a particular sequence or chronological order. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. The objects distinguished by "first", "second" and the like are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The navigation method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings by using a specific embodiment and an application scenario thereof.
Please refer to fig. 1, which is a flowchart illustrating a navigation method according to an embodiment of the present application. The method can be applied to electronic equipment, and the electronic equipment can be a mobile phone, a tablet computer, a notebook computer and the like. As shown in FIG. 1, the method may include steps 1100-1400, described in detail below.
Step 1100, a first image is obtained, wherein the first image comprises a geographical location identifier.
In this embodiment, the first image may be an image containing geographic location information and includes at least one geographic location identifier. For example, the first image may be a screenshot captured in a map application, or an image captured by the user via a camera of the electronic device. The first image may also be an image sent by a communication contact.
The geographic location identifier may be a graphical marker used to mark a geographic location, such as the graphical markers commonly seen in maps. For example, it may be a red drop-shaped marker, a blue drop-shaped marker, a pushpin-shaped marker or a flag-shaped marker, which may be determined according to actual use requirements and is not limited in this application.
Step 1200, receiving a first input of a user for the geographic location identifier.
In this embodiment, the first input may be a click input by the user on the geographic location identifier. The click input in the embodiments of the application may be a single-click input, a double-click input, a click input of any number of times, a long-press input or a short-press input, which may be determined according to actual use requirements and is not limited in the embodiments of the application.
In some embodiments, the first input may be a click input by the user on the target area in which the geographic location identifier is located. The target area is the region of the first image that contains the identifier, i.e. a smaller area of the first image comprising the geographic location identifier. The position of the target area in the first image represents the position of the identifier in the first image, so the identifier selected by the user can be determined from the position of the target area that was clicked.
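As an illustration of this hit test, the following Python sketch resolves which identifier a tap selects by comparing the tap coordinates with the bounding box of each detected target area. All names here (Marker, find_clicked_marker, the slop margin) are illustrative assumptions, not terminology from the patent:

```python
from dataclasses import dataclass

@dataclass
class Marker:
    """A detected geographic location identifier and its target area."""
    label: str
    x: int  # left edge of the target area, in image pixels
    y: int  # top edge
    w: int  # width
    h: int  # height

def find_clicked_marker(markers, tap_x, tap_y, slop=16):
    """Return the marker whose target area contains the tap point.

    The `slop` margin expands each target area slightly so that
    near-miss taps still hit, one way to raise the recognition
    success rate mentioned above.
    """
    for m in markers:
        if (m.x - slop <= tap_x <= m.x + m.w + slop
                and m.y - slop <= tap_y <= m.y + m.h + slop):
            return m
    return None

# Example: two pins detected in a screenshot; the user taps near the first.
markers = [Marker("pin_a", 120, 340, 32, 48), Marker("pin_b", 400, 90, 32, 48)]
print(find_clicked_marker(markers, 130, 350).label)  # -> pin_a
```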
In this embodiment, acquiring the target end point information corresponding to the geographic location identifier based on the user's first input on the identifier improves the success rate of recognizing the identifier and gives a better user experience.
Step 1300, in response to the first input, acquiring target end point information corresponding to the geographic location identifier.
The target end point information may be location information corresponding to the geographic location identifier selected by the user, for example a mall name, a school name, a street name, a building name or a station name.
In step 1300, the first image is detected to determine whether it includes a geographic location identifier; if it does, the target area in which the identifier is located is positioned; then, when a first input of the user on that target area is received, the click position is compared with the target area to determine which identifier the user clicked; finally, the target end point information corresponding to that identifier is acquired. It should be noted that the geographic location identifier in the first image may be detected with existing recognition technologies such as OpenCV, OpenGL or machine learning.
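The patent names OpenCV only generically, so the following is just one possible sketch of the detection step: the color thresholds, the function name, and the assumption that the identifier is a red drop-shaped pin are all illustrative, not taken from the patent.

```python
import cv2

def detect_red_pins(image_path, min_area=100):
    """Locate candidate red drop-shaped markers in a map screenshot.

    Returns a list of (x, y, w, h) bounding boxes, one per candidate
    target area. Threshold values are illustrative, not tuned values
    from the patent.
    """
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    mask = (cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))
            | cv2.inRange(hsv, (170, 120, 80), (180, 255, 255)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```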
In this embodiment, the first image may include only a geographic location identifier, or it may include both a geographic location identifier and corresponding text identification information, i.e. text marked near the identifier, such as a mall name, a school name, a street name or a building name. When the first image includes text identification information corresponding to the identifier, the target end point information may be acquired by taking that text identification information directly as the target end point information. When it does not, character recognition may be performed on the first image to determine the target end point information corresponding to the identifier. Both cases are illustrated below.
In some optional embodiments, acquiring, in response to the first input, the target end point information corresponding to the geographic location identifier includes: in response to the first input, performing character recognition on the first image to obtain a plurality of pieces of geographic location information; determining target geographic location information from the plurality of pieces of geographic location information, wherein the distance between the area in which the target geographic location information is located and the target area in which the geographic location identifier is located satisfies a preset condition; and taking the target geographic location information as the target end point information.
In this embodiment, a piece of geographic location information may be a geographic location word obtained by performing character recognition on the first image, such as a building name, a road name or a scenic spot name.
The target geographic location information is the piece of geographic location information, among those recognized, that satisfies the preset condition. The preset condition may be a distance condition; that is, the target geographic location information is selected according to the distance between the area in which each piece of geographic location information is located and the target area in which the geographic location identifier is located. Optionally, the piece of geographic location information closest to the identifier clicked by the user is taken as the target end point information.
For example, the first image is detected and the target area in which the geographic location identifier is located is determined; then, in response to a first input of the user on that target area, character recognition is performed on the first image to obtain a plurality of pieces of geographic location information, such as "century building", "inner loop" and "coastal park". Assuming "century building" is the piece of geographic location information closest to the identifier clicked by the user, "century building" is taken as the target end point information.
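A minimal sketch of this nearest-label selection follows, assuming an OCR engine is available; pytesseract is used here purely as an example, since the patent does not name an OCR library:

```python
import math

import cv2
import pytesseract
from pytesseract import Output

def nearest_label(image_path, pin_box):
    """OCR the image and return the recognized word whose bounding box
    lies closest to the clicked pin's target area, i.e. the preset
    distance condition of this embodiment."""
    img = cv2.imread(image_path)
    data = pytesseract.image_to_data(img, output_type=Output.DICT)
    px = pin_box[0] + pin_box[2] / 2  # pin center x
    py = pin_box[1] + pin_box[3] / 2  # pin center y
    best, best_dist = None, math.inf
    for i, word in enumerate(data["text"]):
        if not word.strip():
            continue  # skip empty OCR cells
        wx = data["left"][i] + data["width"][i] / 2
        wy = data["top"][i] + data["height"][i] / 2
        dist = math.hypot(wx - px, wy - py)
        if dist < best_dist:
            best, best_dist = word, dist
    return best  # e.g. "century building" as target end point information
```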
In this embodiment, in response to the first input of the user on the geographic location identifier, character recognition is performed on the first image to obtain a plurality of pieces of geographic location information, and the target geographic location information that satisfies the preset condition is used as the target end point information.
After step 1300, step 1400 is executed: navigation path information is generated and displayed according to target start point information and the target end point information, where the target start point information is the location information of the electronic device.
In this embodiment, the navigation path information may be path information from the target start point to the target end point. The target start point information may be the location information of the electronic device, and the target end point information may be the location information corresponding to the geographic location identifier clicked by the user.
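The patent does not specify how the path itself is computed; in practice a map service would typically be queried. Purely as an illustration of generating path information from a start point and an end point, here is a shortest-path search over a toy road graph (all data and names are hypothetical):

```python
import heapq

def shortest_path(graph, start, end):
    """Dijkstra's algorithm over a weighted road graph.

    `graph` maps a node to a list of (neighbor, distance_km) pairs.
    Returns (total_distance, [start, ..., end]), or (inf, []) when
    the end point is unreachable.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:  # reconstruct the navigation path
            path = [end]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return float("inf"), []

# Toy example: device location (target start point) to "century building".
roads = {
    "device": [("crossing", 0.4)],
    "crossing": [("century building", 1.1), ("coastal park", 2.3)],
}
print(shortest_path(roads, "device", "century building"))
# -> (1.5, ['device', 'crossing', 'century building'])
```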
In some embodiments of the present application, displaying the navigation path information may further include: displaying the navigation path information in the form of a floating control.
In this embodiment, the floating control may include a navigation interface. The navigation interface may include the navigation path information, and may further include the target start point information, the target end point information and travel mode information. The navigation interface may also include other information, which is not limited in the embodiments of the present application.
For example, please refer to fig. 2, which is a schematic diagram of a display interface according to an embodiment of the present application. The display interface of the electronic device includes a first image 201; in response to a first input of the user on a geographic location identifier in the first image 201, target end point information corresponding to the identifier is acquired, navigation path information is generated according to target start point information and the target end point information, and the navigation path information is displayed in the form of a floating control 202. The floating control 202 may include a navigation interface that includes the target start point information and the target end point information, as well as travel mode information such as "drive", "public transport" and "walk", and the navigation path information; for example, for the public transport travel mode, the navigation path information is line 9 - line 6 - line 4. It should be noted that the target start point information displayed in the navigation interface may be the location information of the electronic device, the target end point information may be the recognized location information corresponding to the geographic location identifier, and the user may also change the target start point information and the target end point information according to actual needs.
In this embodiment, after the navigation path information is generated, it is displayed in the form of a floating control, so the user does not need to jump to a map application and the response is faster. In addition, while the navigation path information is displayed, the user can perform other operations on the electronic device, making use more flexible.
In some embodiments, after the navigation path information is displayed in the form of a floating control, a fourth input of the user on the floating control is received, and the floating control is closed in response to the fourth input. In this way, the user can close the floating control according to actual needs, and the operation is simple.
In this embodiment, the fourth input may be a click input by the user on a target icon of the floating control, a voice instruction input by the user, or a specific gesture input by the user, which may be determined according to actual use requirements and is not limited in the embodiments of the application.
The specific gesture in the embodiments of the application may be any one of a single-click gesture, a slide gesture, a drag gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture and a double-click gesture; the click input may be a single-click input, a double-click input, a click input of any number of times, a long-press input or a short-press input.
In some embodiments of the application, after the navigation path information is displayed in the form of a floating control, the method may further include: receiving a second input of the user on the navigation path information; and in response to the second input, displaying the floating control in a reduced size and performing route navigation according to the navigation path information.
In this embodiment, the second input may be an input confirming the start of navigation. Illustratively, the second input may be a click input by the user on a target icon in the floating control, a voice instruction input by the user, or a specific gesture input by the user, which may be determined according to actual use requirements and is not limited in the embodiments of the present application.
The specific gesture may be any one of a single-click gesture, a slide gesture, a drag gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture and a double-click gesture; the click input may be a single-click input, a double-click input, a click input of any number of times, a long-press input or a short-press input.
Referring to fig. 3, in a specific implementation, the navigation path information is displayed in the form of a floating control. The floating control may be a navigation interface that includes travel mode information and navigation path information; the user may select one travel mode and select one piece of navigation path information from the multiple pieces corresponding to that mode, i.e. perform route navigation using that navigation path information. At this time, the floating control is displayed in a reduced size, i.e. as a navigation thumbnail page 301. The navigation thumbnail page 301 displays the target start point, the target end point and the current position of the user, and updates the current position in real time; when the current position of the user changes, the position shown on the navigation thumbnail page changes accordingly.
In this embodiment, during route navigation according to the navigation path information selected by the user, the floating control is displayed in a reduced size, which prevents the floating control from blocking the screen while the navigation path information is displayed, giving a better user experience.
In some embodiments of the application, after the navigation path information is displayed in the form of a floating control, the method may further include: receiving a third input of the user on the floating control and a first application; and in response to the third input, sending navigation information to a target contact in the first application.
In this embodiment, the first application may be, for example, an instant messaging application, and the target contact may be, for example, a communication friend added in the first application.
The navigation information includes at least one of: the navigation path information, the location information of the electronic device, and predicted arrival time information. The predicted arrival time can be determined according to the location of the electronic device, the target end point information and the travel mode.
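As a sketch of one way such a predicted arrival time could be derived (the average speeds and the distance/speed model below are assumptions for illustration, not values from the patent):

```python
from datetime import datetime, timedelta

# Illustrative average speeds per travel mode, in km/h (assumed values).
AVG_SPEED_KMH = {"walk": 5.0, "public transport": 25.0, "drive": 40.0}

def predicted_arrival(distance_km, travel_mode, now=None):
    """Estimate the arrival time from the remaining route distance
    and the selected travel mode. `distance_km` would come from the
    route between the device's location and the target end point."""
    now = now or datetime.now()
    return now + timedelta(hours=distance_km / AVG_SPEED_KMH[travel_mode])

# Example: 6 km remaining by public transport.
print(predicted_arrival(6.0, "public transport"))
```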
The third input may be an input for sending the navigation information; illustratively, it may be an input dragging the floating control to the first application.
For example, referring to fig. 4, during navigation, when the user needs to send navigation information to a friend, the user opens the friend's session interface in the first application and drags the floating control onto the input box 401 of that interface; the navigation information is then sent to that friend in the first application.
In this embodiment, during navigation, the navigation information is sent to the target contact in the first application in response to the third input of the user on the floating control and the first application; the user does not need to type the navigation information manually, and the operation is convenient and fast.
In some embodiments of the present application, after the navigation path information is generated and displayed according to the target start point information and the target end point information, the method may further include: acquiring the location information of the electronic device during route navigation according to the navigation path information in a first travel mode; and outputting prompt information when the location information of the electronic device matches preset first location information.
In this embodiment, the first travel mode may be a public transportation travel mode.
Illustratively, the preset first location information may be transfer station information, such as a subway transfer station or a bus transfer station. It may also be a popular site, such as a scenic spot, or the station immediately before the target end point; that is, when the current position is the second-to-last station before the target end point, the prompt information is output. It should be noted that the first location information may be set according to the actual needs of the user, which is not limited in the embodiments of the application.
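A minimal sketch of this check, assuming the route is available as an ordered list of stations and the transfer stations are known (all station names are hypothetical):

```python
def should_prompt(current_station, route, transfer_stations):
    """Return a prompt message when the current position matches
    preset first location information: a transfer station, or the
    station immediately before the target end point."""
    if current_station in transfer_stations:
        return f"Transfer at {current_station}."
    if current_station in route and route.index(current_station) == len(route) - 2:
        return f"Next stop is your destination: {route[-1]}."
    return None

route = ["station A", "station B", "station C", "station D"]
print(should_prompt("station B", route, {"station B"}))  # transfer prompt
print(should_prompt("station C", route, set()))          # arrival prompt
```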
In this embodiment, the prompt information may be output in a preset manner: for example, controlling the floating control to flash, updating the display color of the floating control, controlling the electronic device to display a pop-up text prompt, controlling the electronic device to vibrate, or controlling the electronic device to play a voice prompt, which may be determined according to actual use requirements and is not limited in the embodiments of the present application.
In this embodiment, during route navigation by public transportation, the prompt information is output when the user's location matches the preset first location information. The user does not need to check the navigation information in real time and is automatically prompted on arriving at a transfer station or when about to arrive at the target end point, which prevents the user from missing the target end point and gives a better user experience.
In some embodiments of the application, after the navigation path information is displayed in the form of a floating control, the method may further include: closing the floating control when route navigation ends. In this way, the user does not need to close the floating control manually, which saves operation steps and makes use more convenient.
In the embodiments of the application, a first image including a geographic location identifier is acquired, target end point information corresponding to the identifier is acquired in response to a first input for the identifier, and navigation path information is generated and displayed according to target start point information and the target end point information. The target end point information is thus obtained by recognizing the geographic location identifier in the image, without exiting the current interface or searching for and opening a navigation application; the user's input cost is reduced, the operation is simple, and the navigation path information can be generated quickly.
In the navigation method provided by the embodiments of the application, the execution subject may be a navigation device. In the embodiments of the present application, the navigation device is described by taking a navigation device executing the navigation method as an example.
Corresponding to the above embodiments, referring to fig. 5, an embodiment of the present application further provides a navigation device. The navigation device 500 includes a first obtaining module 501, a first receiving module 502, a second obtaining module 503, a generating module 504 and a display module 505.
The first obtaining module 501 is configured to obtain a first image, where the first image includes a geographic location identifier;
the first receiving module 502 is configured to receive a first input of the user for the geographic location identifier;
the second obtaining module 503 is configured to, in response to the first input, obtain target end point information corresponding to the geographic location identifier;
the generating module 504 is configured to generate navigation path information according to target start point information and the target end point information, where the target start point information is position information of an electronic device;
the display module 505 is configured to display the navigation path information.
Optionally, the second obtaining module 503 includes: a character recognition unit, configured to perform character recognition on the first image in response to the first input and acquire a plurality of pieces of geographic location information; a first determining unit, configured to determine target geographic location information from the plurality of pieces of geographic location information, wherein the distance between the area in which the target geographic location information is located and the target area in which the geographic location identifier is located satisfies a preset condition; and a second determining unit, configured to take the target geographic location information as the target end point information.
Optionally, the display module is specifically configured to display the navigation path information in the form of a floating control. The device further comprises: a second receiving module, configured to receive a second input of the user on the navigation path information; and the display module is further configured to display the floating control in a reduced size in response to the second input and perform route navigation according to the navigation path information.
Optionally, the display module is specifically configured to display the navigation path information in a form of a floating control; the device further comprises: the third receiving module is used for receiving a third input of the user to the floating control and the first application; a sending module, configured to send navigation information to a target contact in the first application in response to the third input; wherein the navigation information comprises at least one of: the navigation path information, the position information of the electronic equipment and the predicted arrival time information.
Optionally, the apparatus further comprises: the third obtaining module is used for obtaining the position information of the electronic equipment in the process of carrying out route navigation according to the navigation path information in the first travel mode; and the prompt module is used for outputting prompt information under the condition that the position information of the electronic equipment is the preset first position information.
In the embodiments of the application, a first image comprising a geographic location identifier is acquired, target end point information corresponding to the geographic location identifier is acquired in response to a first input of a user for the identifier, and navigation path information is generated and displayed according to target start point information and the target end point information. In this way, the target end point information is obtained by recognizing the geographic location identifier in the image, without exiting the current interface or searching for and opening a navigation application; the user's input cost is reduced, the operation is simple, and the navigation path information can be generated quickly.
The navigation device in the embodiments of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA); it may also be a server, a Network Attached Storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, and the like, which is not specifically limited in the embodiments of the present application.
The navigation device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The navigation apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 6, an embodiment of the present application further provides an electronic device 600, which includes a processor 601 and a memory 602. The memory 602 stores a program or instructions executable on the processor 601; when executed by the processor 601, the program or instructions implement each step of the above navigation method embodiments and achieve the same technical effect, which is not repeated here to avoid repetition.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components, and the power supply may be logically coupled to the processor 710 via a power management system, such that the functions of managing charging, discharging, and power consumption may be performed via the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The user input unit 707 is configured to receive a first input of the user for the geographic location identifier. The processor 710 is configured to: acquire a first image, wherein the first image comprises a geographic location identifier; in response to the first input, acquire target end point information corresponding to the geographic location identifier; and generate navigation path information according to target start point information and the target end point information, wherein the target start point information is location information of the electronic device. The display unit 706 is configured to display the navigation path information.
Optionally, when obtaining, in response to the first input, target end point information corresponding to the geographic location identifier, the processor 710 is configured to: in response to the first input, perform character recognition on the first image to obtain a plurality of pieces of geographic location information; determine target geographic location information from the plurality of pieces of geographic location information, wherein the distance between the area in which the target geographic location information is located and the target area in which the geographic location identifier is located satisfies a preset condition; and take the target geographic location information as the target end point information.
Optionally, the display unit 706 is configured to display the navigation path information in the form of a floating control. After the navigation path information is displayed in the form of a floating control, the user input unit 707 is further configured to receive a second input of the user on the navigation path information; the processor 710 is further configured to, in response to the second input, control the display unit 706 to display the floating control in a reduced size and perform route navigation according to the navigation path information.
Optionally, the display unit 706 is configured to display the navigation path information in a form of a floating control when displaying the navigation path information; after the navigation path information is displayed in the form of a floating control, the user input unit 707 is further configured to receive a third input of the user to the floating control and the first application; a processor 710 further configured to send navigation information to a target contact in the first application in response to the third input; wherein the navigation information comprises at least one of: the navigation path information, the position information of the electronic equipment and the predicted arrival time information.
Optionally, the processor 710, after generating and displaying the navigation path information according to the target start point information and the target end point information, is further configured to: acquiring the position information of the electronic equipment in the process of carrying out route navigation according to the navigation path information in a first travel mode; and outputting prompt information under the condition that the position information of the electronic equipment is preset first position information.
In the embodiments of the application, a first image comprising a geographic location identifier is acquired, target end point information corresponding to the geographic location identifier is acquired in response to a first input of a user for the identifier, and navigation path information is generated and displayed according to target start point information and the target end point information. In this way, the target end point information is obtained by recognizing the geographic location identifier in the image, without exiting the current interface or searching for and opening a navigation application; the user's input cost is reduced, the operation is simple, and the navigation path information can be generated quickly.
It should be understood that in the embodiment of the present application, the input Unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the Graphics Processing Unit 7041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes at least one of a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts of a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and the application programs or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Further, the memory 709 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM) or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM) or a Direct Rambus RAM (DRRAM). The memory 709 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 710 may include one or more processing units. Optionally, the processor 710 integrates an application processor, which mainly handles operations related to the operating system, the user interface, application programs and the like, and a modem processor, which mainly handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor may also not be integrated into the processor 710.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the navigation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the navigation method embodiment, and can achieve the same technical effect, and is not described herein again to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system or a system-on-chip.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing navigation method embodiments, and achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises that element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; they may also perform the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A method of navigation, the method comprising:
acquiring a first image, wherein the first image comprises a geographic location identifier;
receiving a first input of a user for the geographic location identifier;
in response to the first input, acquiring target end point information corresponding to the geographic location identifier;
and generating and displaying navigation path information according to target start point information and the target end point information, wherein the target start point information is location information of the electronic device.
2. The method of claim 1, wherein the acquiring, in response to the first input, target end point information corresponding to the geographic location identifier comprises:
in response to the first input, performing character recognition on the first image to obtain a plurality of pieces of geographic location information;
determining target geographic location information from the plurality of pieces of geographic location information, wherein a distance between an area in which the target geographic location information is located and the target area in which the geographic location identifier is located satisfies a preset condition;
and taking the target geographic location information as the target end point information.
3. The method of claim 1, wherein displaying the navigation path information comprises: displaying the navigation path information in the form of a floating control;
after the displaying the navigation path information in the form of the floating control, the method further comprises:
receiving a second input of the user on the navigation path information;
and in response to the second input, displaying the floating control in a reduced size and performing route navigation according to the navigation path information.
4. The method of claim 1, wherein displaying the navigation path information comprises: displaying the navigation path information in the form of a floating control;
after the displaying the navigation path information in the form of the floating control, the method further comprises:
receiving a third input of the user on the floating control and a first application;
sending navigation information to a target contact in the first application in response to the third input;
wherein the navigation information comprises at least one of: the navigation path information, location information of the electronic device, and predicted arrival time information.
5. The method of claim 1, wherein after generating and displaying the navigation path information according to the target start point information and the target end point information, the method further comprises:
acquiring location information of the electronic device during route navigation according to the navigation path information in a first travel mode;
and outputting prompt information in a case that the location information of the electronic device is preset first location information.
6. A navigation device, characterized in that the device comprises:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a first image, and the first image comprises a geographical position identifier;
the first receiving module is used for receiving a first input of a user aiming at the geographic position identification;
the second acquisition module is used for responding to the first input and acquiring target destination information corresponding to the geographic position identification;
the generating module is used for generating navigation path information according to target starting point information and the target end point information, wherein the target starting point information is position information of the electronic equipment;
and the display module is used for displaying the navigation path information.
7. The apparatus of claim 6, wherein the second acquisition module comprises:
a character recognition unit, configured to perform character recognition on the first image in response to the first input and acquire a plurality of pieces of geographic location information;
a first determining unit, configured to determine target geographic location information from the plurality of pieces of geographic location information, wherein a distance between an area in which the target geographic location information is located and the target area in which the geographic location identifier is located satisfies a preset condition;
and a second determining unit, configured to take the target geographic location information as the target end point information.
8. The apparatus according to claim 6, wherein the display module is specifically configured to display the navigation path information in the form of a floating control;
the device further comprises:
a second receiving module, configured to receive a second input of the user on the navigation path information;
and the display module is further configured to display the floating control in a reduced size in response to the second input and perform route navigation according to the navigation path information.
9. The apparatus according to claim 6, wherein the display module is specifically configured to display the navigation path information in the form of a floating control;
the device further comprises:
a third receiving module, configured to receive a third input of the user on the floating control and a first application;
a sending module, configured to send navigation information to a target contact in the first application in response to the third input;
wherein the navigation information comprises at least one of: the navigation path information, location information of the electronic device, and predicted arrival time information.
10. The apparatus of claim 6, further comprising:
the third acquisition module is used for acquiring the position information of the electronic equipment in the process of performing route navigation according to the navigation path information in the first travel mode;
and the prompt module is used for outputting prompt information under the condition that the position information of the electronic equipment is the preset first position information.
11. An electronic device comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions when executed by the processor implementing the steps of the navigation method of any one of claims 1 to 5.
CN202111373030.3A 2021-11-18 2021-11-18 Navigation method and device and electronic equipment Pending CN114090818A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111373030.3A CN114090818A (en) 2021-11-18 2021-11-18 Navigation method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111373030.3A CN114090818A (en) 2021-11-18 2021-11-18 Navigation method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114090818A true CN114090818A (en) 2022-02-25

Family

ID=80302018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111373030.3A Pending CN114090818A (en) 2021-11-18 2021-11-18 Navigation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114090818A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105277205A (en) * 2015-10-21 2016-01-27 百度在线网络技术(北京)有限公司 Method and device for providing user with public traffic navigation service
CN105764033A (en) * 2016-02-19 2016-07-13 腾讯科技(深圳)有限公司 Information processing method, first terminal, and second terminal
CN106980441A (en) * 2017-03-29 2017-07-25 杭州弗凡科技有限公司 The suspension windows exchange method and vehicle mounted guidance terminal of vehicle mounted guidance terminal
CN109341714A (en) * 2018-10-26 2019-02-15 广东小天才科技有限公司 Navigation method and device based on picture recognition, navigation equipment and storage medium
CN113536100A (en) * 2020-04-17 2021-10-22 腾讯科技(深圳)有限公司 Information processing method and device and computer readable storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination