WO2017128895A1 - Navigation assistance method and terminal based on scene sharing - Google Patents

Navigation assistance method and terminal based on scene sharing (一种基于场景共享的导航协助方法及终端)

Info

Publication number
WO2017128895A1
WO2017128895A1 (international application PCT/CN2016/111558)
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
information
scene
interface
target point
Prior art date
Application number
PCT/CN2016/111558
Other languages
English (en)
French (fr)
Inventor
沈慧海
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to KR1020187008449A (KR102046841B1)
Priority to BR112018008091-8A (BR112018008091A2)
Publication of WO2017128895A1
Priority to US15/923,415 (US10959049B2)

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/01 Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/03 Cooperating elements; Interaction or communication between different cooperating elements or between cooperating elements and receivers
    • G01S19/10 Cooperating elements; Interaction or communication between different cooperating elements or between cooperating elements and receivers providing dedicated supplementary positioning signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/024 Guidance services
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • H04N7/185 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/025 Services making use of location information using location based information parameters
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position

Definitions

  • The embodiments of the present invention relate to the field of communications, and in particular, to a navigation assistance method and a terminal based on scene sharing.
  • The main purpose of the Global Positioning System (GPS) is to provide real-time, all-weather, global navigation services for the three major fields of land, sea and air, and to serve military purposes such as intelligence gathering, nuclear explosion monitoring and emergency communications. After more than 20 years of research and experimentation, at a cost of 30 billion US dollars, a constellation of 24 GPS satellites with 98% global coverage was completed by 1994.
  • GPS can provide users with functions such as vehicle positioning, anti-theft, anti-robbery, driving route monitoring and call command through the terminal positioning system.
  • the terminal positioning system is a technology or service for identifying the location of the located object on the electronic map by acquiring the location information (latitude and longitude coordinates) of the mobile phone or the terminal pathfinder through a specific positioning technology.
  • When a help seeker wants to reach a target point, he usually describes his current situation verbally to a helper and hopes that the helper will guide him. However, the help seeker often cannot describe the current situation to the helper effectively and accurately in words, so the helper cannot give prompt information that helps the help seeker reach the target point, or gives wrong prompt information.
  • Embodiments of the present invention provide a navigation assistance method and terminal based on scene sharing, which enable a help seeker to describe the scene in which he is located more accurately and in a simple manner, so that the helper can provide more accurate prompt information for the help seeker to reach the target point.
  • In a first aspect, an embodiment of the present invention provides a navigation assistance method based on scene sharing, including: a first terminal shares a scene image interface of the scene in which the first terminal is currently located with a second terminal; the first terminal receives prompt information sent by the second terminal; and the first terminal displays the prompt information on the scene image interface. The prompt information is used to prompt the location of the target point to be found by the first terminal or a specific path to the target point, and the prompt information is determined by the second terminal according to the shared scene image interface and the target point.
  • In this way, the user of the first terminal can describe the scene in which he is located more accurately through the scene image interface, and the second terminal can determine the prompt information more accurately according to the shared scene image interface and the received help information. Further, because the first terminal displays the prompt information on the scene image interface, the user of the first terminal can understand the meaning of the prompt information more simply and accurately, and can therefore find the target point more quickly and conveniently through the prompt information.
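  • The exchange described above can be pictured as a small set of messages passed between the two terminals. The following Kotlin sketch only illustrates the order of operations; all type and function names (SceneImageInterface, HelpInfo, PromptInfo and so on) are hypothetical and are not defined by this publication.

```kotlin
// Hypothetical message types for the scene-sharing exchange (illustrative only).
data class SceneImageInterface(val cameraFrame: ByteArray?, val gpsLocation: Pair<Double, Double>?)
data class HelpInfo(val targetPoint: String)                  // e.g. "mall A"
data class PromptInfo(val kind: String, val payload: String)  // marker / text / audio

// First-terminal side of the flow: share the scene, optionally ask for help, display the answer.
fun firstTerminalFlow(
    scene: SceneImageInterface,
    targetPoint: String,
    send: (Any) -> Unit,
    receivePrompt: () -> PromptInfo,
    display: (PromptInfo) -> Unit
) {
    send(scene)                    // share the scene image interface with the second terminal
    send(HelpInfo(targetPoint))    // optional help information naming the target point
    val prompt = receivePrompt()   // prompt information determined by the second terminal
    display(prompt)                // shown on top of the shared scene image interface
}
```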
  • Optionally, the method further includes: the first terminal sends help information to the second terminal, where the help information includes information about the target point to be found by the first terminal. Specifically, the first terminal may send the help information to the second terminal after sharing the scene image interface of the scene in which it is currently located, or may send the help information to the second terminal before sharing the scene image interface of the scene in which it is currently located.
  • Alternatively, the first terminal shares the scene image interface of the scene in which it is currently located with the second terminal, and the second terminal issues a task to the first terminal according to its own needs; that is, based on the shared scene image interface of the scene in which the first terminal is currently located, the second terminal generates prompt information and sends the prompt information to the first terminal.
  • Optionally, the scene image interface includes: an image interface or a video interface obtained by a camera device connected to the first terminal capturing the scene in which the first terminal is currently located; and/or a GPS map interface containing the location of the scene in which the first terminal is currently located.
  • Optionally, the prompt information includes at least one of the following: marker information for prompting the location of the target point or a specific path to the target point; text information for prompting the location of the target point or a specific path to the target point; and audio information for prompting the location of the target point or a specific path to the target point.
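  • Because the prompt information can be marker information, text information, or audio information, possibly combined, one natural way to model it is as a small sealed hierarchy. This is an illustrative sketch only; the names are not taken from this publication.

```kotlin
// Illustrative model of the three prompt-information types named above.
sealed class Prompt {
    // A drawn marker, e.g. a circle around the target point or an arrowed line along the path.
    data class Marker(val points: List<Pair<Float, Float>>, val closed: Boolean) : Prompt()
    // A text hint, e.g. "go left".
    data class Text(val content: String) : Prompt()
    // A recorded voice hint, e.g. "the second device on the scene image interface is the target point".
    data class Audio(val samples: ByteArray) : Prompt()
}

// The prompt information may combine several of these, e.g. a marker plus a text label.
typealias PromptInformation = List<Prompt>
```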
  • Optionally, the method further includes: the first terminal receives location information, sent by the second terminal, of the marker information in the scene image interface. Correspondingly, when displaying the prompt information on the scene image interface, the first terminal displays the marker information at the corresponding position of the scene image interface displayed by the first terminal according to the received location information. In this way, the second terminal can add the prompt information on the scene image interface more conveniently, and the user of the first terminal can understand the meaning of the prompt information added by the second terminal more easily.
  • Optionally, the method further includes: the first terminal acquires first movement data describing the movement of the camera device connected to the first terminal; the first terminal converts the first movement data into second movement data by which the marker information is to be moved; and the first terminal moves the marker information displayed on the scene image interface according to the converted second movement data, so that the moved marker information matches the scene image interface captured by the moved camera device. In this way, the accuracy of the prompt information can be guaranteed, preventing the prompt information from becoming inaccurate as the first terminal moves.
  • Optionally, the acquiring, by the first terminal, of the first movement data of the camera device connected to the first terminal includes: the first terminal acquires, by using an acceleration sensor and a gyro sensor, the first movement data of the camera device connected to the first terminal. In this way, the prompt information can follow the movement of the first terminal in a simple manner.
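  • One way to turn the measured camera rotation (the first movement data) into an on-screen shift of the marker (the second movement data) is a pinhole-camera approximation: panning the device by an angle moves every image feature horizontally by roughly f·tan(angle) pixels, where f is the focal length in pixels. The sketch below only illustrates such a conversion under that assumption; the focal-length value, sign conventions and function names are not taken from this publication.

```kotlin
import kotlin.math.tan

// First movement data: rotation of the camera device, e.g. integrated from gyroscope readings.
data class CameraRotation(val yawRad: Double, val pitchRad: Double)

// Second movement data: how far the marker information should be shifted on screen, in pixels.
data class MarkerShift(val dx: Double, val dy: Double)

// Approximate conversion under a pinhole-camera model; focalLengthPx is an assumed calibration value.
fun toMarkerShift(rotation: CameraRotation, focalLengthPx: Double): MarkerShift =
    MarkerShift(
        dx = -focalLengthPx * tan(rotation.yawRad),   // panning right moves the scene (and marker) left
        dy = focalLengthPx * tan(rotation.pitchRad)   // tilting up moves the scene down
    )

fun main() {
    // Example: the device panned 2 degrees to the right with an assumed 1500 px focal length.
    val shift = toMarkerShift(CameraRotation(Math.toRadians(2.0), 0.0), focalLengthPx = 1500.0)
    println("move marker by (${shift.dx}, ${shift.dy}) px")
}
```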
  • Optionally, the marker information is any one or a combination of the following: curve marker information, line marker information, line marker information with an arrow, and closed-figure marker information.
  • Optionally, when the scene image interface includes both an image interface or a video interface obtained by the camera device connected to the first terminal capturing the scene in which the first terminal is currently located and a GPS map interface containing the location of the scene in which the first terminal is currently located, the display interface on which the first terminal displays the scene image interface includes a first area and a second area. The first area is used to display the image interface or video interface captured by the camera device connected to the first terminal, and the second area is used to display the GPS map interface containing the location of the scene in which the first terminal is currently located or to display no content; or the first area is used to display the GPS map interface containing the location of the scene in which the first terminal is currently located, and the second area is used to display the image interface or video interface captured by the camera device connected to the first terminal or to display no content.
  • Optionally, the method further includes: when the displayed scene image interface is touched, the first terminal switches the content displayed in the first area and the second area.
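  • The two-area layout and the touch-triggered switch can be kept in a small piece of state, as in the minimal sketch below; the names are hypothetical and no particular UI framework is implied.

```kotlin
// Content that can occupy either display area.
enum class SceneContent { CAMERA_VIEW, GPS_MAP, NONE }

// Display state for the two areas of the scene image interface.
data class SplitDisplay(val firstArea: SceneContent, val secondArea: SceneContent) {
    // Called when the displayed scene image interface is touched: swap the two areas.
    fun onTouch(): SplitDisplay = SplitDisplay(firstArea = secondArea, secondArea = firstArea)
}

fun main() {
    var display = SplitDisplay(SceneContent.CAMERA_VIEW, SceneContent.GPS_MAP)
    display = display.onTouch()   // after the touch the GPS map is shown in the first area
    println(display)              // SplitDisplay(firstArea=GPS_MAP, secondArea=CAMERA_VIEW)
}
```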
  • Optionally, the method further includes: the first terminal sends a help request to the second terminal; and the first terminal receives a help-acceptance response returned by the second terminal, where the help-acceptance response is used to establish an interface-sharing connection between the first terminal and the second terminal.
  • Optionally, the method further includes: the first terminal receives updated prompt information sent by the second terminal, and the first terminal updates the prompt information displayed on the scene image interface by using the updated prompt information. In this way, the first terminal can continue to receive prompt information. For example, at the intersection where the first terminal is currently located, the second terminal sends prompt information instructing the first terminal to turn left; after the first terminal turns left and encounters another intersection, the second terminal sends prompt information to the first terminal again, for example instructing it to go right. That is, in the embodiment of the present invention, the second terminal may send prompt information to the first terminal multiple times, and the prompt information may also be updated in real time, thereby providing the first terminal with more accurate prompt information.
  • In a second aspect, an embodiment of the present invention provides a navigation assistance method based on scene sharing, including: a second terminal receives a scene image interface, shared by a first terminal, of the scene in which the first terminal is currently located; the second terminal determines, according to the shared scene image interface, prompt information for prompting the location of the target point to be found by the first terminal or a specific path to the target point; and the second terminal sends the prompt information to the first terminal, so that the first terminal displays the prompt information on the scene image interface.
  • In this way, the user of the first terminal can describe the scene in which he is located more accurately through the scene image interface, and the second terminal can determine the prompt information more accurately according to the shared scene image interface and the received help information. Further, because the first terminal displays the prompt information on the scene image interface, the user of the first terminal can understand the meaning of the prompt information more simply and accurately, and can therefore find the target point more quickly and conveniently through the prompt information.
  • Optionally, the method further includes: the second terminal receives help information sent by the first terminal, where the help information includes information about the target point to be found by the first terminal. In this case, the determining, by the second terminal according to the shared scene image interface, of the prompt information for prompting the location of the target point to be found by the first terminal or a specific path to the target point includes: the second terminal determines, according to the shared scene image interface and the information about the target point in the help information, the prompt information for prompting the location of the target point to be found by the first terminal or a specific path to the target point.
  • Specifically, the first terminal may send the help information to the second terminal after sharing the scene image interface of the scene in which it is currently located, or may send the help information to the second terminal before sharing the scene image interface. Alternatively, the first terminal shares the scene image interface of the scene in which it is currently located with the second terminal, and the second terminal issues a task to the first terminal according to its own needs; that is, based on the shared scene image interface of the scene in which the first terminal is currently located, the second terminal generates prompt information and sends the prompt information to the first terminal.
  • Optionally, the scene image interface includes: an image interface or a video interface obtained by the camera device connected to the first terminal capturing the scene in which the first terminal is currently located; and/or a GPS map interface containing the location of the scene in which the first terminal is currently located.
  • Optionally, the prompt information includes at least one of the following: marker information for prompting the location of the target point or a specific path to the target point; text information for prompting the location of the target point or a specific path to the target point; and audio information for prompting the location of the target point or a specific path to the target point.
  • Optionally, the method further includes: the second terminal sends, to the first terminal, location information of the marker information in the scene image interface, where the location information is used to cause the first terminal to display the marker information at the corresponding position of the scene image interface displayed by the first terminal according to the received location information.
  • Optionally, when the scene image interface is a video interface obtained by the camera device connected to the first terminal capturing the scene in which the first terminal is currently located, the determining, by the second terminal according to the shared scene image interface, of the prompt information for prompting the location of the target point to be found by the first terminal or a specific path to the target point includes: when the second terminal receives a first operation instruction, the video interface displayed by the second terminal is locked into a still picture; the second terminal displays, on the still picture, the received touch track for prompting the location of the target point to be found by the first terminal or the specific path to the target point; and the second terminal sets the touch track as the marker information, and/or generates text information of the specific path according to the touch track, and restores the locked still picture to the video interface shared by the first terminal. Optionally, the first operation instruction is a double-click or a click on the video interface displayed by the second terminal.
  • Optionally, the method further includes: the second terminal acquires the first movement data describing the movement of the camera device connected to the first terminal; the second terminal converts the first movement data into second movement data by which the marker information is to be moved; and the second terminal moves the marker information displayed on the scene image interface according to the converted second movement data, so that the moved marker information matches the scene image interface captured by the moved camera device. In this way, the accuracy of the prompt information can be guaranteed, preventing the prompt information from becoming inaccurate as the first terminal moves.
  • Optionally, after the second terminal determines the prompt information for prompting the location of the target point to be found by the first terminal or a specific path to the target point, the method further includes: displaying the marker information on a layer parallel to the display plane of the scene image interface.
  • Optionally, the first movement data is data acquired by the first terminal through the acceleration sensor and the gyro sensor of the first terminal. In this way, the prompt information can follow the movement of the first terminal in a simple manner.
  • Optionally, the marker information is any one or a combination of the following: curve marker information, line marker information, line marker information with an arrow, and closed-figure marker information.
  • Optionally, when the scene image interface includes both an image interface or a video interface obtained by the camera device connected to the first terminal capturing the scene in which the first terminal is currently located and a GPS map interface containing the location of the scene in which the first terminal is currently located, the display interface on which the second terminal displays the scene image interface includes a first area and a second area. The first area is used to display the image interface or video interface captured by the camera device connected to the first terminal, and the second area is used to display the GPS map interface containing the location of the scene in which the first terminal is currently located or to display no content; or the first area is used to display the GPS map interface containing the location of the scene in which the first terminal is currently located, and the second area is used to display the image interface or video interface captured by the camera device connected to the first terminal or to display no content.
  • Optionally, the method further includes: when the displayed scene image interface is touched, the second terminal switches the content displayed in the first area and the second area.
  • Optionally, before the second terminal receives the scene image interface of the scene in which the first terminal is currently located, the method further includes: the second terminal receives a help request sent by the first terminal; and the second terminal sends a help-acceptance response to the first terminal, where the help-acceptance response is used to establish an interface-sharing connection between the first terminal and the second terminal.
  • Optionally, the method further includes: the second terminal modifies the prompt information to obtain updated prompt information; and the second terminal sends the updated prompt information to the first terminal, so that the first terminal updates the prompt information displayed on the scene image interface by using the updated prompt information. In this way, the first terminal can continue to receive prompt information. For example, at the intersection where the first terminal is currently located, the second terminal sends prompt information instructing the first terminal to turn left; after the first terminal turns left and encounters another intersection, the second terminal sends prompt information to the first terminal again, for example instructing it to go right. That is, in the embodiment of the present invention, the second terminal may send prompt information to the first terminal multiple times and may also update the prompt information in real time, thereby providing the first terminal with more accurate prompt information.
  • In another aspect, an embodiment of the present invention provides a scene-sharing-based navigation assistance terminal, which is configured to implement any one of the methods of the foregoing first aspect and includes corresponding functional modules for implementing the steps in the foregoing method.
  • In another aspect, an embodiment of the present invention provides a scene-sharing-based navigation assistance terminal, which is configured to implement any one of the methods of the foregoing second aspect and includes corresponding functional modules for implementing the steps in the foregoing method.
  • In another aspect, an embodiment of the present invention provides a scene-sharing-based navigation assistance terminal, where the terminal includes a transmitter, a receiver, a memory, and a processor. The memory is configured to store instructions; the processor is configured to execute the instructions stored in the memory and to control the transmitter and the receiver to receive and send signals; and when the processor executes the instructions stored in the memory, the terminal is configured to perform any one of the methods of the foregoing first aspect.
  • In another aspect, an embodiment of the present invention provides a scene-sharing-based navigation assistance terminal, where the terminal includes a transmitter, a receiver, a memory, and a processor. The memory is configured to store instructions; the processor is configured to execute the instructions stored in the memory and to control the transmitter and the receiver to receive and send signals; and when the processor executes the instructions stored in the memory, the terminal is configured to perform any one of the methods of the foregoing second aspect.
  • In the embodiments of the present invention, the first terminal shares the scene image interface of the scene in which it is currently located with the second terminal; the first terminal receives the prompt information sent by the second terminal; and the first terminal displays the prompt information on the scene image interface. The prompt information is used to prompt the location of the target point to be found by the first terminal or a specific path to the target point, and the prompt information is determined by the second terminal according to the shared scene image interface and the target point. In this way, the user of the first terminal can describe the scene in which he is located more accurately through the scene image interface, and the second terminal can determine the prompt information more accurately according to the shared scene image interface and the received help information. Further, because the prompt information is displayed on the scene image interface of the first terminal, the user of the first terminal can understand the meaning of the prompt information more simply and accurately, and can therefore find the target point more quickly and conveniently through the prompt information.
  • FIG. 1 is a schematic structural diagram of a system applicable to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of a navigation assistance method based on scene sharing according to an embodiment of the present invention
  • FIG. 2a is a schematic diagram of a scene image interface according to an embodiment of the present invention;
  • FIG. 2b is a schematic diagram of another scene image interface according to an embodiment of the present invention;
  • FIG. 2c is a schematic diagram of another scene image interface according to an embodiment of the present invention;
  • FIG. 2d is a schematic diagram of another scene image interface according to an embodiment of the present invention;
  • FIG. 2e is a schematic diagram of another scene image interface according to an embodiment of the present invention;
  • FIG. 2f is a schematic diagram of another scene image interface according to an embodiment of the present invention;
  • FIG. 2g is a schematic diagram of marker information displayed on a layer parallel to the display plane of a scene image interface according to an embodiment of the present invention;
  • FIG. 2h is a schematic diagram of a display interface after the first terminal moves according to an embodiment of the present invention;
  • FIG. 2i is a schematic diagram of a display interface of a first terminal displaying a scene image interface according to an embodiment of the present invention;
  • FIG. 2j is a schematic diagram of a display interface of a second terminal displaying a scene image interface according to an embodiment of the present invention;
  • FIG. 2k is a schematic diagram of a display interface after the second terminal generates prompt information on FIG. 2j according to an embodiment of the present invention;
  • FIG. 2l is a schematic diagram of a display interface after the first terminal receives the prompt information according to an embodiment of the present invention;
  • FIG. 2m is a schematic diagram of a display interface of a first terminal displaying a scene image interface according to an embodiment of the present invention;
  • FIG. 2n is a schematic diagram of a display interface of a second terminal displaying a scene image interface according to an embodiment of the present invention;
  • FIG. 2o is a schematic diagram of a display interface after the second terminal generates prompt information on FIG. 2n according to an embodiment of the present invention;
  • FIG. 2p is a schematic diagram of a display interface after the first terminal receives the prompt information according to an embodiment of the present invention;
  • FIG. 2q is a schematic diagram of a display interface according to an embodiment of the present invention;
  • FIG. 2r is a schematic diagram of another display interface according to an embodiment of the present invention;
  • FIG. 2s is a schematic diagram of another display interface according to an embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of a navigation assisting terminal based on scene sharing according to an embodiment of the present disclosure
  • FIG. 4 is a schematic structural diagram of another navigation assisting terminal based on scene sharing according to an embodiment of the present disclosure
  • FIG. 5 is a schematic structural diagram of another navigation assisting terminal based on scene sharing according to an embodiment of the present disclosure
  • FIG. 6 is a schematic structural diagram of another navigation assisting terminal based on scene sharing according to an embodiment of the present invention.
  • FIG. 1 is a schematic diagram showing a system architecture applicable to an embodiment of the present invention.
  • a system architecture applicable to an embodiment of the present invention includes a first terminal 101 and a second terminal 102.
  • the first terminal 101 can establish a connection with the second terminal 102.
  • The first terminal 101 can share the scene image interface with the second terminal 102, and the second terminal 102 can generate prompt information directly on the shared scene image interface and send the prompt information to the first terminal 101.
  • the first terminal 101 can transmit instant text information and/or audio information to and from the second terminal 102.
  • The scenarios applicable to the embodiments of the present invention include multiple types; for example, the target point may be a specific location or a target object.
  • For example, the target point is a specific location such as mall A. At this time, the first terminal shares the scene image interface of the scene in which the first terminal is currently located with the second terminal, so the second terminal can clearly and accurately determine the current location of the first terminal and then give the first terminal accurate prompt information so that the first terminal can reach mall A. In this case the prompt information prompts a specific path to the target point. The prompt information may be marker information, such as a line with an arrow drawn along the specific path to the target point, or a circle drawn on a landmark, such as a landmark building, on the specific path to the target point. For example, the scene in front of the first terminal is an intersection, and one of the roads has landmark building A on it; the second terminal can draw a circle on landmark building A, and the drawn circle is the marker information, which tells the first terminal that it should take the road on which landmark building A is located.
  • For another example, the target point is a device to be repaired by a maintenance employee. When the first terminal arrives at the location of the device to be repaired, many similar devices are found there. At this time, the first terminal shares the scene image interface of the scene in which the first terminal is currently located with the second terminal, and based on the shared scene image interface the second terminal can accurately indicate to the first terminal which device is the device to be repaired. That is, the prompt information sent by the second terminal to the first terminal prompts the location of the target point. The prompt information may be marker information indicating the location of the target point, such as a circle drawn on the target point; or text information, such as the text "the second device on the scene image interface is the target point"; or audio information, such as the audio "the second device on the scene image interface is the target point".
  • For another example, the second terminal sends a task to the first terminal: after the first terminal reaches a location from which the target point can be attacked, the second terminal sends prompt information to the first terminal by using the method provided by the embodiment of the present invention, and the prompt information can clearly indicate to the first terminal which target is the target point to be attacked by the first terminal.
  • The terms “first” and “second” in “first terminal” and “second terminal” in the embodiments of the present invention are only used to distinguish different terminals and do not constitute a limitation.
  • the first terminal in the embodiment of the present invention may be any one of the terminals, and the second terminal may be any one of the terminals.
  • The terminal to which the present invention relates is a device that provides voice and/or data connectivity to a user, including a wireless terminal or a wired terminal. The wireless terminal may be a handheld device with a wireless connection function, or another processing device connected to a wireless modem, that is, a mobile terminal that communicates with one or more core networks via a radio access network.
  • the wireless terminal can be a mobile phone (or "cellular" phone) and a computer with a mobile terminal.
  • the wireless terminal can also be a portable, pocket, handheld, computer built-in or in-vehicle mobile device.
  • the wireless terminal can be part of a mobile station, an access point, or a user equipment (UE).
  • FIG. 2 is a schematic flowchart diagram of a navigation assistance method based on scene sharing according to an embodiment of the present invention.
  • a method for assisting navigation based on scene sharing includes the following steps:
  • Step 201: The first terminal shares the scene image interface of the scene where the first terminal is currently located to the second terminal;
  • Step 202: The first terminal receives the prompt information sent by the second terminal.
  • Step 203: The first terminal displays the prompt information on the scene image interface. The prompt information is used to prompt the location of the target point to be found by the first terminal or a specific path to the target point, and the prompt information is determined by the second terminal according to the shared scene image interface and the target point.
  • the prompt information may prompt a specific path to the target point.
  • The prompt information may be marker information, such as a line with an arrow indicating a specific path to the target point, or a circle drawn on a landmark, such as a landmark building, on the specific path to the target point.
  • The prompt information may also prompt the location of the target point. In that case the prompt information may be marker information indicating the location of the target point, such as a circle drawn on the target point; or text information, such as the text "the second device on the scene image interface is the target point"; or audio information, such as the audio "the second device on the scene image interface is the target point".
  • FIG. 2a is a schematic diagram showing a scene image interface provided by an embodiment of the present invention
  • FIG. 2b is a schematic diagram showing another scene image interface provided by an embodiment of the present invention.
  • Here the prompt information is information indicating a specific path to the target point, such as the marker information 2101 and the text information 2102. The prompt information may also indicate the specific path to the target point by marking a landmark: as shown in FIG. 2d, a circle 2104 is drawn on the building 2103, indicating that the specific path to the target point is the road on which the building 2103 is located.
  • the first terminal sends the help information to the second terminal, where the help information includes information of the target point to be searched by the first terminal.
  • the information of the target point may be an identifier of the target point, such as the commercial building A and the like.
  • Specifically, the first terminal may send the help information to the second terminal after sharing the scene image interface of the scene in which it is currently located, or may send the help information to the second terminal before sharing the scene image interface. Alternatively, the first terminal shares the scene image interface of the scene in which it is currently located with the second terminal, and the second terminal issues a task to the first terminal according to its own needs; that is, based on the shared scene image interface of the scene in which the first terminal is currently located, the second terminal generates prompt information and sends the prompt information to the first terminal.
  • Correspondingly, the second terminal receives the scene image interface, shared by the first terminal, of the scene in which the first terminal is currently located; the second terminal determines, according to the shared scene image interface, prompt information for prompting the location of the target point to be found by the first terminal or a specific path to the target point; and the second terminal sends the prompt information to the first terminal, so that the first terminal displays the prompt information on the scene image interface.
  • Optionally, before determining, according to the shared scene image interface, the prompt information for prompting the location of the target point to be found by the first terminal or a specific path to the target point, the second terminal receives the help information sent by the first terminal, where the help information includes information about the target point to be found by the first terminal; the second terminal then determines the prompt information according to the shared scene image interface and the information about the target point in the help information.
  • Alternatively, based on the shared scene image interface, the second terminal determines the target point by itself and sends the prompt information to the first terminal; for example, the second terminal issues to the first terminal a task of maintaining a certain target point according to the current actual situation.
  • In the embodiment of the present invention, the user of the first terminal can share the scene image interface of the scene in which the first terminal is currently located with the second terminal, so that the user of the first terminal can describe the scene in which he is located through the scene image interface. The second terminal can then determine the prompt information more accurately according to the shared scene image interface and the received help information. Further, because the first terminal displays the prompt information on the scene image interface, the user of the first terminal can understand the meaning of the prompt information more easily and accurately, and can therefore find the target point more quickly and conveniently through the prompt information.
  • There are multiple ways in which the second terminal can determine, according to the shared scene image interface, the prompt information for prompting the location of the target point to be found by the first terminal or a specific path to the target point. For example, the second terminal matches the shared scene image interface against preset image interfaces in a database, and determines a more accurate current location of the first terminal according to the matched preset image interface. The second terminal may further obtain, from the database, the prompt information between the target point and the location corresponding to the matched preset image interface, and send the prompt information to the first terminal.
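  • As an illustration of such a matching step, the sketch below compares a feature vector extracted from the shared scene image against feature vectors of preset image interfaces stored in a database, using cosine similarity. The feature extraction itself and all names here are assumptions for illustration, not details given by this publication.

```kotlin
import kotlin.math.sqrt

// A preset entry: a precomputed feature vector for a known location plus its stored description.
data class PresetImage(val features: DoubleArray, val locationName: String)

fun cosineSimilarity(a: DoubleArray, b: DoubleArray): Double {
    var dot = 0.0; var normA = 0.0; var normB = 0.0
    for (i in a.indices) { dot += a[i] * b[i]; normA += a[i] * a[i]; normB += b[i] * b[i] }
    return dot / (sqrt(normA) * sqrt(normB))
}

// Return the preset image interface whose features best match the shared scene image.
fun matchPreset(sharedSceneFeatures: DoubleArray, database: List<PresetImage>): PresetImage? =
    database.maxByOrNull { cosineSimilarity(sharedSceneFeatures, it.features) }
```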
  • Alternatively, the user of the second terminal determines, according to the shared scene image interface, the current location of the user of the first terminal, and then determines, according to the target point, the prompt information for prompting the location of the target point to be found by the first terminal or a specific path to the target point.
  • In this way, human assistance can be relied upon to further improve the accuracy of the prompt information, instead of relying solely on a local software system as in the prior art. In some cases, prompt information cannot be generated for the first terminal by relying on the local software system alone. For example, when the target point is a store inside a mall, the local software system usually cannot reflect the details inside the mall, so it cannot give the user prompt information on how to reach that store. For another example, when the first terminal needs to pass through a viaduct to reach the target point, the local software system in the prior art usually cannot clearly indicate which level of the viaduct the first terminal should take. In such cases, the method provided by the embodiment of the present invention can rely on human assistance to provide more accurate prompt information for the user when the local software system cannot.
  • the second terminal may modify the generated prompt information in real time to update the prompt information, and may also add some prompt information.
  • That is, the first terminal receives the updated prompt information sent by the second terminal, and the first terminal updates the prompt information displayed on the scene image interface by using the updated prompt information, where the updated prompt information is obtained by modifying the prompt information displayed on the scene image interface. Correspondingly, the second terminal modifies the prompt information to obtain the updated prompt information and sends the updated prompt information to the first terminal, so that the first terminal updates the prompt information displayed on the scene image interface by using the updated prompt information. In this way, real-time updating of the prompt information can be implemented, enabling the first terminal to find the target point more accurately and promptly.
  • Optionally, the first terminal sends a help request to the second terminal, and the first terminal receives a help-acceptance response returned by the second terminal; that is, the second terminal receives the help request sent by the first terminal and sends the help-acceptance response to the first terminal. The help-acceptance response is used to establish an interface-sharing connection between the first terminal and the second terminal.
  • Specifically, the first terminal initiates a help request to the second terminal, and the display interface of the second terminal displays the help request initiated by the first terminal together with two buttons, Accept and Reject. If the user of the second terminal selects the Accept button, the second terminal sends a help-acceptance response to the first terminal, and the first terminal establishes a connection with the second terminal.
  • After the connection is established, the first terminal may send the scene image interface of the scene in which the first terminal is currently located to the second terminal, and may also send text information and/or audio information to the second terminal. For example, the first terminal sends the audio message "I want to go to mall A, can you show me the way?" to the second terminal, or sends the text message "Should I turn left or right?".
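  • The help-request / help-acceptance handshake that precedes interface sharing can be sketched as a pair of messages and a simple accept decision on the second terminal. Everything below is illustrative only; the message fields are not a wire format defined by this publication.

```kotlin
// Hypothetical handshake messages exchanged before the interface-sharing connection is set up.
data class HelpRequest(val requesterId: String, val note: String?)   // e.g. "I want to go to mall A"
data class HelpResponse(val accepted: Boolean)

// Second-terminal side: the displayed request offers Accept and Reject buttons.
fun handleHelpRequest(request: HelpRequest, userTappedAccept: Boolean): HelpResponse {
    println("Help request from ${request.requesterId}: ${request.note ?: "(no note)"}")
    return HelpResponse(accepted = userTappedAccept)
}

// First-terminal side: only start sharing the scene image interface if help was accepted.
fun onHelpResponse(response: HelpResponse, startSharing: () -> Unit) {
    if (response.accepted) startSharing()
}
```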
  • Optionally, the scene image interface includes: an image interface or a video interface obtained by the camera device connected to the first terminal capturing the scene in which the first terminal is currently located; and/or a GPS map interface containing the location of the scene in which the first terminal is currently located. The camera device connected to the first terminal captures the scene in which the first terminal is currently located, and may capture either images or video. If images are captured, the captured image interface may be shared with the second terminal; if video is captured, the captured video interface may be shared with the second terminal. For example, the first terminal and the second terminal initiate a video call, and the interfaces of the first terminal and the second terminal both display the scene in which the first terminal is currently located.
  • FIG. 2c is a schematic diagram of a scene image interface provided by an embodiment of the present invention. The first terminal displays the image interface or video interface currently captured by the camera of the first terminal on the display interface of the first terminal, so the content displayed on the display interface of the first terminal is as shown in FIG. 2c. The first terminal shares the image interface or video interface currently captured by the camera of the first terminal with the second terminal, and the content displayed on the display interface of the second terminal is the same as the content shown in FIG. 2c.
  • FIG. 2d is a schematic diagram showing another scene image interface provided by an embodiment of the present invention.
  • The first terminal displays a GPS map interface containing the location of the scene in which the first terminal is currently located on the display interface of the first terminal, so the content displayed on the display interface of the first terminal is as shown in FIG. 2d. The first terminal shares this GPS map interface with the second terminal, and the content displayed on the display interface of the second terminal is consistent with the content shown in FIG. 2d.
  • Optionally, the prompt information includes at least one of the following: marker information for prompting the location of the target point or a specific path to the target point; text information for prompting the location of the target point or a specific path to the target point; and audio information for prompting the location of the target point or a specific path to the target point. The marker information may be used to prompt the location of the target point, such as a circle drawn on the target point, or to prompt a specific path to the target point, such as a line with an arrow drawn along the specific path to the target point, or a circle drawn on a landmark building on the specific path to the target point. Text information or audio information can likewise be used to indicate the location of the target point or to prompt a specific path to the target point.
  • FIG. 2e is a schematic diagram of another scene image interface provided by an embodiment of the present invention. When the scene image interface is an image interface or a video interface obtained by the camera device connected to the first terminal capturing the scene in which the first terminal is currently located, the second terminal adds the prompt information on the scene image interface 2301 shared by the first terminal, and the scene image interface displayed on the display interface of the second terminal after the prompt information has been added is consistent with the content shown in FIG. 2e. The prompt information includes marker information 2302 for prompting the specific path to the target point, text information 2303 for prompting the specific path to the target point, and text information 2304 for prompting the specific path to the target point.
  • The marker information is marker information generated by a touch operation on the shared scene image interface; for example, the marker information 2302 is a curve with an arrow. The text information 2303 for prompting the specific path to the target point may be the word "left" written by touch on the scene image interface. The text information 2304 for prompting the specific path to the target point is "go left" in FIG. 2e; the "go left" may also be audio information. That is to say, in the embodiment of the present invention, the text information for prompting the specific path to the target point may be text information generated according to a touch track, or may be regular text information, such as "go left" typed in an instant message box. The text information 2304 is located in an instant message dialog box, and the dialog box can be used to transmit the text information and audio information exchanged between the first terminal and the second terminal.
  • FIG. 2f is a schematic diagram showing another scene image interface provided by an embodiment of the present invention.
  • the scene image interface is a GPS map interface including a location of a scene where the first terminal is currently located.
  • The second terminal adds the prompt information on the scene image interface 2401 shared by the first terminal, and the scene image interface displayed on the display interface of the second terminal after the prompt information has been added is consistent with the content shown in FIG. 2f. As shown in FIG. 2f, the prompt information includes marker information 2402 for prompting the specific path to the target point. The marker information is marker information generated by a touch operation on the shared scene image interface; for example, the marker information 2402 is a curve with an arrow.
  • The text information 2403 for prompting the specific path to the target point is "go left" in FIG. 2f; the "go left" may also be audio information. The text information 2403 is located in an instant message dialog box, and the dialog box is used to transmit the text information and audio information exchanged between the first terminal and the second terminal.
  • Optionally, the second terminal may add the prompt information directly on the dynamic video interface.
  • Another optional implementation is as follows: when the scene image interface is a video interface obtained by the camera device connected to the first terminal capturing the scene in which the first terminal is currently located, and the second terminal receives a first operation instruction, the video interface displayed by the second terminal is locked into a still picture; the second terminal displays, on the still picture, the received touch track for prompting the location of the target point to be found by the first terminal or the specific path to the target point; and the second terminal sets the touch track as the marker information, and/or generates text information of the specific path according to the touch track, and restores the locked still picture to the video interface shared by the first terminal. Optionally, the first operation instruction is a double-click or a click on the video interface displayed by the second terminal, which switches between the still picture and the video interface. When the video interface is locked into a still picture, the second terminal can generate the prompt information on its display interface more clearly and conveniently.
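  • The lock-annotate-restore sequence can be summarised as a small state machine: the first operation instruction freezes the last video frame, touch points collected while frozen become the touch track, and the track is turned into marker information before the live video resumes. The sketch below is illustrative only; the class and function names are assumptions, not an implementation prescribed by this publication.

```kotlin
// State of the video interface on the second terminal while prompt information is being drawn.
class AnnotationSession {
    private var frozenFrame: ByteArray? = null                    // still picture locked from the video
    private val touchTrack = mutableListOf<Pair<Float, Float>>()  // points drawn on the still picture

    // First operation instruction (e.g. a double-click): lock the current frame as a still picture.
    fun lock(currentFrame: ByteArray) {
        frozenFrame = currentFrame
        touchTrack.clear()
    }

    // While locked, every touch point is appended to the touch track shown on the still picture.
    fun onTouch(x: Float, y: Float) {
        if (frozenFrame != null) touchTrack.add(x to y)
    }

    // Set the touch track as marker information and restore the live video interface.
    fun unlockAndCommit(): List<Pair<Float, Float>> {
        val marker = touchTrack.toList()
        frozenFrame = null
        return marker   // sent to the first terminal as marker information
    }
}
```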
  • Optionally, the marker information is any one or a combination of the following: curve marker information, line marker information, line marker information with an arrow, and closed-figure marker information.
  • For example, the marker information added on the image interface or the video interface is a circle, that is, closed-figure marker information, and the area inside the circle can represent the target point; curve marker information, line marker information, and line marker information with an arrow can represent a specific path. Clicking a location on the display interface can also indicate that the location is the location of the target point or the direction of the target point. Double-clicking a location on the display interface locks the video interface into a still picture, or restores a locked still picture to the video interface.
  • When the scene image interface includes a GPS map interface containing the location of the scene in which the first terminal is currently located, the marker information added on the GPS map interface may likewise be a circle, that is, closed-figure marker information, and the area inside the circle can represent the target point; curve marker information, line marker information, and line marker information with an arrow can represent a specific path. Clicking a location on the display interface indicates that the location is the location of the target point. Double-clicking on the GPS map interface can calculate the path length from the location, on the GPS map interface, of the scene in which the first terminal is currently located to the target point. Zooming and panning of the map can be achieved by a two-finger pinch or spread touch gesture on the GPS map interface.
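  • For the path-length computation triggered on the GPS map interface, one plausible approach (an assumption, not spelled out in the text) is to sum great-circle distances along the waypoints of the drawn path from the first terminal's current location to the target point. A minimal sketch using the haversine formula:

```kotlin
import kotlin.math.asin
import kotlin.math.cos
import kotlin.math.pow
import kotlin.math.sin
import kotlin.math.sqrt

const val EARTH_RADIUS_M = 6_371_000.0

// Great-circle distance between two latitude/longitude points, in metres (haversine formula).
fun haversine(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double {
    val dLat = Math.toRadians(lat2 - lat1)
    val dLon = Math.toRadians(lon2 - lon1)
    val a = sin(dLat / 2).pow(2) +
            cos(Math.toRadians(lat1)) * cos(Math.toRadians(lat2)) * sin(dLon / 2).pow(2)
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))
}

// Total length of a path given as (latitude, longitude) waypoints, current location first, target last.
fun pathLength(waypoints: List<Pair<Double, Double>>): Double =
    waypoints.zipWithNext().sumOf { (p, q) -> haversine(p.first, p.second, q.first, q.second) }
```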
  • Optionally, the first terminal receives location information, sent by the second terminal, of the marker information in the scene image interface, and the first terminal displays the marker information at the corresponding position of the scene image interface displayed by the first terminal according to the received location information. That is, the second terminal sends the location information of the marker information in the scene image interface to the first terminal, where the location information is used to enable the first terminal to display the marker information at the corresponding position of the scene image interface displayed by the first terminal according to the received location information.
  • The marker information needs to be displayed at the correct position and to move as the position of the first terminal moves. For example, the second terminal generates marker information on its display interface, such as the marker information 2302 shown in FIG. 2e; it can be seen that the marker information 2302 is a curve with an arrow and that the marker information 2302 is located on the leftmost road. Besides the marker information itself, the first terminal therefore also needs to receive the location information of the marker information in the scene image interface sent by the second terminal, and the first terminal then displays the marker information at the corresponding position of the scene image interface displayed by the first terminal according to the received location information. That is, the first terminal finally needs to display the marker information 2302 on the leftmost road on the display interface of the first terminal, so that the user of the first terminal can intuitively understand which way he should go.
  • When the text information 2303 for prompting the specific path to the target point is generated according to a touch trajectory, the first terminal also receives the position information, in the scene image interface, of the text information generated according to the touch trajectory for prompting the specific path to the target point; the first terminal displays, according to the received position information, the text information generated according to the touch trajectory for prompting the specific path to the target point at the corresponding position of the scene image interface displayed by the first terminal. That is to say, the first terminal also displays the character "left" of the text information 2303 for prompting the specific path to the target point on the leftmost road of the display interface of the first terminal.
  • At this time, the display interface of the second terminal is consistent with the content shown in FIG. 2e. After the first terminal receives the prompt information, the image displayed on the display interface of the first terminal is also consistent with the content shown in FIG. 2e.
  • After the second terminal generates the prompt information and sends the prompt information to the first terminal, the first terminal usually moves according to the prompt information. In this case, the prompt information also needs to move along with the movement of the first terminal.
  • For example, when the shared scene image interface is a video interface captured by the camera device connected to the first terminal and the prompt information is a circle located on building A, the video interface captured by the camera device connected to the first terminal changes as the first terminal moves, and building A also moves within the captured video interface. In this embodiment of the present invention, the circle on building A moves accordingly, so that the circle is always on building A.
  • Making the prompt information move along with the movement of the first terminal can be implemented in multiple manners, for example by an image target tracking algorithm: from the perspective of image recognition, the image content under the prompt information on building A is captured, and the prompt information is then kept on building A in each recognized image.
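  • One way to keep the circle anchored on building A, as mentioned above, is template matching: a small patch around the marked position is stored and searched for in each new frame, and the marker is moved to the best match. The following is a simplified sum-of-squared-differences sketch over grayscale frames; a production tracker (optical flow, correlation filters, and so on) would be considerably more robust.

    // Minimal template-matching step for marker tracking.
    // frame and template are row-major grayscale images (0..255 values).
    public final class PatchTracker {

        /** Returns {bestX, bestY}: top-left corner of the best template match in the frame. */
        public static int[] track(int[] frame, int frameW, int frameH,
                                  int[] template, int tplW, int tplH) {
            long bestCost = Long.MAX_VALUE;
            int bestX = 0, bestY = 0;
            for (int y = 0; y + tplH <= frameH; y++) {
                for (int x = 0; x + tplW <= frameW; x++) {
                    long cost = 0;
                    for (int ty = 0; ty < tplH && cost < bestCost; ty++) {
                        for (int tx = 0; tx < tplW; tx++) {
                            int diff = frame[(y + ty) * frameW + (x + tx)] - template[ty * tplW + tx];
                            cost += (long) diff * diff;   // sum of squared differences
                        }
                    }
                    if (cost < bestCost) {
                        bestCost = cost;
                        bestX = x;
                        bestY = y;
                    }
                }
            }
            return new int[] {bestX, bestY};   // move the marker (e.g. the circle) to this position
        }
    }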
  • In another manner, the first terminal acquires first movement data describing the movement of the camera device connected to the first terminal, and the first terminal converts the first movement data into second movement data for moving the marker information. The first terminal then moves the marker information displayed on the scene image interface according to the converted second movement data, so that the moved marker information matches the scene image interface captured by the moved camera device.
  • The first terminal displays the marker information at the corresponding position on a layer parallel to the display plane of the scene image interface.
  • The first terminal acquires, by using an acceleration sensor and a gyroscope sensor, the first movement data describing the movement of the camera device connected to the first terminal.
  • Similarly, the second terminal may acquire the first movement data describing the movement of the camera device connected to the first terminal; the second terminal converts the first movement data into second movement data for moving the marker information; and the second terminal moves the marker information displayed on the scene image interface according to the second movement data, so that the moved marker information matches the scene image interface captured by the moved camera device.
  • The second terminal displays the marker information on a layer parallel to the display plane of the scene image interface.
  • The first movement data is data acquired by the first terminal through the acceleration sensor and the gyroscope sensor of the first terminal.
  • FIG. 2g exemplarily shows a schematic diagram of displaying marker information on a layer parallel to the display plane of the scene image interface in an embodiment of the present invention. As shown in FIG. 2g, by using Open Graphics Library (OpenGL) technology, a layer 2501 is created at a position parallel to the scene image interface 2301, and the marker information 2302 is displayed on the layer 2501. The text information 2303 generated from the touch trajectory is also displayed on the layer 2501.
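  • A minimal sketch of the layer idea described for FIG. 2g, assuming an Android GLSurfaceView stacked transparently above the camera preview; the renderer body is left as a stub, and only the configuration needed for a see-through OpenGL layer is shown.

    import android.graphics.PixelFormat;
    import android.opengl.GLES20;
    import android.opengl.GLSurfaceView;
    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;

    // Configures a GLSurfaceView as a transparent layer parallel to the scene image
    // interface, on which marker information (e.g. marker 2302) can be drawn.
    public final class MarkerOverlay {

        public static void attach(GLSurfaceView overlay) {
            overlay.setEGLContextClientVersion(2);
            overlay.setEGLConfigChooser(8, 8, 8, 8, 16, 0);     // RGBA surface with alpha
            overlay.setZOrderOnTop(true);                        // draw above the video view
            overlay.getHolder().setFormat(PixelFormat.TRANSLUCENT);
            overlay.setRenderer(new GLSurfaceView.Renderer() {
                @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                    GLES20.glClearColor(0f, 0f, 0f, 0f);         // fully transparent background
                }
                @Override public void onSurfaceChanged(GL10 gl, int width, int height) {
                    GLES20.glViewport(0, 0, width, height);
                }
                @Override public void onDrawFrame(GL10 gl) {
                    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
                    // Drawing of the marker geometry (curve with arrow, text 2303, ...) would go here.
                }
            });
        }
    }

  • Keeping the markers on their own transparent layer means the camera frames underneath never need to be modified; only the overlay is redrawn when the marker moves.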
  • The first terminal acquires, by using the acceleration sensor and the gyroscope sensor, the first movement data describing the movement of the camera device connected to the first terminal, and converts the first movement data into the second movement data for moving the marker information. Specifically, when the acceleration sensor detects that the first terminal moves along the X axis, the marker information needs to be moved left and right on the shared scene image interface; when the acceleration sensor detects that the first terminal moves along the Y axis, the marker information needs to be moved up and down on the shared scene image interface; and when the acceleration sensor detects that the first terminal moves along the Z axis, the marker information needs to be moved back and forth on the shared scene image interface.
  • When the gyroscope sensor detects that the first terminal rotates about the X axis, the marker information needs to be rotated about the X axis on the shared scene image interface; when the gyroscope sensor detects that the first terminal rotates about the Y axis, the marker information needs to be rotated about the Y axis on the shared scene image interface; and when the gyroscope sensor detects that the first terminal rotates about the Z axis, the marker information needs to be rotated about the Z axis on the shared scene image interface.
  • Low-pass filtering needs to be performed first on the first movement data collected by the gyroscope sensor and the acceleration sensor.
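  • The following is a compressed illustration, under assumptions, of the sensor handling described above: the raw accelerometer and gyroscope samples (the first movement data) are low-pass filtered and then mapped onto translations and rotations of the marker (the second movement data). The moveMarker/rotateMarker callbacks are hypothetical, and a real implementation would integrate acceleration over time rather than apply the raw values directly.

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    // Converts device movement (first movement data) into marker movement (second movement data).
    public class MarkerMotionMapper implements SensorEventListener {
        private static final float ALPHA = 0.15f;      // low-pass filter coefficient
        private final float[] accel = new float[3];    // filtered acceleration (X, Y, Z)
        private final float[] gyro = new float[3];     // filtered angular rate (X, Y, Z)

        public void register(SensorManager sm) {
            sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                    SensorManager.SENSOR_DELAY_GAME);
            sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_GYROSCOPE),
                    SensorManager.SENSOR_DELAY_GAME);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
                lowPass(event.values, accel);
                // X-axis movement -> marker moves left/right, Y-axis -> up/down, Z-axis -> back/forth.
                moveMarker(accel[0], accel[1], accel[2]);
            } else if (event.sensor.getType() == Sensor.TYPE_GYROSCOPE) {
                lowPass(event.values, gyro);
                // Rotation about X/Y/Z -> rotate the marker about the same axis.
                rotateMarker(gyro[0], gyro[1], gyro[2]);
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }

        // Simple exponential low-pass filter applied to the raw first movement data.
        private static void lowPass(float[] input, float[] output) {
            for (int i = 0; i < 3; i++) {
                output[i] = output[i] + ALPHA * (input[i] - output[i]);
            }
        }

        private void moveMarker(float dx, float dy, float dz) { /* update marker translation */ }

        private void rotateMarker(float rx, float ry, float rz) { /* update marker rotation */ }
    }

  • In practice the filtered values would be scaled to screen coordinates before being applied to the layer on which the marker information is drawn.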
  • FIG. 2h exemplarily shows a schematic diagram of the display interface after the first terminal has moved in an embodiment of the present invention. It can be seen that the marker information 2302 on the display interface of the first terminal has moved downward, and the content displayed on the display interface of the first terminal and on the display interface of the second terminal is consistent with the content shown in FIG. 2h. In this way, the user of the first terminal can reach the target point more clearly and accurately.
  • FIG. 2i exemplarily shows a schematic diagram of a display interface of the first terminal displaying the scene image interface in an embodiment of the present invention; FIG. 2j exemplarily shows a schematic diagram of a display interface of the second terminal displaying the scene image interface in an embodiment of the present invention; FIG. 2k exemplarily shows a schematic diagram of the display interface after the second terminal generates the prompt information on FIG. 2j in an embodiment of the present invention; and FIG. 2l exemplarily shows a schematic diagram of the display interface after the first terminal receives the prompt information in an embodiment of the present invention.
  • As shown in FIG. 2i, the display interface of the first terminal displays the scene image interface of the scene where the first terminal is currently located, and the first terminal shares this scene image interface with the second terminal, so that the display interface of the second terminal is as shown in FIG. 2j. After the second terminal generates the prompt information as shown in FIG. 2k and sends it to the first terminal, the first terminal displays the prompt information on the scene image interface; a schematic diagram of the display interface of the first terminal after this is shown in FIG. 2l.
  • Similarly, FIG. 2m exemplarily shows a schematic diagram of a display interface of the first terminal displaying the scene image interface in an embodiment of the present invention; FIG. 2n exemplarily shows a schematic diagram of a display interface of the second terminal displaying the scene image interface in an embodiment of the present invention; FIG. 2o exemplarily shows a schematic diagram of the display interface after the second terminal generates the prompt information on FIG. 2n in an embodiment of the present invention; and FIG. 2p exemplarily shows a schematic diagram of the display interface after the first terminal receives the prompt information in an embodiment of the present invention.
  • As shown in FIG. 2m, the display interface of the first terminal displays the scene image interface of the scene where the first terminal is currently located, and the first terminal shares this scene image interface with the second terminal, so that the display interface of the second terminal is as shown in FIG. 2n. After the second terminal generates the prompt information as shown in FIG. 2o and sends it to the first terminal, the first terminal displays the prompt information on the scene image interface; a schematic diagram of the display interface of the first terminal after this is shown in FIG. 2p.
  • When the scene image interface includes both an image interface or a video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located and a GPS map interface containing the location of the scene where the first terminal is currently located, the display interface on which the first terminal displays the scene image interface includes a first area and a second area. The first area is used to display the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located, and the second area is used to display the GPS map interface containing the location of the scene where the first terminal is currently located or does not display content; or the first area is used to display the GPS map interface containing the location of the scene where the first terminal is currently located, and the second area is used to display the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located or does not display content.
  • When the displayed scene image interface is touched, the first terminal switches the content displayed in the first area and the second area.
  • Similarly, when the scene image interface includes both an image interface or a video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located and a GPS map interface containing the location of the scene where the first terminal is currently located, the display interface on which the second terminal displays the scene image interface includes a first area and a second area. The first area is used to display the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located, and the second area is used to display the GPS map interface containing the location of the scene where the first terminal is currently located or does not display content; or the first area is used to display the GPS map interface containing the location of the scene where the first terminal is currently located, and the second area is used to display the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located or does not display content.
  • When the displayed scene image interface is touched, the second terminal switches the content displayed in the first area and the second area.
  • That the second area of the first terminal and/or the second terminal does not display content may take multiple implementation forms, for example, the second area is a button, or the second area is a special area. The display interface of the first terminal and/or the second terminal may also not include the second area at all, that is, only the first area is displayed. Even when the second area does not display any content, switching of the content displayed in the first area can still be realized. For example, the first area displayed on the display interface of the first terminal is used to display the GPS map interface containing the location of the scene where the first terminal is currently located; at this time, when the first area is double-clicked, the content displayed in the first area is switched to the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located.
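  • A minimal sketch of the area switching described above, assuming the first and second areas are two Android view containers whose children (camera preview and GPS map) are simply swapped; the container and view names are illustrative only.

    import android.view.View;
    import android.view.ViewGroup;

    // Swaps the content of the first area and the second area, e.g. camera view <-> GPS map view.
    public final class AreaSwitcher {

        public static void swap(ViewGroup firstArea, ViewGroup secondArea) {
            View first = firstArea.getChildAt(0);     // e.g. the camera video view
            View second = secondArea.getChildAt(0);   // e.g. the GPS map view (may be null if empty)

            firstArea.removeAllViews();
            secondArea.removeAllViews();

            if (second != null) {
                firstArea.addView(second);
            }
            if (first != null) {
                secondArea.addView(first);
            }
        }
    }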
  • FIG. 2q exemplarily shows a schematic diagram of a display interface provided by an embodiment of the present invention, and FIG. 2r exemplarily shows a schematic diagram of another display interface provided by an embodiment of the present invention. As shown in FIG. 2q, the display interface of the first terminal includes a first area 21501 and a second area 21502. The first area of the first terminal displays the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located, and the second area of the first terminal is used to display the GPS map interface containing the location of the scene where the first terminal is currently located.
  • After switching, the display interface of the first terminal is as shown in FIG. 2r: the second area of the first terminal displays the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located, and the first area of the first terminal is used to display the GPS map interface containing the location of the scene where the first terminal is currently located.
  • FIG. 2s exemplarily shows a schematic diagram of a display interface provided by an embodiment of the present invention. As shown in FIG. 2s, the display interface of the first terminal or the second terminal can simultaneously display the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located, the GPS map interface containing the location of the scene where the first terminal is currently located, and an instant dialog box used for a real-time conversation with the second terminal.
  • From the time the first terminal establishes a connection with the second terminal until the navigation by the second terminal ends, the first terminal may receive the prompt information sent by the second terminal multiple times, and the second terminal can also modify the prompt information at any time.
  • In the embodiment of the present invention, the first terminal shares the scene image interface of the scene where the first terminal is currently located with the second terminal; the first terminal receives the prompt information sent by the second terminal; and the first terminal displays the prompt information on the scene image interface. The prompt information is used to prompt the location of the target point to be found by the first terminal or the specific path to the target point, and the prompt information is determined by the second terminal according to the shared scene image interface and the target point. Since the first terminal shares the scene image interface of the scene where the first terminal is currently located with the second terminal, the user of the first terminal can describe the scene where the user is located more accurately through the scene image interface, and the second terminal can then determine the prompt information more accurately according to the shared scene image interface; further, because the first terminal displays the prompt information on the scene image interface, the user of the first terminal can determine the meaning of the prompt information more simply and accurately, and can thus find the target point more quickly and conveniently through the prompt information.
  • FIG. 3 exemplarily shows a schematic structural diagram of a navigation assistance terminal based on scene sharing according to an embodiment of the present invention. Based on the same concept, an embodiment of the present invention provides a navigation assistance terminal based on scene sharing. As shown in FIG. 3, the navigation assistance terminal 300 based on scene sharing includes a sending unit 301, a processing unit 302, and a receiving unit 303.
  • The sending unit 301 is configured to share the scene image interface of the scene where the first terminal is currently located with the second terminal; the receiving unit 303 is configured to receive the prompt information sent by the second terminal; and the processing unit 302 is configured to display the prompt information on the scene image interface. The prompt information is used to prompt the location of the target point to be found by the first terminal or the specific path to the target point, and the prompt information is determined by the second terminal according to the shared scene image interface and the target point.
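  • Purely for orientation, the unit structure above could be modelled as in the sketch below; the interface and method names are illustrative and not part of the disclosure.

    // Illustrative decomposition of navigation assistance terminal 300 into its units.
    public interface SceneSharingTerminal {

        interface SendingUnit {          // corresponds to sending unit 301
            void shareSceneImageInterface(byte[] sceneFrame);
            void sendHelpInformation(String targetPointInfo);
        }

        interface ReceivingUnit {        // corresponds to receiving unit 303
            String receivePromptInformation();
        }

        interface ProcessingUnit {       // corresponds to processing unit 302
            void displayPromptOnSceneImageInterface(String promptInformation);
        }
    }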
  • the sending unit 301 is further configured to send the help information to the second terminal before receiving the prompt information sent by the second terminal, where the help information includes information of a target point to be searched by the first terminal.
  • The scene image interface includes an image interface or a video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located; and/or a GPS map interface containing the location of the scene where the first terminal is currently located.
  • The prompt information includes at least one of the following: marker information for prompting the location of the target point or prompting the specific path to the target point; text information for prompting the location of the target point or prompting the specific path to the target point; and audio information for prompting the location of the target point or prompting the specific path to the target point.
  • When the prompt information is marker information for prompting the location of the target point or prompting the specific path to the target point, the receiving unit 303 is further configured to receive the location information, sent by the second terminal, of the marker information in the scene image interface; and the processing unit 302 is configured to display the marker information at the corresponding position of the scene image interface displayed by the first terminal according to the received location information.
  • The processing unit 302 is further configured to acquire the first movement data describing the movement of the camera device connected to the first terminal, convert the first movement data into second movement data for moving the marker information, and move, according to the converted second movement data, the marker information displayed on the scene image interface, so that the moved marker information matches the scene image interface captured by the moved camera device.
  • The processing unit 302 is configured to display the marker information at the corresponding position on a layer parallel to the display plane of the scene image interface.
  • The processing unit 302 is configured to acquire, by using the acceleration sensor and the gyroscope sensor of the first terminal, the first movement data describing the movement of the camera device connected to the first terminal.
  • the tag information is any one or a combination of any of the following: curve tag information, line tag information, line tag information with an arrow, and closed pattern tag information.
  • When the scene image interface includes both an image interface or a video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located and a GPS map interface containing the location of the scene where the first terminal is currently located, the display interface on which the processing unit 302 displays the scene image interface includes a first area and a second area. The first area is used to display the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located, and the second area is used to display the GPS map interface containing the location of the scene where the first terminal is currently located or does not display content; or the first area is used to display the GPS map interface containing the location of the scene where the first terminal is currently located, and the second area is used to display the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located or does not display content.
  • The processing unit 302 is further configured to switch the content displayed in the first area and the second area when the displayed scene image interface is touched.
  • The sending unit 301 is further configured to send a help request to the second terminal; the receiving unit 303 is further configured to receive an accepting help response returned by the second terminal, where the accepting help response is used to establish an interface sharing connection between the first terminal and the second terminal.
  • The receiving unit 303 is further configured to receive updated prompt information sent by the second terminal; the processing unit 302 is further configured to update, by using the updated prompt information, the prompt information displayed on the scene image interface.
  • In the embodiment of the present invention, the first terminal shares the scene image interface of the scene where the first terminal is currently located with the second terminal; the first terminal receives the prompt information sent by the second terminal; and the first terminal displays the prompt information on the scene image interface. The prompt information is used to prompt the location of the target point to be found by the first terminal or the specific path to the target point, and the prompt information is determined by the second terminal according to the shared scene image interface and the target point. In this way, the user of the first terminal can describe the scene where the user is located more accurately through the scene image interface, and the second terminal can then determine the prompt information more accurately according to the shared scene image interface and the received help information; further, because the first terminal displays the prompt information on the scene image interface, the user of the first terminal can determine the meaning of the prompt information more simply and accurately, and can thus find the target point more quickly and conveniently through the prompt information.
  • FIG. 4 exemplarily shows a schematic structural diagram of another navigation assistance terminal based on scene sharing according to an embodiment of the present invention. Based on the same concept, an embodiment of the present invention provides another navigation assistance terminal based on scene sharing. As shown in FIG. 4, the navigation assistance terminal 400 based on scene sharing includes a sending unit 401, a processing unit 402, and a receiving unit 403.
  • The receiving unit 403 is configured to receive the scene image interface, shared by the first terminal, of the scene where the first terminal is currently located; the processing unit 402 is configured to determine, according to the shared scene image interface, prompt information for prompting the location of the target point to be found by the first terminal or prompting the specific path to the target point; and the sending unit 401 is configured to send the prompt information to the first terminal, so that the first terminal displays the prompt information on the scene image interface.
  • The receiving unit 403 is further configured to receive the help information sent by the first terminal, where the help information includes the information of the target point to be found by the first terminal; and the processing unit 402 is configured to determine, according to the shared scene image interface and the information of the target point in the help information, the prompt information for prompting the location of the target point to be found by the first terminal or prompting the specific path to the target point.
  • The scene image interface includes: an image interface or a video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located; and/or a GPS map interface containing the location of the scene where the first terminal is currently located.
  • The prompt information includes at least one of the following: marker information for prompting the location of the target point or prompting the specific path to the target point; text information for prompting the location of the target point or prompting the specific path to the target point; and audio information for prompting the location of the target point or prompting the specific path to the target point.
  • The sending unit 401 is further configured to send the location information of the marker information in the scene image interface to the first terminal, where the location information is used to enable the first terminal to display the marker information at the corresponding position of the scene image interface displayed by the first terminal according to the received location information.
  • When the scene image interface is a video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located, the processing unit 402 is configured to: lock the video interface displayed by the second terminal into a static picture when the first operation instruction is received; display, on the static picture, the received touch trajectory for prompting the location of the target point to be found by the first terminal or prompting the specific path to the target point; set the touch trajectory as marker information, and/or generate text information of the specific path according to the touch trajectory; and restore the locked static picture to the video interface shared by the first terminal.
  • the first operation instruction is that the video interface displayed by the second terminal is double-clicked or clicked.
  • The receiving unit 403 is further configured to acquire the first movement data describing the movement of the camera device connected to the first terminal; the processing unit 402 is further configured to convert the first movement data into second movement data for moving the marker information, and move, according to the converted second movement data, the marker information displayed on the scene image interface, so that the moved marker information matches the scene image interface captured by the moved camera device.
  • the processing unit 402 is configured to: display the marker information on the layer parallel to the display plane of the scene image interface.
  • the first movement data is data acquired by the first terminal through the acceleration sensor of the first terminal and the gyro sensor.
  • the tag information is any one or a combination of any of the following: curve tag information, line tag information, line tag information with an arrow, and closed pattern tag information.
  • When the scene image interface includes both an image interface or a video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located and a GPS map interface containing the location of the scene where the first terminal is currently located, the display interface on which the processing unit 402 displays the scene image interface includes a first area and a second area. The first area is used to display the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located, and the second area is used to display the GPS map interface containing the location of the scene where the first terminal is currently located or does not display content; or the first area is used to display the GPS map interface containing the location of the scene where the first terminal is currently located, and the second area is used to display the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located or does not display content.
  • The processing unit 402 is further configured to switch the content displayed in the first area and the second area when the displayed scene image interface is touched.
  • The receiving unit 403 is further configured to receive a help request sent by the first terminal; the sending unit 401 is further configured to send an accepting help response to the first terminal, where the accepting help response is used to establish an interface sharing connection between the first terminal and the second terminal.
  • The processing unit 402 is further configured to modify the prompt information to obtain updated prompt information; the sending unit 401 is configured to send the updated prompt information to the first terminal, so that the first terminal updates, by using the updated prompt information, the prompt information displayed on the scene image interface.
  • In the embodiment of the present invention, the first terminal shares the scene image interface of the scene where the first terminal is currently located with the second terminal; the first terminal receives the prompt information sent by the second terminal; and the first terminal displays the prompt information on the scene image interface. The prompt information is used to prompt the location of the target point to be found by the first terminal or the specific path to the target point, and the prompt information is determined by the second terminal according to the shared scene image interface and the target point. In this way, the user of the first terminal can describe the scene where the user is located more accurately through the scene image interface, and the second terminal can then determine the prompt information more accurately according to the shared scene image interface and the received help information; further, because the first terminal displays the prompt information on the scene image interface, the user of the first terminal can determine the meaning of the prompt information more simply and accurately, and can thus find the target point more quickly and conveniently through the prompt information.
  • FIG. 5 exemplarily shows a schematic structural diagram of another navigation assistance terminal based on scene sharing according to an embodiment of the present invention. Based on the same concept, an embodiment of the present invention provides another navigation assistance terminal based on scene sharing. As shown in FIG. 5, the navigation assistance terminal 500 based on scene sharing includes a processor 501, a transmitter 503, a receiver 504, and a memory 502.
  • The processor 501 is configured to read a program in the memory and perform the following process: sharing, through the transmitter 503, the scene image interface of the scene where the first terminal is currently located with the second terminal; receiving, through the receiver 504, the prompt information sent by the second terminal; and displaying the prompt information on the scene image interface. The prompt information is used to prompt the location of the target point to be found by the first terminal or the specific path to the target point, and the prompt information is determined by the second terminal according to the shared scene image interface and the target point.
  • the transmitter 503 is further configured to: before receiving the prompt information sent by the second terminal, send the help information to the second terminal, where the help information includes information of a target point to be searched by the first terminal.
  • the scene image interface includes: an image interface or a video interface obtained by the camera device connected to the first terminal to capture a scene currently located by the first terminal; and/or a location including the current scene of the first terminal. GPS map interface.
  • The prompt information includes at least one of the following: marker information for prompting the location of the target point or prompting the specific path to the target point; text information for prompting the location of the target point or prompting the specific path to the target point; and audio information for prompting the location of the target point or prompting the specific path to the target point.
  • The receiver 504 is further configured to receive the location information, sent by the second terminal, of the marker information in the scene image interface; the processor 501 is configured to display the marker information at the corresponding position of the scene image interface displayed by the first terminal according to the received location information.
  • The processor 501 is further configured to: acquire the first movement data describing the movement of the camera device connected to the first terminal; convert the first movement data into second movement data for moving the marker information; and move, according to the second movement data, the marker information displayed on the scene image interface, so that the moved marker information matches the scene image interface captured by the moved camera device.
  • the processor 501 is configured to: display, by the first terminal, the tag information at a corresponding position on a layer parallel to a display plane of the scene image interface.
  • the processor 501 is configured to: acquire, by the first terminal, the first mobile data that is moved by the imaging device connected to the first terminal by using the acceleration sensor and the gyro sensor.
  • the tag information is any one or a combination of any of the following: curve tag information, line tag information, line tag information with an arrow, and closed pattern tag information.
  • When the scene image interface includes both an image interface or a video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located and a GPS map interface containing the location of the scene where the first terminal is currently located, the display interface on which the processor 501 displays the scene image interface includes a first area and a second area. The first area is used to display the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located, and the second area is used to display the GPS map interface containing the location of the scene where the first terminal is currently located or does not display content; or the first area is used to display the GPS map interface containing the location of the scene where the first terminal is currently located, and the second area is used to display the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located or does not display content.
  • The processor 501 is further configured to switch the content displayed in the first area and the second area when the displayed scene image interface is touched.
  • the sender 503 is further configured to: send a help request to the second terminal;
  • the receiver 504 is further configured to: receive an accepting help response returned by the second terminal, where the accepting help response is used to establish an interface sharing connection between the first terminal and the second terminal.
  • The receiver 504 is further configured to receive updated prompt information sent by the second terminal; the processor 501 is further configured to update, by using the updated prompt information, the prompt information displayed on the scene image interface, where the updated prompt information is obtained after the prompt information displayed on the scene image interface is modified.
  • In the embodiment of the present invention, the first terminal shares the scene image interface of the scene where the first terminal is currently located with the second terminal; the first terminal receives the prompt information sent by the second terminal; and the first terminal displays the prompt information on the scene image interface. The prompt information is used to prompt the location of the target point to be found by the first terminal or the specific path to the target point, and the prompt information is determined by the second terminal according to the shared scene image interface and the target point. In this way, the user of the first terminal can describe the scene where the user is located more accurately through the scene image interface, and the second terminal can then determine the prompt information more accurately according to the shared scene image interface and the received help information; further, because the first terminal displays the prompt information on the scene image interface, the user of the first terminal can determine the meaning of the prompt information more simply and accurately, and can thus find the target point more quickly and conveniently through the prompt information.
  • FIG. 6 exemplarily shows a schematic structural diagram of another navigation assistance terminal based on scene sharing according to an embodiment of the present invention. Based on the same concept, an embodiment of the present invention provides another navigation assistance terminal based on scene sharing. As shown in FIG. 6, the navigation assistance terminal 600 based on scene sharing includes a processor 601, a transmitter 603, a receiver 604, and a memory 602.
  • The processor 601 is configured to read a program in the memory and perform the following process: receiving, through the receiver 604, the scene image interface, shared by the first terminal, of the scene where the first terminal is currently located; determining, according to the shared scene image interface, prompt information for prompting the location of the target point to be found by the first terminal or prompting the specific path to the target point; and sending, through the transmitter 603, the prompt information to the first terminal, so that the first terminal displays the prompt information on the scene image interface.
  • The receiver 604 is further configured to receive the help information sent by the first terminal, where the help information includes the information of the target point to be found by the first terminal; and the processor 601 is configured to determine, according to the shared scene image interface and the information of the target point in the help information, the prompt information for prompting the location of the target point to be found by the first terminal or prompting the specific path to the target point.
  • the scene image interface includes: an image interface or a video interface obtained by the camera device connected to the first terminal to capture a scene currently located by the first terminal; and/or a location including the current scene of the first terminal. GPS map interface.
  • The prompt information includes at least one of the following: marker information for prompting the location of the target point or prompting the specific path to the target point; text information for prompting the location of the target point or prompting the specific path to the target point; and audio information for prompting the location of the target point or prompting the specific path to the target point.
  • The transmitter 603 is further configured to send the location information of the marker information in the scene image interface to the first terminal, where the location information is used to enable the first terminal to display the marker information at the corresponding position of the scene image interface displayed by the first terminal according to the received location information.
  • When the scene image interface is a video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located, the processor 601 is configured to: lock the video interface displayed by the second terminal into a static picture when the first operation instruction is received; display, on the static picture, the received touch trajectory for prompting the location of the target point to be found by the first terminal or prompting the specific path to the target point; set the touch trajectory as marker information, and/or generate text information of the specific path according to the touch trajectory; and restore the locked static picture to the video interface shared by the first terminal.
  • the first operation instruction is that the video interface displayed by the second terminal is double-clicked or clicked.
  • The receiver 604 is further configured to acquire the first movement data describing the movement of the camera device connected to the first terminal; the processor 601 is further configured to convert the first movement data into second movement data for moving the marker information, and move, according to the converted second movement data, the marker information displayed on the scene image interface, so that the moved marker information matches the scene image interface captured by the moved camera device.
  • the processor 601 is configured to: display the marker information on the layer parallel to the display plane of the scene image interface.
  • the first movement data is data acquired by the first terminal through the acceleration sensor of the first terminal and the gyro sensor.
  • the tag information is any one or a combination of any of the following: curve tag information, line tag information, line tag information with an arrow, and closed pattern tag information.
  • When the scene image interface includes both an image interface or a video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located and a GPS map interface containing the location of the scene where the first terminal is currently located, the display interface on which the processor 601 displays the scene image interface includes a first area and a second area. The first area is used to display the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located, and the second area is used to display the GPS map interface containing the location of the scene where the first terminal is currently located or does not display content; or the first area is used to display the GPS map interface containing the location of the scene where the first terminal is currently located, and the second area is used to display the image interface or video interface obtained by the camera device connected to the first terminal by photographing the scene where the first terminal is currently located or does not display content.
  • The processor 601 is further configured to switch the content displayed in the first area and the second area when the displayed scene image interface is touched.
  • the receiver 604 is further configured to: receive a help request sent by the first terminal;
  • the transmitter 603 is further configured to send an accepting assistance response to the first terminal, where the accepting response is used to establish an interface sharing connection between the first terminal and the second terminal.
  • The processor 601 is further configured to modify the prompt information to obtain updated prompt information; the transmitter 603 is configured to send the updated prompt information to the first terminal, so that the first terminal updates, by using the updated prompt information, the prompt information displayed on the scene image interface.
  • In the embodiment of the present invention, the first terminal shares the scene image interface of the scene where the first terminal is currently located with the second terminal; the first terminal receives the prompt information sent by the second terminal; and the first terminal displays the prompt information on the scene image interface. The prompt information is used to prompt the location of the target point to be found by the first terminal or the specific path to the target point, and the prompt information is determined by the second terminal according to the shared scene image interface and the target point. In this way, the user of the first terminal can describe the scene where the user is located more accurately through the scene image interface, and the second terminal can then determine the prompt information more accurately according to the shared scene image interface and the received help information; further, because the first terminal displays the prompt information on the scene image interface, the user of the first terminal can determine the meaning of the prompt information more simply and accurately, and can thus find the target point more quickly and conveniently through the prompt information.
  • The embodiments of the present invention may be provided as a method or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory, and the like) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction apparatus, where the instruction apparatus implements the functions specified in one or more procedures of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more procedures of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)
  • User Interface Of Digital Computer (AREA)
  • Instructional Devices (AREA)
  • Studio Devices (AREA)

Abstract

A navigation assistance method and terminal based on scene sharing, used to enable a help seeker to describe, in a simple manner, the scene where the help seeker is located more accurately, so that an assistant can give more accurate prompt information for helping the help seeker reach a target point. A first terminal shares a scene image interface of the scene where the first terminal is currently located with a second terminal (201); the first terminal receives prompt information sent by the second terminal (202); and the first terminal displays the prompt information, which is used to prompt the location of the target point, on the scene image interface (203). In this way, the user of the first terminal can describe the scene where the user is located more accurately through the scene image interface, and the second terminal can then determine the prompt information more accurately.

Description

一种基于场景共享的导航协助方法及终端
本申请要求在2016年01月28日提交中国专利局、申请号为201610058850.6、发明名称为“一种基于场景共享的导航协助方法及终端”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明实施例涉及通信领域,尤其涉及一种基于场景共享的导航协助方法及终端。
背景技术
全球定位系统(Global Positioning System,简称GPS)主要目的是为陆海空三大领域提供实时、全天候和全球性的导航服务,并用于情报收集、核爆监测和应急通讯等一些军事目的,经过20余年的研究实验,耗资300亿美元,到1994年,全球覆盖率高达98%的24颗GPS卫星星座己布设完成。现如今,GPS可以通过终端定位系统为用户提供车辆定位、防盗、反劫、行驶路线监控及呼叫指挥等功能。终端定位系统,是通过特定的定位技术来获取移动手机或终端寻路者的位置信息(经纬度坐标),在电子地图上标出被定位对象的位置的技术或服务。
求助者在想要达到一个目标点时,通常通过语言向协助者描述自己当前所处的场景,并希望协助者为自己指路。但是求助者通过语言无法有效且准确的向协助者描述自己当前所处的场景,从而导致协助者无法给出用于帮助求助者达到目标点的提示信息,或者导致协助者给出错误的提示信息。
综上,亟需一种基于场景共享的导航协助方法及终端,用于通过简单的方式使求助者更加准确的描述自己所处的场景,从而使协助者给出更加准确的用于帮助求助者达到目标点的提示信息。
发明内容
本发明实施例提供一种基于场景共享的导航协助方法及终端,用于通过简单的方式使求助者更加准确的描述自己所处的场景,从而使协助者给出更加准确的用于帮助求助者达到目标点的提示信息。
第一方面,本发明实施例提供一种基于场景共享的导航协助方法,包括:第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端;第一终端接收第二终端发送的提示信息;第一终端将提示信息显示在场景图像界面上;提示信息用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径,提示信息是所述第二终端根据共享的场景图像界面和目标点确定的。由于第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端,因此第一终端的用户可以通过场景图像界面通过更加准确的描述自己所处的场景,进而第二终端根据共享的场景图像界面,以及接收到的求助信息,可更加准确的确定出提示信息;进一步由于第一终端将提示信息显示在场景图像界面上,进而可使第一终端的用户更加简单且准确的确定出提示信息的含义,进而更加快速且便捷的通过提示信息寻找到目标点。
可选地,第一终端接收第二终端发送的提示信息之前,还包括:第一终端向第二终端发送求助信息,求助信息包括第一终端待寻找的目标点的信息。具体来说,第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端之后,第一终端向第二终端发送求助信息。或者第一终端向第二终端发送求助信息,第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端。
另一种实施方式中,第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端,第二终端依据自身需要向第一终端下发任务要求,即第二终端基于共享的第一终端当前所处的场景的场景图像界面,第二终端生成提示信息,并向第一终端发送提示信息。
可选地,场景图像界面包括:第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面;和/或包含第一终端当前 所处场景的位置的GPS地图界面。
可选地,提示信息包括下述信息中的至少一种:用于提示目标点的位置或者提示到达目标点的具体路径的标记信息;用于提示目标点的位置或者提示到达目标点的具体路径的文本信息;用于提示目标点的位置或者提示到达目标点的具体路径的音频信息。
可选地,在提示信息为用于提示目标点的位置或者提示到达目标点的具体路径的标记信息时,方法还包括:第一终端接收第二终端发送的标记信息在场景图像界面中的位置信息;第一终端将提示信息显示在场景图像界面上,包括:第一终端根据接收到的位置信息,在第一终端显示的场景图像界面的对应位置处显示标记信息。如此,一方面第二终端可以更加方便的在场景图像界面上添加提示信息,另一方面,第一终端的用户也可更加简单的明白第二终端所添加的提示信息的含义。
可选地,第一终端根据接收到的位置信息,在第一终端显示的场景图像界面的对应位置处显示标记信息之后,还包括:第一终端获取第一终端所连接的摄像设备进行移动的第一移动数据;第一终端将第一移动数据转换为标记信息进行移动的第二移动数据;第一终端根据转换后的第二移动数据对场景图像界面上显示的标记信息进行移动,以使移动后的标记信息与移动后的摄像设备所拍摄的场景图像界面匹配。如此,则可保证提示信息的准确性,从而避免提示信息随着自己的移动而变得不准确。
可选地,第一终端在第一终端显示的场景图像界面的对应位置处显示标记信息,包括:第一终端在与场景图像界面的显示平面平行的图层上的对应位置处显示标记信息。
可选地,第一终端获取第一终端所连接的摄像设备进行移动的第一移动数据,包括:第一终端通过加速度传感器和陀螺仪传感器,获取第一终端所连接的摄像设备进行移动的第一移动数据。如此,可通过简单的方法实现提示信息随着第一终端的移动而移动的目的。
可选地,标记信息为以下标记中的任一项或任几项的组合:曲线标记信 息、直线标记信息、带箭头的线标记信息、封闭图形标记信息。
可选地,场景图像界面包括第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,和包含第一终端当前所处场景的位置的GPS地图界面时,第一终端显示场景图像界面的显示界面上包括第一区域和第二区域;其中,第一区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,第二区域用于显示包含第一终端当前所处场景的位置的GPS地图界面或者不显示内容;或者第一区域用于显示包括第一终端当前所处场景的位置的GPS地图界面,第二区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面或者不显示内容。
可选地,还包括:第一终端在显示的场景图像界面被触摸时,对第一区域与第二区域所显示的内容进行切换。
可选地,第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端之前,还包括:第一终端向第二终端发送求助请求;第一终端接收第二终端返回的接受求助响应;其中,接受求助响应用于使第一终端与第二终端之间建立界面共享连接。
可选地,第一终端将提示信息显示在场景图像界面上之后,还包括:第一终端接收第二终端发送的更新后的提示信息;第一终端使用更新后的提示信息更新显示在场景图像界面上的提示信息;其中,更新后的提示信息为对显示在场景图像界面上的提示信息进行修改之后得到的。如此,第一终端可以持续受到提示信息,举个例子,第一终端当前路口处,第二终端发送提示信息让第一终端向左拐,在第一终端向左拐之后,又遇到一个路口,此时第二终端再向第一终端发送提示信息,向右走,此时第一终端再向右走,可见本发明实施例中第二终端可以多次的向第一终端发送提示信息,且也可实时更新提示信息,从而可为第一终端提供更加准确的提示信息。
第二方面,本发明实施例提供一种基于场景共享的导航协助方法,包括:第二终端接收第一终端共享的第一终端当前所处的场景的场景图像界面;第 二终端根据共享的场景图像界面,确定出用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的提示信息;第二终端向第一终端发送提示信息,以使第一终端将提示信息显示在场景图像界面上。由于第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端,因此第一终端的用户可以通过场景图像界面通过更加准确的描述自己所处的场景,进而第二终端根据共享的场景图像界面,以及接收到的求助信息,可更加准确的确定出提示信息;进一步由于第一终端将提示信息显示在场景图像界面上,进而可使第一终端的用户更加简单且准确的确定出提示信息的含义,进而更加快速且便捷的通过提示信息寻找到目标点。
可选地,第二终端根据共享的场景图像界面,确定出用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的提示信息之前,还包括:第二终端接收第一终端发送的求助信息,求助信息包括第一终端待寻找的目标点的信息;第二终端根据共享的场景图像界面,确定出用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的提示信息,具体包括:第二终端根据共享的场景图像界面以及求助信息中的目标点的信息,确定出用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的提示信息。
具体来说,第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端之后,第一终端向第二终端发送求助信息。或者第一终端向第二终端发送求助信息,第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端。另一种实施方式中,第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端,第二终端依据自身需要向第一终端下发任务要求,即第二终端基于共享的第一终端当前所处的场景的场景图像界面,第二终端生成提示信息,并向第一终端发送提示信息。
可选地,场景图像界面包括:第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面;和/或包含第一终端当前所处场景的位置的GPS地图界面。
可选地,提示信息包括下述信息中的至少一种:用于提示目标点的位置或者提示到达目标点的具体路径的标记信息;用于提示目标点的位置或者提示到达目标点的具体路径的文本信息;用于提示目标点的位置或者提示到达目标点的具体路径的音频信息。
可选地,在提示信息为用于提示目标点的位置或者提示到达目标点的具体路径的标记信息时,方法还包括:第二终端向第一终端发送标记信息在场景图像界面中的位置信息;其中,位置信息用于使第一终端根据接收到的位置信息在第一终端显示的场景图像界面的对应位置处显示标记信息。如此,一方面第二终端可以更加方便的在场景图像界面上添加提示信息,另一方面,第一终端的用户也可更加简单的明白第二终端所添加的提示信息的含义。
可选地,场景图像界面为第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的视频界面;第二终端根据共享的场景图像界面,确定出用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的提示信息,包括:第二终端在接收到的第一操作指令时,将第二终端显示的视频界面锁定为静态图片;第二终端在静态图片上显示接收到的用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的触摸轨迹;第二终端将触摸轨迹设置为标记信息,和/或根据触摸轨迹生成具体路径的文本信息,并将锁定的静态图片恢复为第一终端共享的视频界面。
可选地,第一操作指令为第二终端显示的视频界面被双击或被单击。
可选地,第二终端向第一终端发送标记信息在场景图像界面中的位置信息之后,还包括:第二终端获取第一终端所连接的摄像设备进行移动的第一移动数据;第二终端将第一移动数据转换为标记信息进行移动的第二移动数据;第二终端根据转换后的第二移动数据对场景图像界面上显示的标记信息进行移动,以使移动后的标记信息与移动后的摄像设备所拍摄的场景图像界面匹配。如此,则可保证提示信息的准确性,从而避免提示信息随着自己的移动而变得不准确。
可选地,第二终端确定出用于提示第一终端待寻找的目标点的位置或者 提示到达所述目标点的具体路径的提示信息之后,还包括:第二终端在与场景图像界面的显示平面平行的图层上的显示标记信息。可选地,第一移动数据为第一终端通过第一终端的加速度传感器和陀螺仪传感器获取的数据。如此,可通过简单的方法实现提示信息随着第一终端的移动而移动的目的。
可选地,标记信息为以下标记中的任一项或任几项的组合:曲线标记信息、直线标记信息、带箭头的线标记信息、封闭图形标记信息。可选地,场景图像界面包括第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,和包含第一终端当前所处场景的位置的GPS地图界面时,第二终端显示场景图像界面的显示界面上包括第一区域和第二区域;其中,第一区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,第二区域用于显示包含第一终端当前所处场景的位置的GPS地图界面或者不显示内容;或者第一区域用于显示包括第一终端当前所处场景的位置的GPS地图界面,第二区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面或者不显示内容。
可选地,还包括:第二终端在显示的场景图像界面被触摸时,对第一区域与第二区域所显示的内容进行切换。
可选地,第二终端接收第一终端共享的第一终端当前所处的场景的场景图像界面之前,还包括:第二终端接收第一终端发送的求助请求;第二终端向第一终端发送接受求助响应;其中,接受求助响应用于使第一终端与第二终端之间建立界面共享连接。
可选地,第二终端向第一终端发送提示信息之后,还包括:第二终端对提示信息进行修改,得到更新后的提示信息;第二终端向第一终端发送更新后的提示信息,以使第一终端使用更新后的提示信息更新显示在场景图像界面上的提示信息。如此,第一终端可以持续受到提示信息,举个例子,第一终端当前路口处,第二终端发送提示信息让第一终端向左拐,在第一终端向左拐之后,又遇到一个路口,此时第二终端再向第一终端发送提示信息,向 右走,此时第一终端再向右走,可见本发明实施例中第二终端可以多次的向第一终端发送提示信息,且也可实时更新提示信息,从而可为第一终端提供更加准确的提示信息。
第三方面,本发明实施例提供一种基于场景共享的导航协助终端,用于实现上述第一方面中的任意一种方法,包括相应的功能模块,分别用于实现以上方法中的步骤。
第四方面,本发明实施例提供一种用于时域资源单元集合结构的终端用于实现上述第二方面中的任意一种的方法,包括相应的功能模块,分别用于实现以上方法中的步骤。
第五方面,本发明实施例提供一种基于场景共享的导航协助终端,所述终端包括发送器、接收器、存储器和处理器;所述存储器用于存储指令,所述处理器用于根据执行所述存储器存储的指令,并控制所述发送器和所述接收进行信号接收和信号发送,当所述处理器执行所述存储器存储的指令时,所述终端用于执行上述第一方面中的任意一种方法。
第六方面,本发明实施例提供一种基于场景共享的导航协助终端,所述终端包括发送器、接收器、存储器和处理器;所述存储器用于存储指令,所述处理器用于根据执行所述存储器存储的指令,并控制所述发送器和所述接收进行信号接收和信号发送,当所述处理器执行所述存储器存储的指令时,所述终端用于执行上述第二方面中的任意一种方法。
本发明实施例中,第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端;第一终端接收第二终端发送的提示信息;第一终端将提示信息显示在场景图像界面上;提示信息用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径,提示信息是所述第二终端根据共享的场景图像界面和目标点确定的。由于第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端,因此第一终端的用户可以通过场景图像界面通过更加准确的描述自己所处的场景,进而第二终端根据共享的场景图像界面,以及接收到的求助信息,可更加准确的确定出提示信息;进一步由 于第一终端将提示信息显示在场景图像界面上,进而可使第一终端的用户更加简单且准确的确定出提示信息的含义,进而更加快速且便捷的通过提示信息寻找到目标点。
附图说明
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简要介绍。
图1为本发明实施例适用的一种系统架构示意图;
图2为本发明实施例提供的一种基于场景共享的导航协助方法流程示意图;
图2a为本发明实施例提供的一种场景图像界面的示意图;
图2b为本发明实施例提供的另一种场景图像界面的示意图;
图2c为本发明实施例提供的另一种场景图像界面的示意图;
图2d为本发明实施例提供的另一种场景图像界面的示意图;
图2e为本发明实施例提供的另一种场景图像界面的示意图;
图2f为本发明实施例提供的另一种场景图像界面的示意图;
图2g为本发明实施例中在场景图像界面的显示平面平行的图层上的显示标记信息的示意图;
图2h为本发明实施例中第一终端进行移动后显示界面的示意图;
图2i为本发明实施例中第一终端显示场景图像界面的显示界面的示意图;
图2j为本发明实施例中第二终端显示场景图像界面的显示界面的示意图;
图2k为本发明实施例中第二终端在图2j上生成提示信息之后的显示界面的示意图;
图2l为本发明实施例中第一终端接收提示信息之后的显示界面示意图;
图2m为本发明实施例中第一终端显示场景图像界面的显示界面的示意图;
图2n为本发明实施例中第二终端显示场景图像界面的显示界面的示意图;
图2o为本发明实施例中第二终端在图2n上生成提示信息之后的显示界面的示意图;
图2p为本发明实施例中第一终端接收提示信息之后的显示界面示意图;
图2q为本发明实施例提供的一种显示界面的示意图;
图2r为本发明实施例提供的另一种显示界面的示意图;
图2s为本发明实施例提供的一种显示界面的示意图;
图3为本发明实施例提供的一种基于场景共享的导航协助终端的结构示意图;
图4为本发明实施例提供的另一种基于场景共享的导航协助终端的结构示意图;
图5为本发明实施例提供的另一种基于场景共享的导航协助终端的结构示意图;
图6为本发明实施例提供的另一种基于场景共享的导航协助终端的结构示意图。
具体实施方式
为了使本发明的目的、技术方案及有益效果更加清楚明白,以下结合附图及实施例,对本发明进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本发明,并不用于限定本发明。
图1示例性示出了本发明实施例适用的一种系统架构示意图,如图1所示,本发明实施例适用的系统架构包括第一终端101和第二终端102。第一终端101可以与第二终端102之间建立连接,一方面第一终端101可以将场景图像界面共享给第二终端102,第二终端102可以直接在共享的场景图像界面上生成提示信息,并将提示信息发送给第一终端101。可选地,第一终端101可以与第二终端102互相传输即时的文本信息和/或音频信息。
本发明实施例适用的场景包括多种,比如目标点为一个具体的地点,也可为一个目标物。比如商场A,此时第一终端将第一终端当前所处的场景的 场景图像界面共享给第二终端,第二终端即可以清楚准确的确定出第一终端当前所处的位置,进而为第一终端指示出准确的提示信息,进而使第一终端达到商场A。此时提示信息可提示到达目标点的具体路径。提示信息可以为标记信息,比如画一条带箭头的线,用于指出到达目标点的具体路径,又或者在到达目标点的具体路径上的某些物体,比如标志性建筑物上画个圈,用于提示到达目标点的具体路径。举个例子,场景图像界面上是一个十字路口,其中一个路径上有一个标志性建筑物A,此时,第二终端可以在标志性建筑物A上画个圈,在标志性建筑物A上画的圈即为标记信息,该标记信息可以告诉第一终端应该走标志性建筑物A所在的那条路径。
另一种应用场景下,比如目标点为一个维修员工的待维修设备,第一终端到达维修设备所处的地点时,发现有很多此类设备,此时,第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端,第二终端可以基于共享的场景图像界面给为第一终端准确指示出哪一台设备是待维修设备,也就是说,第二终端向第一终端发送的提示信息可以提示目标点的位置,提示信息可以为提示目标点的位置的标记信息,比如在目标点上画个圈;或者提示信息为文本信息,比如这样一段文字“场景图像界面上的第二个设备即为目标点”;或者提示信息为音频信息,比如这样一段音频“场景图像界面上的第二个设备即为目标点”。
再比如,另一种应用场景中,第二终端向第一终端发布任务,比如第一终端到达一个可以攻击目标点的位置之后,第二终端通过本发明实施例所提供的方法向第一终端发送提示信息,提示信息可以清楚的为第一终端指示出哪一个目标点才是第一终端的待攻击的目标点。
本发明实施例中的第一终端和第二终端中的“第一”和“第二”仅仅用于区别不同的终端,并不造成限制。本发明实施例中的第一终端可为终端中的任一个终端,第二终端可为终端中的任一个终端。本发明所涉及到的终端终端为向用户提供语音和/或数据连通性的设备(device),包括无线终端或有线终端。无线终端可以是具有无线连接功能的手持式设备、或连接到无线调制 解调器的其他处理设备,经无线接入网与一个或多个核心网进行通信的移动终端。例如,无线终端可以是移动电话(或称为“蜂窝”电话)和具有移动终端的计算机。又如,无线终端也可以是便携式、袖珍式、手持式、计算机内置的或者车载的移动装置。再如,无线终端可以为移动站(mobile station)、接入点(access point)、或用户设备(user equipment,简称UE)的一部分。
图2示例性示出了本发明实施例提供的一种基于场景共享的导航协助方法流程示意图。
基于图1所示的系统架构,如图2所示,本发明实施例提供的一种基于场景共享的导航协助方法,包括以下步骤:
步骤201,第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端;
步骤202,第一终端接收第二终端发送的提示信息;
步骤203,第一终端将提示信息显示在场景图像界面上;提示信息用于提示第一终端待寻找的目标点的位置或者提示到达目标点的具体路径,提示信息是所述第二终端根据共享的场景图像界面和目标点确定的。
具体来说,提示信息可提示到达目标点的具体路径。提示信息可以为标记信息,比如画一条带箭头的线,用于指出到达目标点的具体路径,又或者在到达目标点的具体路径上的某些物体,比如标志性建筑物上画个圈,用于提示到达目标点的具体路径。提示信息也可以提示目标点的位置,提示信息可以为提示目标点的位置的标记信息,比如在目标点上画个圈;或者提示信息为文本信息,比如这样一段文字“场景图像界面上的第二个设备即为目标点”;或者提示信息为音频信息,比如这样一段音频“场景图像界面上的第二个设备即为目标点”。
图2a示例性示出了本发明实施例提供的一种场景图像界面的示意图；图2b示例性示出了本发明实施例提供的另一种场景图像界面的示意图。如图2a所示，提示信息为指出到达目标点的具体路径的信息，比如标记信息2101和文字信息2102。如图2b所示，提示信息为提示到达目标点的具体路径，比如在图2b中，在建筑物2103上画个圈2104，也可表示出到达目标点的具体路径，即图2b中在建筑物2103上画个圈2104即表示出到达目标点的具体路径为建筑物2103所属的那条路。
可选地，在上述步骤202之前，第一终端向第二终端发送求助信息，求助信息包括第一终端待寻找的目标点的信息。具体来说，目标点的信息可以为目标点的标识，比如商厦A等等。在时序上，可以是第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端之后，第一终端再向第二终端发送求助信息；或者第一终端先向第二终端发送求助信息，再将第一终端当前所处的场景的场景图像界面共享给第二终端。
另一种实施方式中，第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端，第二终端依据自身需要向第一终端下发任务要求，即第二终端基于共享的第一终端当前所处的场景的场景图像界面生成提示信息，并向第一终端发送提示信息。
相应地,本发明实施例中,第二终端接收第一终端共享的第一终端当前所处的场景的场景图像界面;第二终端根据共享的场景图像界面,确定出用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的提示信息;第二终端向第一终端发送提示信息,以使第一终端将提示信息显示在场景图像界面上。
可选地,第二终端根据共享的场景图像界面,确定出用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的提示信息之前,第二终端接收第一终端发送的求助信息,求助信息包括第一终端待寻找的目标点的信息;第二终端根据共享的场景图像界面以及求助信息中的目标点的信息,确定出用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的提示信息。
另一种实现方式为，第二终端基于共享的场景图像界面自行确定出目标点，并向第一终端发送提示信息，比如第二终端根据当前的实际情况向第一终端下发维修目标点的任务。
本发明实施例中，由于第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端，因此第一终端的用户可以通过场景图像界面更加准确的描述自己所处的场景，进而第二终端根据共享的场景图像界面，以及接收到的求助信息，可更加准确的确定出提示信息；进一步由于第一终端将提示信息显示在场景图像界面上，进而可使第一终端的用户更加简单且准确的确定出提示信息的含义，进而更加快速且便捷的通过提示信息寻找到目标点。
可选地,本发明实施例中第二终端根据共享的场景图像界面,确定出用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的提示信息,有多种实现方式,比如第二终端根据共享的场景图像界面去数据库中匹配出一个预设的图像界面,并根据匹配出的预设的图像界面确定出更加准确的第一终端当前所处的位置,第二终端可从数据库中进一步匹配出目标点和匹配出一个预设的图像界面的地点之间的提示信息,并将提示信息发送给第一终端。
另一种可选的实施方式中，第二终端的用户根据共享的场景图像界面，确定了第一终端的用户当前所处的位置，之后根据目标点，确定出用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的提示信息。如此，可依赖于人力的力量，进一步提高提示信息的准确性，而并非现有技术中仅仅依靠本地软件系统的力量。举个例子，现有技术中若目标点没有在本地软件系统中存储，则仅仅依靠本地软件系统的力量时无法为第一终端生成提示信息。又或者目标点为商场内部的某一个商铺时，本地软件系统通常无法体现商场内部的细节，因此无法为用户指示出如何到达商场内部的某一个商铺的提示信息。再比如，当第一终端到达目标点需要经过一个高架桥时，现有技术中本地软件系统中通常也不能清楚的指示应该经过高架桥的第几层。而本发明实施例所提供的方法可以在本地软件系统无法支持的时候，依赖人力为用户提供更加准确的提示信息。
可选地,本发明实施例中,第二终端可以实时的对已经生成的提示信息进行修改,以便更新提示信息,也可以新增加一些提示信息。具体来说,第一终端接收第二终端发送的更新后的提示信息;第一终端使用更新后的提示信息更新显示在场景图像界面上的提示信息;其中,更新后的提示信息为对显示在场景图像界面上的提示信息进行修改之后得到的。
也就是说,第二终端向第一终端发送提示信息之后,第二终端对提示信息进行修改,得到更新后的提示信息;第二终端向第一终端发送更新后的提示信息,以使第一终端使用更新后的提示信息更新显示在场景图像界面上的提示信息。
如此,则可实现提示信息的实时更新,进而使第一终端更加准确且及时的寻找到待寻找的目标点。
可选地,上述步骤201之前,第一终端向第二终端发送求助请求;第一终端接收第二终端返回的接受求助响应。也就是说,第二终端接收第一终端发送的求助请求;第二终端向第一终端发送接受求助响应。其中,接受求助响应用于使第一终端与第二终端之间建立界面共享连接。
举个例子，第一终端向第二终端发起求助请求，第二终端的显示界面上显示第一终端发起的求助请求，且第二终端上还显示有接受和拒绝两个按键，第二终端的用户如果选择了接受按键，则第二终端向第一终端发送接受求助响应，此时第一终端与第二终端建立连接。
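下面给出一段最小的Kotlin示意代码，用于说明上述求助请求/接受求助响应这一握手过程的一种可能建模方式；其中的消息类型、字段与类名均为本文为说明目的而假设，并非本发明实施例限定的实际协议实现。

```kotlin
// 示意性草图：求助请求 / 接受求助响应的消息定义与处理流程（类型名均为假设）。
sealed class SignalingMessage {
    data class HelpRequest(val fromTerminalId: String, val targetHint: String?) : SignalingMessage()
    data class HelpAccepted(val fromTerminalId: String) : SignalingMessage()
    data class HelpRejected(val fromTerminalId: String) : SignalingMessage()
}

class ShareSession {
    var connected = false
        private set

    /** 第二终端的用户点击“接受”后回复 HelpAccepted，双方随即建立界面共享连接 */
    fun onMessage(msg: SignalingMessage) {
        when (msg) {
            is SignalingMessage.HelpAccepted -> connected = true
            is SignalingMessage.HelpRejected -> connected = false
            is SignalingMessage.HelpRequest -> { /* 在显示界面上弹出“接受 / 拒绝”两个按键 */ }
        }
    }
}
```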
可选地,第一终端与第二终端建立连接之后,第一终端可以向第二终端发送第一终端当前所处的场景的场景图像界面,第一终端也可向第二终端发送一些文本信息,第一终端也可向第二终端发送一些音频信息,比如第一终端向第二终端发送一段音频信息为“我要去商场A,帮我指一下路吧?”,或者发一个文本信息“左拐还是右拐?”、“左边还是右边?”。
可选地,场景图像界面包括:第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面;和/或包含第一终端当前所处场景的位置的GPS地图界面。
举例来说,第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄,可以进行图像拍摄,也可以进行视频拍摄,如果是图像拍摄,可以将拍摄的图像界面共享给第二终端,如果是视频拍摄,可以将拍摄的视频界面共享给第二终端,比如第一终端与第二终端之间发起视频通话,第一终端和第二终端的界面上均显示第一终端当前所处的场景的场景图像界面。图2c示例性示出了本发明实施例提供的一种场景图像界面的示意图,第一终端将第一终端摄像头当前拍摄到的图像界面或视频界面显示在第一终端的显示界面上,第一终端的显示界面所显示的内容如图2c所示,第一终端将该第一终端摄像头当前拍摄到的图像界面或视频界面共享给第二终端,第二终端的显示界面所显示的内容与图2c所示的内容一致。
再举个例子，比如用户将自己当前的位置在GPS地图上标识出来，当前第一终端的显示界面上显示的是包含第一终端当前所处场景的位置的GPS地图界面，此时由于第一终端将场景图像界面共享给第二终端，因此第二终端当前的显示界面也是包含第一终端当前所处场景的位置的GPS地图界面。图2d示例性示出了本发明实施例提供的另一种场景图像界面的示意图，第一终端将包含第一终端当前所处场景的位置的GPS地图界面显示在第一终端的显示界面上，第一终端的显示界面所显示的内容如图2d所示，第一终端将该GPS地图界面共享给第二终端，第二终端的显示界面所显示的内容与图2d所示的内容一致。
可选地,提示信息包括下述信息中的至少一种:用于提示目标点的位置或者提示到达目标点的具体路径的标记信息;用于提示目标点的位置或者提示到达目标点的具体路径的文本信息;用于提示目标点的位置或者提示到达目标点的具体路径的音频信息。
可选地,标记信息可以用于提示目标点的位置,比如在目标点上画个圈,或者提示到达目标点的具体路径,比如在到达目标点的具体路径上画个带箭头的线,或者在到达目标点的具体路径上的标志性建筑物上画个圈。类似地,文本信息或音频信息也可以用于提示目标点的位置,或者提示到达目标点的 具体路径。
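为便于理解上述三类提示信息在实现时一种可能的组织方式，下面给出一段最小的Kotlin示意代码；其中的类型名（PromptInfo、Mark、Text、Audio等）均为本文为说明目的而假设的名称，并非本发明实施例限定的实际数据结构。

```kotlin
// 示意性草图：提示信息的三种形式（标记 / 文本 / 音频），类型名均为假设。
sealed class PromptInfo {
    /** 标记信息：以归一化坐标(0~1)记录的触摸轨迹点序列 */
    data class Mark(val points: List<Pair<Float, Float>>) : PromptInfo()

    /** 文本信息，例如 "向左走" */
    data class Text(val content: String) : PromptInfo()

    /** 音频信息，例如一段语音 "场景图像界面上的第二个设备即为目标点" */
    data class Audio(val data: ByteArray) : PromptInfo()
}

fun describe(info: PromptInfo): String = when (info) {
    is PromptInfo.Mark -> "标记信息，共 ${info.points.size} 个轨迹点"
    is PromptInfo.Text -> "文本信息：${info.content}"
    is PromptInfo.Audio -> "音频信息，${info.data.size} 字节"
}
```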
图2e示例性示出了本发明实施例提供的另一种场景图像界面的示意图，场景图像界面为第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面，第二终端在第一终端共享的场景图像界面2301上添加提示信息，第二终端在第二终端的显示界面上添加提示信息之后的场景图像界面与图2e所示的内容一致，如图2e所示，提示信息包括用于提示到达目标点的具体路径的标记信息2302和用于提示到达目标点的具体路径的文本信息2303，以及用于提示到达目标点的具体路径的文本信息2304，本发明实施例中标记信息为在共享的场景图像界面上通过触摸操作产生的标记信息，比如标记信息2302为一个带箭头的曲线。用于提示到达目标点的具体路径的文本信息2303可为一个在场景图像界面上通过触摸写出的“左”字。用于提示到达目标点的具体路径的文本信息2304在图2e中为“向左走”。该“向左走”也可为音频信息。也就是说，本发明实施例中用于提示到达目标点的具体路径的文本信息2303可为根据触摸轨迹生成的文本信息，也可为一个常规的文本信息，比如在即时消息框中写入的“向左走”。可选地，文本信息2304位于一个即时消息对话框中，该对话框可用于传输第一终端与第二终端之间进行沟通的文本信息和音频信息。
图2f示例性示出了本发明实施例提供的另一种场景图像界面的示意图，如图2f所示，场景图像界面为包含第一终端当前所处场景的位置的GPS地图界面。第二终端在第一终端共享的场景图像界面2401上添加提示信息，第二终端在第二终端的显示界面上添加提示信息之后的场景图像界面与图2f所示的内容一致，如图2f所示，提示信息包括用于提示到达目标点的具体路径的标记信息2402，本发明实施例中标记信息为在共享的场景图像界面上通过触摸操作产生的标记信息，比如标记信息2402为一个带箭头的曲线。用于提示到达目标点的具体路径的文本信息2403在图2f中为“向左走”。该“向左走”也可为音频信息。可选地，用于提示到达目标点的具体路径的文本信息2403位于一个即时消息对话框中，该对话框可用于传输第一终端与第二终端之间进行沟通的文本信息和音频信息。
可选地,本发明实施例中,当场景图像界面为第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的视频界面时,一种实现方式中,第二终端可在动态的视频界面上添加提示信息。
另一种可选的实施方式为，当场景图像界面为第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的视频界面时，第二终端在接收到第一操作指令时，将第二终端显示的视频界面锁定为静态图片；第二终端在静态图片上显示接收到的用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的触摸轨迹；第二终端将触摸轨迹设置为标记信息，和/或根据触摸轨迹生成具体路径的文本信息，并将锁定的静态图片恢复为第一终端共享的视频界面。可选地，第一操作指令为第二终端显示的视频界面被双击或被单击。第二终端通过视频界面被双击或被单击，在静态图片与视频界面之间进行切换，第二终端在将视频界面锁定为静态图片时，可以更加清晰方便的在第二终端的显示界面上生成提示信息。
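下面用一段最小的Kotlin示意代码说明上述“双击锁定为静态图片、在静态图片上收集触摸轨迹、再恢复为视频界面”的流程；其中的类名、回调与数据结构均为本文假设，且未绑定任何具体平台API。

```kotlin
// 示意性草图：第二终端通过双击在"视频界面"与"静态图片"之间切换，
// 并在锁定为静态图片期间收集触摸轨迹作为标记信息（类名与回调均为假设）。
class RemoteAnnotationController(
    private val onMarkReady: (List<Pair<Float, Float>>) -> Unit
) {
    private var frozen = false                      // 是否已锁定为静态图片
    private val trajectory = mutableListOf<Pair<Float, Float>>()

    /** 第一操作指令：双击视频界面 */
    fun onDoubleTap() {
        if (!frozen) {
            frozen = true                           // 锁定为静态图片，便于标注
        } else {
            frozen = false                          // 恢复为第一终端共享的视频界面
            if (trajectory.isNotEmpty()) {
                onMarkReady(trajectory.toList())    // 将触摸轨迹设置为标记信息并发送
                trajectory.clear()
            }
        }
    }

    /** 锁定期间记录触摸轨迹（归一化坐标） */
    fun onTouchMove(x: Float, y: Float) {
        if (frozen) trajectory.add(x to y)
    }
}
```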
可选地，本发明实施例中的提示信息为标记信息时，标记信息为以下标记中的任一项或任几项的组合：曲线标记信息、直线标记信息、带箭头的线标记信息、封闭图形标记信息。
举个例子,在场景图像界面包括第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面时,在图像界面或视频界面上添加的标记信息为一个圈时,即封闭图形标记信息,则圈内的区域可表示目标点,曲线标记信息、直线标记信息和带箭头的线标记信息均可具体表示路径。在显示界面上某个位置单击也可表示该位置为目标点的位置或者为目标点所在位置的方向。在显示界面上某个位置双击可以使视频界面锁定为静态图片,或者从锁定的静态图片中恢复为视频界面。
举个例子,在场景图像界面包括包含第一终端当前所处场景的位置的GPS地图界面时,在GPS地图界面上添加的标记信息为一个圈时,即封闭图形标记信息,则圈内的区域可表示目标点,曲线标记信息、直线标记信息和 带箭头的线标记信息均可具体表示路径。在显示界面上某个位置单击可表示该位置为目标点的位置。在GPS地图界面上双击可计算从第一终端在GPS地图界面上当前所处场景的位置处,至目标点处之间的路径长度。在GPS地图界面上通过双指在界面上的开合触摸手势,可实现地图的缩放和移动。
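下面给出一段最小的Kotlin示意代码，说明上述双指开合手势实现地图缩放时常见的“按两指间距离之比更新缩放系数”的做法；函数名以及缩放上下限均为本文假设的示例值，并非本发明实施例限定的实现。

```kotlin
import kotlin.math.hypot

// 示意性草图：GPS地图界面上的双指开合缩放（两指张开 -> 放大，收拢 -> 缩小）。
fun pinchScale(
    oldScale: Float,
    startP1: Pair<Float, Float>, startP2: Pair<Float, Float>,   // 手势开始时两指位置
    curP1: Pair<Float, Float>, curP2: Pair<Float, Float>        // 当前两指位置
): Float {
    val startDist = hypot(startP1.first - startP2.first, startP1.second - startP2.second)
    val curDist = hypot(curP1.first - curP2.first, curP1.second - curP2.second)
    if (startDist == 0f) return oldScale
    // 按两指间距离之比更新缩放系数，并限制在假设的上下限内
    return (oldScale * curDist / startDist).coerceIn(0.5f, 20f)
}
```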
可选地,在提示信息为用于提示到达目标点的具体路径的标记信息时,第一终端接收第二终端发送的标记信息在场景图像界面中的位置信息;第一终端根据接收到的位置信息,在第一终端显示的场景图像界面的对应位置处显示标记信息。也就是说,第二终端向第一终端发送标记信息在场景图像界面中的位置信息;其中,位置信息用于使第一终端根据接收到的位置信息在第一终端显示的场景图像界面的对应位置处显示标记信息。
也就是说，在提示信息为用于提示到达目标点的具体路径的标记信息时，标记信息要随着第一终端的位置的移动而移动，举个例子，比如第二终端在第二终端的显示界面上生成了标记信息，比如图2e所示的标记信息2302，可看出，标记信息2302为一个带箭头的曲线，且标记信息2302位于最左边的道路上，此时，第一终端接收到第二终端发送的提示信息，第一终端还需接收第二终端发送的标记信息在场景图像界面中的位置信息，之后第一终端根据接收到的位置信息，在第一终端显示的场景图像界面的对应位置处显示标记信息，也就是说，第一终端最终需要在第一终端的显示界面上将标记信息2302也显示在最左边的道路上，如此，第一终端的用户可以很直观的明白自己应该怎么行进。
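下面给出一段最小的Kotlin示意代码，说明一种可能的做法：第二终端以归一化坐标描述标记信息在场景图像界面中的位置，第一终端按自身屏幕尺寸换算后在对应位置显示标记；其中的数据结构与字段名均为本文假设。

```kotlin
// 示意性草图：标记信息位置以相对于共享画面宽高的比例(0~1)传输（字段名为假设）。
data class MarkPosition(val normX: Float, val normY: Float)

fun toScreenPixels(pos: MarkPosition, screenWidth: Int, screenHeight: Int): Pair<Int, Int> {
    val px = (pos.normX * screenWidth).toInt().coerceIn(0, screenWidth - 1)
    val py = (pos.normY * screenHeight).toInt().coerceIn(0, screenHeight - 1)
    return px to py
}

// 用法示例（数值为假设）：第二终端在画面左侧道路上画出箭头曲线，起点归一化坐标约为(0.18, 0.62)，
// 第一终端收到后按自己的屏幕尺寸换算成像素位置，并在该处绘制同样的标记。
```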
可选地，用于提示到达目标点的具体路径的文本信息2303为根据触摸轨迹生成的，第一终端也接收根据触摸轨迹生成的用于提示到达目标点的具体路径的文本信息在场景图像界面中的位置信息；第一终端根据接收到的位置信息，在第一终端显示的场景图像界面的对应位置处显示根据触摸轨迹生成的用于提示到达目标点的具体路径的文本信息。也就是说，第一终端也会将用于提示到达目标点的具体路径的文本信息2303即“左”字显示在第一终端的显示界面的最左边的道路上。
优选地,本发明实施例中,当第二终端在共享的场景图像界面上生成提示信息时,第二终端的显示界面与图2e所示的内容一致,此时,第一终端接收到提示信息之后,在第一终端的显示界面上显示的图像也与图2e所示的内容一致。
本发明实施例中,当第二终端生成提示信息并发送给第一终端之后,第一终端通常会根据该提示信息进行移动,可选地,提示信息也需要随着第一终端的移动进行移动。举个例子,比如共享的场景图像界面为第一终端所连接的摄像设备所拍摄的视频界面,提示信息为位于建筑物A上的一个圆圈,此时,当第一终端进行移动时,第一终端所连接的摄像设备所拍摄的视频界面也会随之改变,此时该建筑物A在摄像设备所拍摄的视频界面中也会随着移动,可选地,本发明实施例中位于建筑物A上的一个圆圈也会移动,并且该圆圈始终位于建筑物A上。
本发明实施例中,提示信息随着第一终端的移动而移动,具体有多种实现方式,比如图像目标跟踪算法,从图像识别的角度捕捉到位于建筑物A上的提示信息,之后在识别的图像中使提示信息始终位于建筑物A上。
另一种可选的实现方式中，第一终端获取第一终端所连接的摄像设备进行移动的第一移动数据；第一终端将第一移动数据转换为标记信息进行移动的第二移动数据；第一终端根据转换后的第二移动数据对场景图像界面上显示的标记信息进行移动，以使移动后的标记信息与移动后的摄像设备所拍摄的场景图像界面匹配。可选地，第一终端在与场景图像界面的显示平面平行的图层上的对应位置处显示标记信息。可选地，第一终端通过加速度传感器和陀螺仪传感器，获取第一终端所连接的摄像设备进行移动的第一移动数据。
也就是说，第二终端获取第一终端所连接的摄像设备进行移动的第一移动数据；第二终端将第一移动数据转换为标记信息进行移动的第二移动数据；第二终端根据转换后的第二移动数据对场景图像界面上显示的标记信息进行移动，以使移动后的标记信息与移动后的摄像设备所拍摄的场景图像界面匹配。可选地，第二终端在与场景图像界面的显示平面平行的图层上显示标记信息。可选地，第一移动数据为第一终端通过第一终端的加速度传感器和陀螺仪传感器获取的数据。
图2g示例性示出了本发明实施例中在与场景图像界面的显示平面平行的图层上显示标记信息的示意图，如图2g所示，运用开放图形库（Open Graphics Library，简称OpenGL）技术，在与场景图像界面2301平行的位置创建图层2501，在图层2501上显示标记信息2302。可选地，也在图层2501上显示根据触摸轨迹生成的文本信息2303。
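下面给出一段最小的Kotlin示意代码，用一个只含平移、不含旋转的“图层”来建模与显示平面平行的标注图层；实际绘制可如上文所述借助OpenGL完成，此处的类型与字段均为本文为说明目的而假设。

```kotlin
// 示意性草图：与场景图像界面显示平面平行的标注图层（类型名为假设）。
data class OverlayLayer(
    var offsetX: Float = 0f,   // 图层相对画面的水平平移
    var offsetY: Float = 0f,   // 图层相对画面的竖直平移
    val marks: MutableList<Pair<Float, Float>> = mutableListOf()
) {
    /** 图层整体平移时，图层上所有标记随之移动，但图层始终与显示平面平行 */
    fun translate(dx: Float, dy: Float) {
        offsetX += dx
        offsetY += dy
    }

    /** 标记在屏幕上的最终位置 = 标记在图层内的位置 + 图层平移量 */
    fun markScreenPositions(): List<Pair<Float, Float>> =
        marks.map { (x, y) -> (x + offsetX) to (y + offsetY) }
}
```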
可选地,第一终端通过加速度传感器和陀螺仪传感器,获取第一终端所连接的摄像设备进行移动的第一移动数据,第一终端将第一移动数据转换为标记信息进行移动的第二移动数据,具体来说,通过加速度传感器检测到第一终端沿着X轴进行移动时,标记信息在共享的场景图像界面上需要进行左右移动,通过加速度传感器检测到第一终端沿着Y轴进行移动时,标记信息在共享的场景图像界面上需要进行上下移动,通过加速度传感器检测到第一终端沿着Z轴进行移动时,标记信息在共享的场景图像界面上需要进行前后移动。通过陀螺仪传感器检测到第一终端沿着X轴进行旋转时,标记信息在共享的场景图像界面上需要沿着X轴进行旋转,通过陀螺仪传感器检测到第一终端沿着Y轴进行旋转时,标记信息在共享的场景图像界面上需要沿着Y轴进行旋转,通过陀螺仪传感器检测到第一终端沿着Z轴旋转时,标记信息在共享的场景图像界面上需要沿着Z轴进行旋转。
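下面给出一段最小的Kotlin示意代码，按照上述轴向对应关系，把第一移动数据（位移与旋转）映射为标记信息的第二移动数据；其中的数据结构、比例因子与正负号约定均为本文为说明目的而假设，并非本发明实施例限定的换算方式。

```kotlin
// 示意性草图：X轴位移->左右移动、Y轴位移->上下移动、Z轴位移->前后(此处近似为缩放)，
// 各轴旋转对应同轴旋转。数值比例与符号均为假设示例。
data class DeviceMotion(
    val dx: Float, val dy: Float, val dz: Float,          // 加速度计积分得到的位移
    val rotX: Float, val rotY: Float, val rotZ: Float     // 陀螺仪积分得到的旋转角（弧度）
)

data class MarkMotion(
    val shiftX: Float, val shiftY: Float, val scale: Float,
    val rotX: Float, val rotY: Float, val rotZ: Float
)

fun toMarkMotion(m: DeviceMotion, pixelsPerMeter: Float = 500f): MarkMotion = MarkMotion(
    shiftX = -m.dx * pixelsPerMeter,     // 设备右移 -> 画面内容及标记相对左移（符号为假设）
    shiftY = m.dy * pixelsPerMeter,      // 设备上下移动 -> 标记上下移动
    scale = 1f + m.dz * 0.1f,            // 设备前后移动 -> 标记近大远小（近似处理）
    rotX = -m.rotX, rotY = -m.rotY, rotZ = -m.rotZ   // 设备旋转 -> 标记反向旋转以保持贴合
)
```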
可选地,由于陀螺仪传感器和加速度传感器在计算位移和旋转角度时,都会产生累积误差,所以在计算第二移动数据时,需要先对陀螺仪传感器和加速度传感器收集到的第一移动数据进行低通滤波。
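下面给出一段最小的Kotlin示意代码，演示对传感器采样做一阶低通滤波的常见写法；类名与平滑系数alpha的取值均为本文假设的示例。

```kotlin
// 示意性草图：对加速度计 / 陀螺仪采集到的第一移动数据先做一阶低通滤波，抑制抖动与累积误差。
class LowPassFilter(private val alpha: Float = 0.2f) {
    private var last: FloatArray? = null

    fun filter(sample: FloatArray): FloatArray {
        val prev = last
        val out = if (prev == null) {
            sample.copyOf()
        } else {
            FloatArray(sample.size) { i -> prev[i] + alpha * (sample[i] - prev[i]) }
        }
        last = out
        return out
    }
}

// 用法：分别为加速度计与陀螺仪各创建一个滤波器，先 filter(原始采样)，
// 再积分得到位移 / 角度，最后换算为第二移动数据。
```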
图2h示例性示出了本发明实施例中第一终端进行移动后显示界面的示意图。如图2h所示，当第一终端沿着标记信息2302向前移动之后，在图2h中，第一终端的显示界面上标记信息2302已经向下移动，此时第一终端的显示界面和第二终端的显示界面所显示的内容均与图2h所显示的内容一致。如此，可以更加清楚准确的使第一终端的用户到达目标点。
基于上述内容,为了更清楚介绍本发明实施例,下面结合图2i、图2j、图2k和图2l进行介绍。图2i示例性示出了本发明实施例中第一终端显示场景图像界面的显示界面的示意图;图2j示例性示出了本发明实施例中第二终端显示场景图像界面的显示界面的示意图;图2k示例性示出了本发明实施例中第二终端在图2j上生成提示信息之后的显示界面的示意图;图2l示例性示出了本发明实施例中第一终端接收提示信息之后的显示界面示意图。如图2i所示,第一终端的显示界面显示第一终端当前所处的场景的场景图像界面,第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端之后,第二终端的显示界面如图2j所示。第二终端在图2j所示的场景图像界面上显示提示信息之后的显示界面示意图如图2k所示,第二终端将提示信息发送给第一终端,第一终端将提示信息显示在场景图像界面上之后第一终端的显示界面的示意图如图2l所示。
基于上述内容,为了更清楚介绍本发明实施例,下面结合图2m、图2n、图2o和图2p进行介绍。图2m示例性示出了本发明实施例中第一终端显示场景图像界面的显示界面的示意图;图2n示例性示出了本发明实施例中第二终端显示场景图像界面的显示界面的示意图;图2o示例性示出了本发明实施例中第二终端在图2n上生成提示信息之后的显示界面的示意图;图2p示例性示出了本发明实施例中第一终端接收提示信息之后的显示界面示意图。如图2m所示,第一终端的显示界面显示第一终端当前所处的场景的场景图像界面,第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端之后,第二终端的显示界面如图2n所示。第二终端在图2n所示的场景图像界面上显示提示信息之后的显示界面示意图如图2o所示,第二终端将提示信息发送给第一终端,第一终端将提示信息显示在场景图像界面上之后第一终端的显示界面的示意图如图2p所示。
可选地,场景图像界面包括第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,和包含第一终端当前所处场景的位置的GPS地图界面时,第一终端显示场景图像界面的显示界面上包 括第一区域和第二区域;
其中,第一区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,第二区域用于显示包含第一终端当前所处场景的位置的GPS地图界面或者不显示内容;或者
第一区域用于显示包括第一终端当前所处场景的位置的GPS地图界面,第二区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面或者不显示内容。
可选地,第一终端在显示的场景图像界面被触摸时,对第一区域与第二区域所显示的内容进行切换。
可选地,场景图像界面包括第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,和包含第一终端当前所处场景的位置的GPS地图界面时,第二终端显示场景图像界面的显示界面上包括第一区域和第二区域;
其中,第一区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,第二区域用于显示包含第一终端当前所处场景的位置的GPS地图界面或者不显示内容;或者
第一区域用于显示包括第一终端当前所处场景的位置的GPS地图界面,第二区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面或者不显示内容。
可选地,第二终端在显示的场景图像界面被触摸时,对第一区域与第二区域所显示的内容进行切换。
具体来说，本发明实施例中，第一终端和/或第二终端的第二区域不显示内容可包括多种实现形式，比如第二区域为一个按钮，或者第二区域为一个特殊的区域。
本发明实施例中第一终端和/或第二终端的显示界面上也可以仅仅显示第一区域，即第二区域不显示任何内容。当单击或双击第一区域时，即实现了第一区域显示的内容的切换。举例来说，第一终端的显示界面上仅仅显示第一区域，第一区域用于显示包含第一终端当前所处场景的位置的GPS地图界面，此时，双击第一区域，第一区域所显示的内容即切换为第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面。
下面以第一终端的显示界面为例进行介绍,图2q示例性示出了本发明实施例提供的一种显示界面的示意图,图2r示例性示出了本发明实施例提供的另一种显示界面的示意图。如图2q所示,第一终端的显示界面上包括第一区域21501和第二区域21502,如图2q所示,第一终端的第一区域显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,第一终端的第二区域用于显示包含第一终端当前所处场景的位置的GPS地图界面。通过在第一终端的显示界面上进行触摸,实现第一区域和第二区域所显示内容的切换,切换之后第一终端的显示界面如图2r所示,第一终端的第二区域显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,第一终端的第一区域用于显示包含第一终端当前所处场景的位置的GPS地图界面。
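下面给出一段最小的Kotlin示意代码，说明“触摸显示界面时交换第一区域与第二区域所显示内容”的一种可能建模方式；其中的枚举值与类名均为本文假设。

```kotlin
// 示意性草图：第一区域 / 第二区域内容在触摸时互换（摄像画面 <-> GPS地图）。
enum class Content { CAMERA_VIEW, GPS_MAP, NONE }

class SplitScreenState(
    var firstRegion: Content = Content.CAMERA_VIEW,
    var secondRegion: Content = Content.GPS_MAP
) {
    /** 显示的场景图像界面被触摸时，交换两个区域所显示的内容 */
    fun onTouchSwap() {
        val tmp = firstRegion
        firstRegion = secondRegion
        secondRegion = tmp
    }
}
```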
图2s示例性示出了本发明实施例提供的一种显示界面的示意图，如图2s所示，第一终端或第二终端的显示界面上同时可显示第一终端所连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面，以及包含第一终端当前所处场景的位置的GPS地图界面，以及一个即时对话框，即时对话框用于与第二终端进行实时对话。
可选地,本发明实施例中第一终端从与第二终端建立连接开始,至第二终端导航结束,该过程中第一终端可多次接收第二终端发送的提示信息,第二终端也可随时对提示信息进行修改。
从上述内容可看出，本发明实施例中，第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端；第一终端接收第二终端发送的提示信息；第一终端将提示信息显示在场景图像界面上；提示信息用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径，提示信息是所述第二终端根据共享的场景图像界面和目标点确定的。由于第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端，因此第一终端的用户可以通过场景图像界面更加准确的描述自己所处的场景，进而第二终端根据共享的场景图像界面，以及接收到的求助信息，可更加准确的确定出提示信息；进一步由于第一终端将提示信息显示在场景图像界面上，进而可使第一终端的用户更加简单且准确的确定出提示信息的含义，进而更加快速且便捷的通过提示信息寻找到目标点。
图3示例性示出了本发明实施例提供的一种基于场景共享的导航协助终端的结构示意图。基于相同构思,本发明实施例提供一种基于场景共享的导航协助终端的结构示意图,如图3所示,基于场景共享的导航协助终端300包括发送单元301、处理单元302和接收单元303。发送单元301,用于将第一终端当前所处的场景的场景图像界面共享给第二终端;接收单元303,用于接收第二终端发送的提示信息;处理单元302,用于将提示信息显示在场景图像界面上;提示信息用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径,提示信息是所述第二终端根据共享的场景图像界面和目标点确定的。
可选地,发送单元301,还用于在接收第二终端发送的提示信息之前,向第二终端发送求助信息,求助信息包括第一终端待寻找的目标点的信息。
可选地,场景图像界面包括第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面;和/或包含第一终端当前所处场景的位置的GPS地图界面。
可选地,提示信息包括下述信息中的至少一种:用于提示目标点的位置或者提示到达目标点的具体路径的标记信息;用于提示目标点的位置或者提示到达目标点的具体路径的文本信息;用于提示目标点的位置或者提示到达目标点的具体路径的音频信息。
可选地,在提示信息为用于提示目标点的位置或者提示到达目标点的具体路径的标记信息时,接收单元303还用于接收第二终端发送的标记信息在场景图像界面中的位置信息;处理单元302,用于根据接收到的位置信息,在 第一终端显示的场景图像界面的对应位置处显示标记信息。
可选地,处理单元302,还用于获取第一终端所连接的摄像设备进行移动的第一移动数据;将第一移动数据转换为标记信息进行移动的第二移动数据;根据转换后的第二移动数据对场景图像界面上显示的标记信息进行移动,以使移动后的标记信息与移动后的摄像设备所拍摄的场景图像界面匹配。
可选地,处理单元302,用于第一终端在与场景图像界面的显示平面平行的图层上的对应位置处显示标记信息。
可选地,处理单元302,用于第一终端通过加速度传感器和陀螺仪传感器,获取第一终端所连接的摄像设备进行移动的第一移动数据。
可选地,标记信息为以下标记中的任一项或任几项的组合:曲线标记信息、直线标记信息、带箭头的线标记信息、封闭图形标记信息。
可选地,场景图像界面包括第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,和包含第一终端当前所处场景的位置的GPS地图界面时,处理单元302显示场景图像界面的显示界面上包括第一区域和第二区域;其中,第一区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,第二区域用于显示包含第一终端当前所处场景的位置的GPS地图界面或者不显示内容;或者第一区域用于显示包括第一终端当前所处场景的位置的GPS地图界面,第二区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面或者不显示内容。
可选地,处理单元302,还用于:在显示的场景图像界面被触摸时,对第一区域与第二区域所显示的内容进行切换。
可选地,发送单元301,还用于:向第二终端发送求助请求;接收单元303,还用于:接收第二终端返回的接受求助响应;其中,接受求助响应用于使第一终端与第二终端之间建立界面共享连接。
可选地,接收单元303,还用于:接收第二终端发送的更新后的提示信息;
处理单元302,还用于:使用更新后的提示信息更新显示在场景图像界面 上的提示信息;其中,更新后的提示信息为对显示在场景图像界面上的提示信息进行修改之后得到的。
从上述内容可看出，本发明实施例中，第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端；第一终端接收第二终端发送的提示信息；第一终端将提示信息显示在场景图像界面上；提示信息用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径，提示信息是所述第二终端根据共享的场景图像界面和目标点确定的。由于第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端，因此第一终端的用户可以通过场景图像界面更加准确的描述自己所处的场景，进而第二终端根据共享的场景图像界面，以及接收到的求助信息，可更加准确的确定出提示信息；进一步由于第一终端将提示信息显示在场景图像界面上，进而可使第一终端的用户更加简单且准确的确定出提示信息的含义，进而更加快速且便捷的通过提示信息寻找到目标点。
图4示例性示出了本发明实施例提供的一种基于场景共享的导航协助终端的结构示意图。基于相同构思,本发明实施例提供一种基于场景共享的导航协助终端的结构示意图,如图4所示,基于场景共享的导航协助终端400包括发送单元401、处理单元402和接收单元403。接收单元403,用于接收第一终端共享的第一终端当前所处的场景的场景图像界面;处理单元402,用于根据共享的场景图像界面,确定出用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的提示信息;发送单元401,用于向第一终端发送提示信息,以使第一终端将提示信息显示在场景图像界面上。
可选地,接收单元403,还用于接收第一终端发送的求助信息,求助信息包括第一终端待寻找的目标点的信息;处理单元402,用于:根据共享的场景图像界面以及求助信息中的目标点的信息,确定出用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的提示信息。
可选地,场景图像界面包括:第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面;和/或包含第一终端当前 所处场景的位置的GPS地图界面。
可选地,提示信息包括下述信息中的至少一种:用于提示目标点的位置或者提示到达目标点的具体路径的标记信息;用于提示目标点的位置或者提示到达目标点的具体路径的文本信息;用于提示目标点的位置或者提示到达目标点的具体路径的音频信息。
可选地,在提示信息为用于提示目标点的位置或者提示到达目标点的具体路径的标记信息时,发送单元401,还用于:向第一终端发送标记信息在场景图像界面中的位置信息;其中,位置信息用于使第一终端根据接收到的位置信息在第一终端显示的场景图像界面的对应位置处显示标记信息。
可选地,场景图像界面为第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的视频界面;处理单元402,用于:在接收到的第一操作指令时,将第二终端显示的视频界面锁定为静态图片;在静态图片上显示接收到的用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的触摸轨迹;将触摸轨迹设置为标记信息,和/或根据触摸轨迹生成具体路径的文本信息,并将锁定的静态图片恢复为第一终端共享的视频界面。
可选地,第一操作指令为第二终端显示的视频界面被双击或被单击。
可选地,接收单元403,还用于:获取第一终端所连接的摄像设备进行移动的第一移动数据;处理单元402,还用于:将第一移动数据转换为标记信息进行移动的第二移动数据;根据转换后的第二移动数据对场景图像界面上显示的标记信息进行移动,以使移动后的标记信息与移动后的摄像设备所拍摄的场景图像界面匹配。
可选地，处理单元402，用于：在与场景图像界面的显示平面平行的图层上显示标记信息。可选地，第一移动数据为第一终端通过第一终端的加速度传感器和陀螺仪传感器获取的数据。可选地，标记信息为以下标记中的任一项或任几项的组合：曲线标记信息、直线标记信息、带箭头的线标记信息、封闭图形标记信息。
可选地,场景图像界面包括第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,和包含第一终端当前所处场景的位置的GPS地图界面时,处理单元402显示场景图像界面的显示界面上包括第一区域和第二区域;其中,第一区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,第二区域用于显示包含第一终端当前所处场景的位置的GPS地图界面或者不显示内容;或者第一区域用于显示包括第一终端当前所处场景的位置的GPS地图界面,第二区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面或者不显示内容。
可选地,处理单元402,还用于:在显示的场景图像界面被触摸时,对第一区域与第二区域所显示的内容进行切换。
可选地,接收单元403,还用于:接收第一终端发送的求助请求;发送单元401,还用于:向第一终端发送接受求助响应;其中,接受求助响应用于使第一终端与第二终端之间建立界面共享连接。
可选地，处理单元402，还用于：对提示信息进行修改，得到更新后的提示信息；发送单元401，用于：向第一终端发送更新后的提示信息，以使第一终端使用更新后的提示信息更新显示在场景图像界面上的提示信息。
从上述内容可看出，本发明实施例中，第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端；第一终端接收第二终端发送的提示信息；第一终端将提示信息显示在场景图像界面上；提示信息用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径，提示信息是所述第二终端根据共享的场景图像界面和目标点确定的。由于第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端，因此第一终端的用户可以通过场景图像界面更加准确的描述自己所处的场景，进而第二终端根据共享的场景图像界面，以及接收到的求助信息，可更加准确的确定出提示信息；进一步由于第一终端将提示信息显示在场景图像界面上，进而可使第一终端的用户更加简单且准确的确定出提示信息的含义，进而更加快速且便捷的通过提示信息寻找到目标点。
图5示例性示出了本发明实施例提供的一种基于场景共享的导航协助终端的结构示意图。基于相同构思,本发明实施例提供一种基于场景共享的导航协助终端的结构示意图,如图5所示,基于场景共享的导航协助终端500包括处理器501、发送器503、接收器504和存储器502。处理器501,用于读取存储器中的程序,执行下列过程:通过发送器503将第一终端当前所处的场景的场景图像界面共享给第二终端;通过接收器504接收第二终端发送的提示信息;将提示信息显示在场景图像界面上;提示信息用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径,提示信息是所述第二终端根据共享的场景图像界面和目标点确定的。
可选地,发送器503,还用于:在接收第二终端发送的提示信息之前,向第二终端发送求助信息,求助信息包括第一终端待寻找的目标点的信息。
可选地,场景图像界面包括:第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面;和/或包含第一终端当前所处场景的位置的GPS地图界面。
可选地,提示信息包括下述信息中的至少一种:用于提示目标点的位置或者提示到达目标点的具体路径的标记信息;用于提示目标点的位置或者提示到达目标点的具体路径的文本信息;用于提示目标点的位置或者提示到达目标点的具体路径的音频信息。
可选地,在提示信息为用于提示目标点的位置或者提示到达目标点的具体路径的标记信息时,接收器504还用于:接收第二终端发送的标记信息在场景图像界面中的位置信息;处理器501,用于:根据接收到的位置信息,在第一终端显示的场景图像界面的对应位置处显示标记信息。
可选地,处理器501,还用于:获取第一终端所连接的摄像设备进行移动的第一移动数据;将第一移动数据转换为标记信息进行移动的第二移动数据;根据转换后的第二移动数据对场景图像界面上显示的标记信息进行移动,以使移动后的标记信息与移动后的摄像设备所拍摄的场景图像界面匹配。
可选地,处理器501,用于:第一终端在与场景图像界面的显示平面平行的图层上的对应位置处显示标记信息。
可选地,处理器501,用于:第一终端通过加速度传感器和陀螺仪传感器,获取第一终端所连接的摄像设备进行移动的第一移动数据。
可选地,标记信息为以下标记中的任一项或任几项的组合:曲线标记信息、直线标记信息、带箭头的线标记信息、封闭图形标记信息。
可选地,场景图像界面包括第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,和包含第一终端当前所处场景的位置的GPS地图界面时,处理器501显示场景图像界面的显示界面上包括第一区域和第二区域;其中,第一区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,第二区域用于显示包含第一终端当前所处场景的位置的GPS地图界面或者不显示内容;或者第一区域用于显示包括第一终端当前所处场景的位置的GPS地图界面,第二区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面或者不显示内容。
可选地,处理器501,还用于:在显示的场景图像界面被触摸时,对第一区域与第二区域所显示的内容进行切换。
可选地,发送器503,还用于:向第二终端发送求助请求;
接收器504,还用于:接收第二终端返回的接受求助响应;其中,接受求助响应用于使第一终端与第二终端之间建立界面共享连接。
可选地,接收器504,还用于:接收第二终端发送的更新后的提示信息;处理器501,还用于:使用更新后的提示信息更新显示在场景图像界面上的提示信息;其中,更新后的提示信息为对显示在场景图像界面上的提示信息进行修改之后得到的。
从上述内容可看出，本发明实施例中，第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端；第一终端接收第二终端发送的提示信息；第一终端将提示信息显示在场景图像界面上；提示信息用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径，提示信息是所述第二终端根据共享的场景图像界面和目标点确定的。由于第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端，因此第一终端的用户可以通过场景图像界面更加准确的描述自己所处的场景，进而第二终端根据共享的场景图像界面，以及接收到的求助信息，可更加准确的确定出提示信息；进一步由于第一终端将提示信息显示在场景图像界面上，进而可使第一终端的用户更加简单且准确的确定出提示信息的含义，进而更加快速且便捷的通过提示信息寻找到目标点。
图6示例性示出了本发明实施例提供的一种基于场景共享的导航协助终端的结构示意图。基于相同构思,本发明实施例提供一种基于场景共享的导航协助终端的结构示意图,如图6所示,基于场景共享的导航协助终端600包括处理器601、发送器603、接收器604和存储器602。处理器601,用于读取存储器中的程序,执行下列过程:通过接收器604接收第一终端共享的第一终端当前所处的场景的场景图像界面;根据共享的场景图像界面,确定出用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的提示信息;通过发送器603向第一终端发送提示信息,以使第一终端将提示信息显示在场景图像界面上。
可选地,接收器604,还用于:接收第一终端发送的求助信息,求助信息包括第一终端待寻找的目标点的信息;处理器601,用于:根据共享的场景图像界面以及求助信息中的目标点的信息,确定出用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的提示信息。
可选地,场景图像界面包括:第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面;和/或包含第一终端当前所处场景的位置的GPS地图界面。
可选地,提示信息包括下述信息中的至少一种:用于提示目标点的位置或者提示到达目标点的具体路径的标记信息;用于提示目标点的位置或者提示到达目标点的具体路径的文本信息;用于提示目标点的位置或者提示到达 目标点的具体路径的音频信息。
可选地,在提示信息为用于提示目标点的位置或者提示到达目标点的具体路径的标记信息时,发送器603,还用于:向第一终端发送标记信息在场景图像界面中的位置信息;其中,位置信息用于使第一终端根据接收到的位置信息在第一终端显示的场景图像界面的对应位置处显示标记信息。
可选地,场景图像界面为第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的视频界面;处理器601,用于:在接收到的第一操作指令时,将第二终端显示的视频界面锁定为静态图片;在静态图片上显示接收到的用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径的触摸轨迹;将触摸轨迹设置为标记信息,和/或根据触摸轨迹生成具体路径的文本信息,并将锁定的静态图片恢复为第一终端共享的视频界面。
可选地,第一操作指令为第二终端显示的视频界面被双击或被单击。
可选地,接收器604,还用于:获取第一终端所连接的摄像设备进行移动的第一移动数据;处理器601,还用于:将第一移动数据转换为标记信息进行移动的第二移动数据;根据转换后的第二移动数据对场景图像界面上显示的标记信息进行移动,以使移动后的标记信息与移动后的摄像设备所拍摄的场景图像界面匹配。
可选地，处理器601，用于：在与场景图像界面的显示平面平行的图层上显示标记信息。可选地，第一移动数据为第一终端通过第一终端的加速度传感器和陀螺仪传感器获取的数据。可选地，标记信息为以下标记中的任一项或任几项的组合：曲线标记信息、直线标记信息、带箭头的线标记信息、封闭图形标记信息。
可选地,场景图像界面包括第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,和包含第一终端当前所处场景的位置的GPS地图界面时,处理器601显示场景图像界面的显示界面上包括第一区域和第二区域;其中,第一区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,第二 区域用于显示包含第一终端当前所处场景的位置的GPS地图界面或者不显示内容;或者第一区域用于显示包括第一终端当前所处场景的位置的GPS地图界面,第二区域用于显示第一终端连接的摄像设备对第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面或者不显示内容。
可选地,处理器601,还用于:在显示的场景图像界面被触摸时,对第一区域与第二区域所显示的内容进行切换。
可选地,接收器604,还用于:接收第一终端发送的求助请求;
发送器603，还用于向第一终端发送接受求助响应；其中，接受求助响应用于使第一终端与第二终端之间建立界面共享连接。
可选地，处理器601，还用于对提示信息进行修改，得到更新后的提示信息；
发送器603,用于向第一终端发送更新后的提示信息,以使第一终端使用更新后的提示信息更新显示在场景图像界面上的提示信息。
从上述内容可看出，本发明实施例中，第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端；第一终端接收第二终端发送的提示信息；第一终端将提示信息显示在场景图像界面上；提示信息用于提示第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径，提示信息是所述第二终端根据共享的场景图像界面和目标点确定的。由于第一终端将第一终端当前所处的场景的场景图像界面共享给第二终端，因此第一终端的用户可以通过场景图像界面更加准确的描述自己所处的场景，进而第二终端根据共享的场景图像界面，以及接收到的求助信息，可更加准确的确定出提示信息；进一步由于第一终端将提示信息显示在场景图像界面上，进而可使第一终端的用户更加简单且准确的确定出提示信息的含义，进而更加快速且便捷的通过提示信息寻找到目标点。
本领域内的技术人员应明白，本发明的实施例可提供为方法或计算机程序产品。因此，本发明可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且，本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质（包括但不限于磁盘存储器、CD-ROM、光学存储器等）上实施的计算机程序产品的形式。
本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
尽管已描述了本发明的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例作出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本发明范围的所有变更和修改。
显然,本领域的技术人员可以对本发明进行各种改动和变型而不脱离本发明的精神和范围。这样,倘若本发明的这些修改和变型属于本发明权利要求及其等同技术的范围之内,则本发明也意图包含这些改动和变型在内。

Claims (30)

  1. 一种基于场景共享的导航协助方法,其特征在于,包括:
    第一终端将所述第一终端当前所处的场景的场景图像界面共享给第二终端;
    所述第一终端接收所述第二终端发送的提示信息;
    所述第一终端将所述提示信息显示在所述场景图像界面上;
    所述提示信息用于提示所述第一终端待寻找的目标点的位置或者提示到达所述目标点的具体路径,所述提示信息是所述第二终端根据共享的所述场景图像界面和所述目标点确定的。
  2. 如权利要求1所述的方法,其特征在于,所述第一终端接收所述第二终端发送的提示信息之前,还包括:
    第一终端向第二终端发送求助信息,所述求助信息包括所述第一终端待寻找的所述目标点的信息。
  3. 如权利要求1或2所述的方法,其特征在于,所述场景图像界面包括:
    所述第一终端连接的摄像设备对所述第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面;和/或
    包含所述第一终端当前所处场景的位置的GPS地图界面。
  4. 如权利要求1至3任一权利要求所述的方法,其特征在于,所述提示信息包括下述信息中的至少一种:
    用于提示所述目标点的位置或者提示到达所述目标点的具体路径的标记信息;
    用于提示所述目标点的位置或者提示到达所述目标点的具体路径的文本信息;
    用于提示所述目标点的位置或者提示到达所述目标点的具体路径的音频信息。
  5. 如权利要求4所述的方法,其特征在于,在所述提示信息为用于提示 所述目标点的位置或者提示到达所述目标点的具体路径的标记信息时,所述方法还包括:
    所述第一终端接收所述第二终端发送的所述标记信息在所述场景图像界面中的位置信息;
    所述第一终端将所述提示信息显示在所述场景图像界面上,包括:
    所述第一终端根据接收到的所述位置信息,在所述第一终端显示的所述场景图像界面的对应位置处显示所述标记信息。
  6. 如权利要求5所述的方法,其特征在于,所述第一终端根据接收到的所述位置信息,在所述第一终端显示的所述场景图像界面的对应位置处显示所述标记信息之后,还包括:
    所述第一终端获取所述第一终端所连接的摄像设备进行移动的第一移动数据;
    所述第一终端将所述第一移动数据转换为所述标记信息进行移动的第二移动数据;
    所述第一终端根据转换后的所述第二移动数据对所述场景图像界面上显示的标记信息进行移动,以使移动后的所述标记信息与移动后的所述摄像设备所拍摄的场景图像界面匹配。
  7. 如权利要求5或6所述的方法,其特征在于,所述第一终端在所述第一终端显示的所述场景图像界面的对应位置处显示所述标记信息,包括:
    所述第一终端在与所述场景图像界面的显示平面平行的图层上的对应位置处显示所述标记信息。
  8. 如权利要求6所述的方法,其特征在于,所述第一终端获取所述第一终端所连接的摄像设备进行移动的第一移动数据,包括:
    所述第一终端通过加速度传感器和陀螺仪传感器,获取所述第一终端所连接的摄像设备进行移动的第一移动数据。
  9. 如权利要求4至8任一权利要求所述的方法,其特征在于,所述标记信息为以下标记中的任一项或任几项的组合:
    曲线标记信息、直线标记信息、带箭头的线标记信息、封闭图形标记信息。
  10. 如权利要求3所述的方法,其特征在于,所述场景图像界面包括所述第一终端连接的摄像设备对所述第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,和包含所述第一终端当前所处场景的位置的GPS地图界面时,所述第一终端显示所述场景图像界面的显示界面上包括第一区域和第二区域;
    其中,所述第一区域用于显示所述第一终端连接的摄像设备对所述第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,所述第二区域用于显示包含所述第一终端当前所处场景的位置的GPS地图界面或者不显示内容;或者
    所述第一区域用于显示包括所述第一终端当前所处场景的位置的GPS地图界面,所述第二区域用于显示所述第一终端连接的摄像设备对所述第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面或者不显示内容。
  11. 如权利要求10所述的方法,其特征在于,还包括:
    所述第一终端在显示的场景图像界面被触摸时,对所述第一区域与所述第二区域所显示的内容进行切换。
  12. 如权利要求1至11任一权利要求所述的方法,其特征在于,所述第一终端将所述第一终端当前所处的场景的场景图像界面共享给第二终端之前,还包括:
    所述第一终端向所述第二终端发送求助请求;
    所述第一终端接收所述第二终端返回的接受求助响应;
    其中,所述接受求助响应用于使所述第一终端与所述第二终端之间建立界面共享连接。
  13. 如权利要求1至12任一权利要求所述的方法,其特征在于,所述第一终端将所述提示信息显示在所述场景图像界面上之后,还包括:
    所述第一终端接收所述第二终端发送的更新后的提示信息;
    所述第一终端使用所述更新后的提示信息更新显示在所述场景图像界面上的所述提示信息;
    其中,所述更新后的提示信息为对显示在所述场景图像界面上的所述提示信息进行修改之后得到的。
  14. 一种基于场景共享的导航协助方法,其特征在于,包括:
    第二终端接收第一终端共享的所述第一终端当前所处的场景的场景图像界面;
    所述第二终端根据共享的所述场景图像界面,确定出用于提示目标点的位置或者提示到达所述目标点的具体路径的提示信息;
    所述第二终端向所述第一终端发送所述提示信息,以使所述第一终端将所述提示信息显示在所述场景图像界面上。
  15. 如权利要求14所述的方法,其特征在于,所述第二终端确定出用于提示所述目标点的位置或者提示到达所述目标点的具体路径的提示信息之前,还包括:
    第二终端接收第一终端发送的求助信息,所述求助信息包括所述第一终端待寻找的所述目标点的信息;
    所述第二终端根据共享的所述场景图像界面,确定出用于提示所述目标点的位置或者提示到达所述目标点的具体路径的提示信息,具体包括:
    所述第二终端根据共享的所述场景图像界面以及所述求助信息中的所述目标点的信息,确定出用于提示所述目标点的位置或者提示到达所述目标点的具体路径的提示信息。
  16. 如权利要求14或15所述的方法,其特征在于,所述场景图像界面包括:
    所述第一终端连接的摄像设备对所述第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面;和/或
    包含所述第一终端当前所处场景的位置的GPS地图界面。
  17. 如权利要求14至16任一权利要求所述的方法,其特征在于,所述 提示信息包括下述信息中的至少一种:
    用于提示所述目标点的位置或者提示到达所述目标点的具体路径的标记信息;
    用于提示所述目标点的位置或者提示到达所述目标点的具体路径的文本信息;
    用于提示所述目标点的位置或者提示到达所述目标点的具体路径的音频信息。
  18. 如权利要求17所述的方法,其特征在于,在所述提示信息为用于提示所述目标点的位置或者提示到达所述目标点的具体路径的标记信息时,所述方法还包括:
    所述第二终端向所述第一终端发送所述标记信息在所述场景图像界面中的位置信息;
    其中,所述位置信息用于使所述第一终端根据接收到的所述位置信息在所述第一终端显示的所述场景图像界面的对应位置处显示所述标记信息。
  19. 如权利要求17所述的方法,其特征在于,所述场景图像界面为所述第一终端连接的摄像设备对所述第一终端当前所处的场景进行拍摄所得到的视频界面;
    所述第二终端根据共享的所述场景图像界面,确定出用于提示所述目标点的位置或者提示到达所述目标点的具体路径的提示信息,包括:
    所述第二终端在接收到的第一操作指令时,将所述第二终端显示的所述视频界面锁定为静态图片;
    所述第二终端在所述静态图片上显示接收到的用于提示所述目标点的位置或者提示到达所述目标点的具体路径的触摸轨迹;
    所述第二终端将所述触摸轨迹设置为所述标记信息,和/或根据所述触摸轨迹生成所述具体路径的文本信息,并将锁定的所述静态图片恢复为所述第一终端共享的所述视频界面。
  20. 如权利要求19所述的方法,其特征在于,所述第一操作指令为所述 第二终端显示的所述视频界面被双击或被单击。
  21. 如权利要求18至20任一权利要求所述的方法,其特征在于,所述第二终端向所述第一终端发送所述标记信息在所述场景图像界面中的位置信息之后,还包括:
    所述第二终端获取所述第一终端所连接的摄像设备进行移动的第一移动数据;
    所述第二终端将所述第一移动数据转换为所述标记信息进行移动的第二移动数据;
    所述第二终端根据转换后的所述第二移动数据对所述场景图像界面上显示的标记信息进行移动,以使移动后的所述标记信息与移动后的所述摄像设备所拍摄的场景图像界面匹配。
  22. 如权利要求18至21任一权利要求所述的方法,其特征在于,所述第二终端确定出所述提示信息之后,还包括:
    所述第二终端在与所述场景图像界面的显示平面平行的图层上显示所述标记信息。
  23. 如权利要求21所述的方法,其特征在于,所述第一移动数据为所述第一终端通过所述第一终端的加速度传感器和陀螺仪传感器获取的数据。
  24. 如权利要求17至23任一权利要求所述的方法,其特征在于,所述标记信息为以下标记中的任一项或任几项的组合:
    曲线标记信息、直线标记信息、带箭头的线标记信息、封闭图形标记信息。
  25. 如权利要求16所述的方法,其特征在于,所述场景图像界面包括所述第一终端连接的摄像设备对所述第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面,和包含所述第一终端当前所处场景的位置的GPS地图界面时,所述第二终端显示所述场景图像界面的显示界面上包括第一区域和第二区域;
    其中,所述第一区域用于显示所述第一终端连接的摄像设备对所述第一 终端当前所处的场景进行拍摄所得到的图像界面或视频界面,所述第二区域用于显示包含所述第一终端当前所处场景的位置的GPS地图界面或者不显示内容;或者
    所述第一区域用于显示包括所述第一终端当前所处场景的位置的GPS地图界面,所述第二区域用于显示所述第一终端连接的摄像设备对所述第一终端当前所处的场景进行拍摄所得到的图像界面或视频界面或者不显示内容。
  26. 如权利要求25所述的方法,其特征在于,还包括:
    所述第二终端在显示的场景图像界面被触摸时,对所述第一区域与所述第二区域所显示的内容进行切换。
  27. 如权利要求14至26任一权利要求所述的方法,其特征在于,所述第二终端接收第一终端共享的所述第一终端当前所处的场景的场景图像界面之前,还包括:
    所述第二终端接收所述第一终端发送的求助请求;
    所述第二终端向所述第一终端发送接受求助响应;
    其中,所述接受求助响应用于使所述第一终端与所述第二终端之间建立界面共享连接。
  28. 如权利要求14至27任一权利要求所述的方法,其特征在于,所述第二终端向所述第一终端发送所述提示信息之后,还包括:
    所述第二终端对所述提示信息进行修改,得到更新后的提示信息;
    所述第二终端向所述第一终端发送所述更新后的提示信息,以使所述第一终端使用所述更新后的提示信息更新显示在所述场景图像界面上的所述提示信息。
  29. 一种基于场景共享的导航协助终端,其特征在于,所述终端包括发送器、接收器、存储器和处理器;
    所述存储器用于存储指令，所述处理器用于执行所述存储器存储的指令，并控制所述发送器和所述接收器进行信号接收和信号发送，当所述处理器执行所述存储器存储的指令时，所述终端用于执行如权利要求1-13任一所述的方法。
  30. 一种基于场景共享的导航协助终端,其特征在于,所述终端包括发送器、接收器、存储器和处理器;
    所述存储器用于存储指令，所述处理器用于执行所述存储器存储的指令，并控制所述发送器和所述接收器进行信号接收和信号发送，当所述处理器执行所述存储器存储的指令时，所述终端用于执行如权利要求14-28任一所述的方法。
PCT/CN2016/111558 2016-01-28 2016-12-22 一种基于场景共享的导航协助方法及终端 WO2017128895A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020187008449A KR102046841B1 (ko) 2016-01-28 2016-12-22 장면 공유 기반 내비게이션 지원 방법 및 단말
BR112018008091-8A BR112018008091A2 (zh) 2016-01-28 2016-12-22 A method of navigation assistance based on scene sharing and terminal
US15/923,415 US10959049B2 (en) 2016-01-28 2018-03-16 Scene sharing-based navigation assistance method and terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610058850.6A CN107015246B (zh) 2016-01-28 2016-01-28 一种基于场景共享的导航协助方法及终端
CN201610058850.6 2016-01-28

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/923,415 Continuation US10959049B2 (en) 2016-01-28 2018-03-16 Scene sharing-based navigation assistance method and terminal

Publications (1)

Publication Number Publication Date
WO2017128895A1 true WO2017128895A1 (zh) 2017-08-03

Family

ID=59397348

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/111558 WO2017128895A1 (zh) 2016-01-28 2016-12-22 一种基于场景共享的导航协助方法及终端

Country Status (5)

Country Link
US (1) US10959049B2 (zh)
KR (1) KR102046841B1 (zh)
CN (1) CN107015246B (zh)
BR (1) BR112018008091A2 (zh)
WO (1) WO2017128895A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111427528A (zh) * 2020-03-20 2020-07-17 北京字节跳动网络技术有限公司 显示方法、装置和电子设备

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580691B (zh) * 2014-12-02 2018-06-19 惠州Tcl移动通信有限公司 一种短信删除方法及终端
CN107995420B (zh) * 2017-11-30 2021-02-05 努比亚技术有限公司 远程合影控制方法、双面屏终端及计算机可读存储介质
CN108076437A (zh) * 2018-01-01 2018-05-25 刘兴丹 一种含有图片、定位信息及移动轨迹的地图软件的方法、装置
CN111132000B (zh) * 2018-10-15 2023-05-23 上海博泰悦臻网络技术服务有限公司 一种位置共享的方法及系统
CN110455304A (zh) * 2019-08-05 2019-11-15 深圳市大拿科技有限公司 车辆导航方法、装置及系统
US11599253B2 (en) * 2020-10-30 2023-03-07 ROVl GUIDES, INC. System and method for selection of displayed objects by path tracing
CN113065456A (zh) * 2021-03-30 2021-07-02 上海商汤智能科技有限公司 信息提示方法、装置、电子设备和计算机存储介质
KR20230014479A (ko) * 2021-07-21 2023-01-30 삼성전자주식회사 통합 화면 표시 방법 및 이를 지원하는 전자 장치
CN113739801A (zh) * 2021-08-23 2021-12-03 上海明略人工智能(集团)有限公司 用于侧边栏的导航路线获取方法、系统、介质及电子设备
US20230161539A1 (en) * 2021-11-23 2023-05-25 International Business Machines Corporation Assisted collaborative navigation in screen share environments

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102607579A (zh) * 2012-03-14 2012-07-25 深圳市赛格导航科技股份有限公司 一种车载导航终端及系统
CN102801653A (zh) * 2012-08-15 2012-11-28 上海量明科技发展有限公司 通过即时通信圈子导航的方法及系统
CN103185583A (zh) * 2011-12-30 2013-07-03 上海博泰悦臻电子设备制造有限公司 车辆位置分享方法、车辆位置分享系统
CN104539667A (zh) * 2014-12-16 2015-04-22 深圳市华宝电子科技有限公司 基于北斗系统的车辆位置共享方法及装置
CN104613971A (zh) * 2014-12-31 2015-05-13 苏州佳世达光电有限公司 一种导航信息共享系统及方法

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4026071B2 (ja) 2003-09-25 2007-12-26 ソニー株式会社 車載装置及びコンテンツ提供方法
US9766089B2 (en) 2009-12-14 2017-09-19 Nokia Technologies Oy Method and apparatus for correlating and navigating between a live image and a prerecorded panoramic image
CN103002244B (zh) 2011-09-09 2016-03-30 联想(北京)有限公司 一种交互式视频通话的方法和通话终端
WO2012097616A2 (zh) 2011-10-24 2012-07-26 华为终端有限公司 终端位置共享的方法和终端设备
KR101634321B1 (ko) * 2011-11-04 2016-06-28 한국전자통신연구원 멀티모드 경로 검색 장치 및 방법
WO2013074919A2 (en) * 2011-11-16 2013-05-23 Flextronics Ap , Llc Universal bus in the car
CN103968822B (zh) * 2013-01-24 2018-04-13 腾讯科技(深圳)有限公司 导航方法、用于导航的设备和导航系统
JP6097679B2 (ja) * 2013-02-28 2017-03-15 エルジー アプラス コーポレーション 端末間機能共有方法及びその端末
KR102010298B1 (ko) * 2013-05-21 2019-10-21 엘지전자 주식회사 영상표시장치 및 영상표시장치의 동작방법
CN103347046B (zh) * 2013-06-06 2017-03-01 百度在线网络技术(北京)有限公司 一种基于位置的信息交互方法及服务器
CN103383262A (zh) * 2013-07-11 2013-11-06 北京奇虎科技有限公司 电子地图路线指引的方法和系统
KR102222336B1 (ko) * 2013-08-19 2021-03-04 삼성전자주식회사 맵 화면을 디스플레이 하는 사용자 단말 장치 및 그 디스플레이 방법
US9244940B1 (en) 2013-09-27 2016-01-26 Google Inc. Navigation paths for panorama
CN104618854B (zh) * 2014-05-20 2018-07-10 腾讯科技(深圳)有限公司 共享位置信息的方法、终端及服务器
CN104297763A (zh) 2014-10-22 2015-01-21 成都西可科技有限公司 一种基于手机的远程跟踪指路系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103185583A (zh) * 2011-12-30 2013-07-03 上海博泰悦臻电子设备制造有限公司 车辆位置分享方法、车辆位置分享系统
CN102607579A (zh) * 2012-03-14 2012-07-25 深圳市赛格导航科技股份有限公司 一种车载导航终端及系统
CN102801653A (zh) * 2012-08-15 2012-11-28 上海量明科技发展有限公司 通过即时通信圈子导航的方法及系统
CN104539667A (zh) * 2014-12-16 2015-04-22 深圳市华宝电子科技有限公司 基于北斗系统的车辆位置共享方法及装置
CN104613971A (zh) * 2014-12-31 2015-05-13 苏州佳世达光电有限公司 一种导航信息共享系统及方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111427528A (zh) * 2020-03-20 2020-07-17 北京字节跳动网络技术有限公司 显示方法、装置和电子设备
CN111427528B (zh) * 2020-03-20 2023-07-25 北京字节跳动网络技术有限公司 显示方法、装置和电子设备

Also Published As

Publication number Publication date
BR112018008091A2 (zh) 2018-10-23
CN107015246B (zh) 2021-03-30
US20180213358A1 (en) 2018-07-26
KR20180044380A (ko) 2018-05-02
CN107015246A (zh) 2017-08-04
US10959049B2 (en) 2021-03-23
KR102046841B1 (ko) 2019-11-20

Similar Documents

Publication Publication Date Title
WO2017128895A1 (zh) 一种基于场景共享的导航协助方法及终端
US9699373B2 (en) Providing navigation information to a point of interest on real-time street views using a mobile device
US11060880B2 (en) Route planning method and apparatus, computer storage medium, terminal
JP2016048247A (ja) ローカル・マップ及び位置特有の注釈付きデータを提供するための人間援助型の技術
JP6296056B2 (ja) 画像処理装置、画像処理方法及びプログラム
US9664527B2 (en) Method and apparatus for providing route information in image media
US10989559B2 (en) Methods, systems, and devices for displaying maps
US9909878B2 (en) Method and apparatus for triggering conveyance of guidance information
KR101413011B1 (ko) 위치 정보 기반 증강현실 시스템 및 제공 방법
US20120303265A1 (en) Navigation system with assistance for making multiple turns in a short distance
CN112433211B (zh) 一种位姿确定方法及装置、电子设备和存储介质
JP2010276364A (ja) ナビ情報作成装置、ナビゲーションシステム、ナビゲーション情報作成方法、およびナビゲーション方法
WO2022237071A1 (zh) 定位方法及装置、电子设备、存储介质和计算机程序
JP5527005B2 (ja) 位置推定装置、位置推定方法及び位置推定プログラム
US9596404B2 (en) Method and apparatus for generating a media capture request using camera pose information
JP4611400B2 (ja) ナビゲーション支援装置
So-In et al. A new mobile phone system architecture for the navigational travelling blind
JP2005164430A (ja) ナビゲーション装置及びそのプログラム
JP2014142687A (ja) 情報処理装置、情報処理方法、およびプログラム
JPH09145399A (ja) 携帯用通信案内装置及び通信案内方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16887766

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20187008449

Country of ref document: KR

Kind code of ref document: A

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112018008091

Country of ref document: BR

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 112018008091

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20180420

122 Ep: pct application non-entry in european phase

Ref document number: 16887766

Country of ref document: EP

Kind code of ref document: A1