WO2017054358A1 - Navigation method and device - Google Patents

Navigation method and device

Info

Publication number
WO2017054358A1
WO2017054358A1 · PCT/CN2015/099732
Authority
WO
WIPO (PCT)
Prior art keywords
information
video
end point
path navigation
target
Application number
PCT/CN2015/099732
Other languages
English (en)
French (fr)
Inventor
刘国明 (Liu Guoming)
Original Assignee
小米科技有限责任公司 (Xiaomi Inc.)
Application filed by 小米科技有限责任公司 (Xiaomi Inc.)
Priority to KR1020167007053A (KR101870052B1)
Priority to MX2016004100A (MX368765B)
Priority to JP2017542265A (JP6387468B2)
Priority to RU2016112941A (RU2636270C2)
Publication of WO2017054358A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/26: Navigation specially adapted for navigation in a road network
    • G01C 21/34: Route searching; Route guidance
    • G01C 21/36: Input/output arrangements for on-board computers
    • G01C 21/3626: Details of the output of route guidance instructions
    • G01C 21/3647: Guidance involving output of stored or live camera images or video streams
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00: Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39: Systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42: Determining position
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval of video data
    • G06F 16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783: Retrieval using metadata automatically derived from the content
    • G06F 16/7844: Retrieval using original textual content or text extracted from visual content or transcript of audio data
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G06F 40/00: Handling natural language data
    • G06F 40/20: Natural language analysis
    • G06F 40/279: Recognition of textual entities
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Definitions

  • the present disclosure relates to the field of navigation technologies, and in particular, to a navigation method and apparatus.
  • in the related art, navigation is basically performed through maps and positioning information; for indoor navigation, infrared sensing devices are usually installed manually in advance, and these pre-installed devices locate the user's current position; a navigation path is then determined based on the user's start and end positions, and navigation proceeds based on the user's current position and the navigation path.
  • the present disclosure provides a navigation method and apparatus.
  • a navigation method comprising:
  • acquiring, based on the starting point information and the end point information, a target path navigation video from a start point to an end point includes:
  • the target path navigation video is acquired based on the start position information and the end position information, the start point information includes the start position information, and the end point information includes the end position information.
  • optionally, the starting point information includes a starting point environment image and the end point information includes an end point environment image, and acquiring the target path navigation video based on the start position information and the end position information includes: extracting starting point reference object information from the starting point environment image and end point reference object information from the end point environment image, determining them as the start position information and the end position information, and acquiring the target path navigation video accordingly;
  • optionally, the starting point information includes a starting point environment image and the end point information includes an end point environment image, and acquiring the target path navigation video based on the start position information and the end position information includes: extracting starting point text information from the starting point environment image and end point text information from the end point environment image, determining them as the start position information and the end position information, and acquiring the target path navigation video accordingly.
  • optionally, acquiring the target path navigation video from the start point to the end point based on the starting point information and the end point information includes: intercepting the target path navigation video from a stored candidate path navigation video;
  • optionally, acquiring the target path navigation video from the start point to the end point based on the starting point information and the end point information includes: acquiring the target path navigation video from a plurality of stored candidate path navigation videos.
  • optionally, before acquiring the target path navigation video from the start point to the end point, the method further includes acquiring a candidate path navigation video.
  • acquiring the candidate path navigation video includes:
  • acquiring mobile video and location information, where the location information corresponds to the target image collected while the video capture device is in a stationary state during acquisition of the mobile video;
  • associating the location information with the target image to obtain the candidate path navigation video.
  • optionally, the location information includes reference object information or text information.
  • the method further includes:
  • a navigation method comprising:
  • the target path navigation video is played in response to the received navigation triggering operation.
  • the starting point information includes a starting point environment image
  • the end point information includes an end point environment image
  • acquiring the start point information and the end point information includes:
  • acquiring the start point environment image and the end point environment image.
  • the playing the target path navigation video includes:
  • the playing the target path navigation video includes:
  • the route confirmation prompt information is used to prompt the user to confirm whether the user has deviated from the target path;
  • a route re-planning request is sent to the server, so that the server acquires a new target path navigation video based on the path re-planning request.
  • the method further includes:
  • transmitting the mobile video and the location information to the server, so that the server associates the location information with a target image, where the location information corresponds to the target image collected while the video capture device is in a stationary state during acquisition of the mobile video.
  • the method further includes:
  • a navigation device comprising:
  • a first receiving module configured to receive start point information and end point information sent by the target device
  • a first acquiring module configured to acquire, according to the starting point information and the end point information received by the first receiving module, a target path navigation video from a starting point to an ending point;
  • a first sending module configured to send, to the target device, the target path navigation video acquired by the first acquiring module.
  • the first acquiring module includes:
  • a first acquiring unit configured to acquire the target path navigation video based on the start position information and the end position information, where the start point information includes the start position information, and the end point information includes the end position information.
  • the starting point information includes a starting point environment image
  • the end point information includes an end point environment image
  • the first obtaining unit includes:
  • a first extracting subunit configured to extract starting point reference object information from the starting point environment image, and extract end point reference object information from the end point environment image;
  • a first determining subunit configured to determine the starting point reference object information extracted by the first extracting subunit as starting point position information, and determine the end point reference object information extracted by the first extracting subunit as end point position information;
  • a first acquiring subunit configured to acquire the target path navigation video based on the starting point reference object information and the end point reference object information determined by the first determining subunit.
  • the starting point information includes a starting point environment image
  • the end point information includes an end point environment image
  • the first obtaining unit includes:
  • a second extracting subunit configured to extract starting point text information from the starting point environment image, and extract end point text information from the end point environment image;
  • a second determining subunit configured to determine the starting point text information extracted by the second extracting subunit as starting point position information, and determine the end point text information extracted by the second extracting subunit as end point position information;
  • a second acquiring subunit configured to acquire the target path navigation video based on the starting point text information and the end point text information determined by the second determining subunit.
  • the first acquiring module includes:
  • an intercepting unit configured to intercept the target path navigation video from a stored candidate path navigation video based on the start point information and the end point information.
  • the first acquiring module includes:
  • a second acquiring unit configured to acquire the target path navigation video from the stored plurality of candidate path navigation videos based on the start point information and the end point information.
  • the device further includes:
  • a second acquiring module configured to acquire a candidate path navigation video.
  • the second acquiring module includes:
  • a third acquiring unit configured to acquire mobile video and location information, where the location information corresponds to the target image collected while the video capture device is in a stationary state during acquisition of the mobile video;
  • an association unit configured to associate the location information acquired by the third acquiring unit with the target image to obtain a candidate path navigation video.
  • the location information includes reference information or text information.
  • the device further includes:
  • a second receiving module configured to receive a path re-planning request sent by the target device
  • a third acquiring module configured to acquire a new target path navigation video based on the path re-planning request received by the second receiving module
  • a second sending module configured to send the new target path navigation video acquired by the third acquiring module to the target device, so that the target device performs navigation based on the new target path navigation video.
  • a navigation apparatus comprising:
  • a first obtaining module configured to acquire start point information and end point information
  • a first sending module configured to send, to the server, the starting point information and the end point information acquired by the first acquiring module
  • a receiving module configured to receive a target path navigation video from a start point to an end point sent by the server, where the target path navigation video is obtained by the server based on the start point information and the end point information sent by the first sending module;
  • a playing module configured to play the target path navigation video received by the receiving module in response to the received navigation triggering operation.
  • the starting point information includes a starting point environment image
  • the end point information includes an end point environment image
  • the first obtaining module includes:
  • an obtaining unit configured to acquire a starting environment image and an ending environment image when receiving the navigation instruction.
  • the playing module includes:
  • a detecting unit configured to detect a current moving speed
  • a playing unit configured to play the target path navigation video based on the motion speed detected by the detecting unit, so that a playing speed of the target path navigation video is equal to the moving speed.
  • the playing module includes:
  • a display unit configured to display route confirmation prompt information when playback reaches a target image position in the target path navigation video, where the route confirmation prompt information is used to prompt the user to confirm whether the user has deviated from the target path;
  • a sending unit configured to: when a route re-planning instruction is received based on the route confirmation prompt information displayed by the display unit, send a route re-planning request to the server, so that the server acquires a new target path navigation video based on the route re-planning request.
  • the device further includes:
  • a second acquiring module configured to acquire mobile video and location information
  • a second sending module configured to send the mobile video and the location information acquired by the second acquiring module to the server, so that the server associates the location information with a target image, where the location information corresponds to the target image collected while the video capture device is in a stationary state during acquisition of the mobile video.
  • the device further includes:
  • a third obtaining module configured to acquire mobile video and location information
  • an association module configured to associate the location information acquired by the third acquiring module with a target image to obtain a candidate path navigation video, where the location information corresponds to the target image collected while the video capture device is in a stationary state during acquisition of the mobile video;
  • a third sending module configured to send the candidate path navigation video obtained by the association module to the server.
  • a navigation apparatus comprising:
  • a memory for storing processor executable instructions
  • wherein the processor is configured to:
  • a navigation apparatus comprising:
  • a memory for storing processor executable instructions
  • wherein the processor is configured to:
  • the target path navigation video is played in response to the received navigation triggering operation.
  • the technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects: the start point information and the end point information sent by the target device are received; a target path navigation video from the start point to the end point is acquired based on them; and the target path navigation video is sent to the target device, so that the target device navigates based on it. Navigation thus becomes more intuitive, the navigation threshold is lowered, no infrared sensing devices need to be manually installed, the solution is universal and adaptable, and a large amount of physical equipment and labor is saved.
  • FIG. 1 is a flow chart showing a navigation method according to an exemplary embodiment.
  • FIG. 2 is a flow chart showing a navigation method according to an exemplary embodiment.
  • FIG. 3 is a flowchart of a navigation method according to an exemplary embodiment.
  • FIG. 4 is a block diagram of a navigation device, according to an exemplary embodiment.
  • FIG. 5 is a block diagram of a first acquisition module, according to an exemplary embodiment.
  • FIG. 6 is a block diagram of a first acquisition unit, according to an exemplary embodiment.
  • FIG. 7 is a block diagram of a first obtaining unit, according to an exemplary embodiment.
  • FIG. 8 is a block diagram of a first acquisition module, according to an exemplary embodiment.
  • FIG. 9 is a block diagram of a first acquisition module, according to an exemplary embodiment.
  • FIG. 10 is a block diagram of a navigation device, according to an exemplary embodiment.
  • FIG. 11 is a block diagram of a second acquisition module, according to an exemplary embodiment.
  • FIG. 12 is a block diagram of a navigation device, according to an exemplary embodiment.
  • FIG. 13 is a block diagram of a navigation device, according to an exemplary embodiment.
  • FIG. 14 is a block diagram of a first acquisition module, according to an exemplary embodiment.
  • FIG. 15 is a block diagram of a playback module, according to an exemplary embodiment.
  • FIG. 16 is a block diagram of a playback module, according to an exemplary embodiment.
  • FIG. 17 is a block diagram of a navigation device, according to an exemplary embodiment.
  • FIG. 18 is a block diagram of a navigation device, according to an exemplary embodiment.
  • FIG. 19 is a block diagram of an apparatus for navigation, according to an exemplary embodiment.
  • FIG. 20 is a block diagram of an apparatus for navigation, according to an exemplary embodiment.
  • FIG. 1 is a flowchart of a navigation method according to an exemplary embodiment. As shown in FIG. 1 , the navigation method is used in a server, and includes the following steps.
  • in step 101, the start point information and the end point information transmitted by the target device are received.
  • in step 102, based on the start point information and the end point information, a target path navigation video from the start point to the end point is acquired.
  • in step 103, the target path navigation video is sent to the target device.
  • the start point information and the end point information sent by the target device are received; based on them, the target path navigation video from the start point to the end point is acquired and sent to the target device, so that the target device navigates based on the target path navigation video. This makes navigation more intuitive, lowers the navigation threshold, eliminates the need to manually install infrared sensing devices, is universal and adaptable, and saves a large amount of physical equipment and labor.
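The server-side flow of steps 101 to 103 can be sketched as a minimal lookup service. This is purely illustrative: the in-memory store, the (start, end) keying, and all names are assumptions, not part of the disclosure.

```python
# Illustrative sketch of steps 101-103 (hypothetical names and storage).
# Candidate path navigation videos are keyed by (start, end) position
# information; each "video" is just an ordered list of frame labels here.
CANDIDATE_VIDEOS = {
    ("entrance", "gate_b"): ["entrance", "hall", "escalator", "gate_b"],
}

def handle_navigation_request(start_info, end_info):
    """Step 101: receive the start/end info sent by the target device;
    step 102: acquire the target path navigation video based on it;
    step 103: return the video so it can be sent to the target device."""
    video = CANDIDATE_VIDEOS.get((start_info, end_info))
    if video is None:
        raise LookupError("no candidate path navigation video matches")
    return video
```

A real implementation would match environment images or text extracted from them rather than exact string keys.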
  • the target path navigation video from the start point to the end point is obtained based on the start point information and the end point information, including:
  • the target path navigation video is acquired based on the start position information and the end position information, and the start point information includes the start position information, and the end point information includes the end position information.
  • the start point information includes a start point environment image
  • the end point information includes an end point environment image
  • acquiring the target path navigation video based on the start position information and the end position information includes:
  • starting point reference object information is extracted from the starting point environment image and end point reference object information is extracted from the end point environment image, and the target path navigation video is acquired based on the starting point reference object information and the end point reference object information.
  • the start point information includes a start point environment image
  • the end point information includes an end point environment image
  • acquiring the target path navigation video based on the start position information and the end position information includes:
  • starting point text information is extracted from the starting point environment image and end point text information is extracted from the end point environment image, and the target path navigation video is acquired based on the starting point text information and the end point text information.
  • optionally, acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information includes: intercepting the target path navigation video from a stored candidate path navigation video;
  • optionally, acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information includes: acquiring the target path navigation video from a plurality of stored candidate path navigation videos.
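The intercepting variant described in this document (an intercepting unit that clips the target path navigation video out of a stored candidate video) can be sketched as selecting the segment between the frames whose associated location information matches the start and end points. The frame/location representation below is an assumption for illustration.

```python
# Hypothetical sketch of the "intercepting" variant: clip the segment of
# a stored candidate path navigation video between the frames matching
# the start point and end point information.
def intercept_segment(candidate_frames, start_info, end_info):
    """candidate_frames: ordered list of (frame_id, location) tuples."""
    locs = [loc for _, loc in candidate_frames]
    i, j = locs.index(start_info), locs.index(end_info)
    if i > j:
        # The stored video was recorded in the opposite direction;
        # play the clipped segment in reverse.
        return candidate_frames[j:i + 1][::-1]
    return candidate_frames[i:j + 1]

frames = [("f0", "lobby"), ("f1", "hall"), ("f2", "gate")]
```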
  • before acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information, the method further includes: acquiring a candidate path navigation video.
  • acquiring a candidate path navigation video includes:
  • the location information corresponds to the target image collected while the video capture device is in a stationary state during acquisition of the mobile video;
  • the location information is associated with the target image to obtain a candidate path navigation video.
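One hedged sketch of the association just described: frames captured while the video capture device is stationary become target images and carry location information (a reference object or text read from the scene), while all other frames carry none. The data layout is an assumption, not part of the disclosure.

```python
# Sketch of building a candidate path navigation video (hypothetical
# structure): each frame is paired with the location information of the
# target image, if any, captured at that point while stationary.
def build_candidate_video(frames, stationary_locations):
    """frames: list of frame ids; stationary_locations: {frame_id: location}."""
    return [
        {"frame": f, "location": stationary_locations.get(f)}
        for f in frames
    ]

# f1 is a target image associated with the text "Exit C sign";
# the other frames were captured while moving and carry no location.
video = build_candidate_video(["f0", "f1", "f2"], {"f1": "Exit C sign"})
```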
  • the location information includes reference information or text information.
  • the method further includes:
  • FIG. 2 is a flowchart of a navigation method according to an exemplary embodiment. As shown in FIG. 2, the navigation method is used in a target device, and includes the following steps.
  • in step 201, the start point information and the end point information are acquired.
  • in step 202, the start point information and the end point information are sent to the server.
  • in step 203, the target path navigation video from the start point to the end point sent by the server is received, where the target path navigation video is obtained by the server based on the start point information and the end point information.
  • the start point information and the end point information are acquired and sent to the server; the target path navigation video from the start point to the end point, obtained by the server based on them, is received and played, so that navigation is performed based on the target path navigation video. This makes navigation more intuitive, lowers the navigation threshold, eliminates the need to manually install infrared sensing devices, is universal and adaptable, and saves a large amount of physical equipment and labor.
  • the start point information includes a start point environment image
  • the end point information includes an end point environment image
  • acquiring the start point information and the end point information includes:
  • the start environment image and the end environment image are acquired.
  • playing the target path navigation video includes:
  • the current movement speed is detected, and the target path navigation video is played such that the playback speed of the target path navigation video matches the movement speed.
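One way to realize "playback speed equal to the movement speed", assuming both the user's detected speed and the speed at which the video was recorded are known in metres per second. The clamp bounds and function name are illustrative assumptions, not from the disclosure.

```python
# Sketch: match the playback rate of the navigation video to the user's
# detected movement speed relative to the recording speed.
def playback_rate(user_speed_mps, recorded_speed_mps):
    """Return the rate factor so that the video covers ground at the
    user's walking speed; clamp to avoid a frozen or absurd rate."""
    if recorded_speed_mps <= 0:
        return 1.0  # no usable recording speed: play at normal rate
    return max(0.25, min(4.0, user_speed_mps / recorded_speed_mps))
```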
  • playing the target path navigation video includes:
  • when playback reaches a target image position in the target path navigation video, route confirmation prompt information is displayed, where the route confirmation prompt information is used to prompt the user to confirm whether the user has deviated from the target path;
  • a route re-planning request is sent to the server, so that the server acquires a new target path navigation video based on the path re-planning request.
  • the method further includes:
  • the mobile video and the location information are sent to the server, causing the server to associate the mobile video with the location information.
  • the method further includes:
  • the candidate path navigation video is sent to the server.
  • FIG. 3 is a flowchart of a navigation method according to an exemplary embodiment. As shown in FIG. 3, the method includes the following steps.
  • in step 301, the target device acquires the start point information and the end point information, and transmits them to the server.
  • the start point information and the end point information may be text information, image information, voice information, or the like, or a combination of at least two of these. This embodiment of the present disclosure does not specifically limit this.
  • the start point information and the end point information are image information, that is, the start point information includes a start point environment image, and the end point information includes an end point environment image.
  • the target device may acquire the start point environment image and the end point environment image, determine the start point environment image as the start point information and the end point environment image as the end point information, and transmit the start point information and the end point information to the server.
  • when the target device acquires the start point environment image, the target device may capture an image of the environment at its current location to obtain the start point environment image.
  • in order to improve the effective utilization of the start point environment image, when the target device captures an image of the environment at its current location, it may capture the image at a position where text information or a reference object is present, obtaining the start point environment image.
  • the text information is text with distinctive features at the current location of the target device and is used to identify that location; the reference object may be a building, a bus stop, or the like, which is not specifically limited in this embodiment of the present disclosure.
  • when the target device acquires the end point environment image, the user may search for the image directly in an image library stored on the target device, or the target device may obtain the end point environment image from the server.
  • when the target device acquires the end point environment image from the server, the target device may receive end point image description information input by the user and send it to the server; when the server receives the end point image description information, it may acquire at least one image matching the description and send the at least one image to the target device; when the target device receives the at least one image, it displays the at least one image, and when a selection instruction for a specified image is received, the specified image, which is any one of the at least one image, may be determined as the end point environment image.
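The server-side matching of end point image description information against an image library could, in the simplest case, be a keyword-overlap test. The library contents, function name, and matching rule below are assumptions for illustration only.

```python
# Hypothetical sketch: the user sends end point image description text,
# and the server returns images whose stored descriptions share a word
# with it (a real system would use image retrieval or better text match).
IMAGE_LIBRARY = {
    "img_101": "north entrance of the shopping mall",
    "img_102": "platform 2 of the subway station",
}

def match_end_point_images(description):
    words = set(description.lower().split())
    return [
        img for img, desc in IMAGE_LIBRARY.items()
        if words & set(desc.lower().split())
    ]
```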
  • the end point image description information may be text information, voice information, or the like, or a combination of at least two of these, which is not specifically limited in the embodiment of the present disclosure.
  • the selection instruction for the specified image is used to select the specified image from the at least one image; it may be triggered by the user through a specified operation, which may be a click operation, a sliding operation, a voice operation, or the like, and is not specifically limited in the embodiment of the present disclosure.
  • when the target device searches for the end point environment image directly in its own image library, the image library must be stored on the target device; when the library is large, it occupies considerable storage space, and every device that needs navigation must store it. When the target device obtains the end point environment image from the server, the image library is stored on the server and all devices that need navigation can obtain images from it, which saves device storage space but requires interaction with the server, increasing the number of interactions and the interaction time. Therefore, in practical applications, different acquisition modes may be selected for different needs, which is not specifically limited in this embodiment of the present disclosure.
  • the target device may be smart glasses, a smart phone, a smart watch, or the like, which is not specifically limited in this embodiment of the present disclosure.
  • the navigation instruction is used for navigation, and the navigation instruction may be triggered by the user, and the user may trigger the operation by a specified operation, which is not specifically limited in the embodiment of the present disclosure.
  • the navigation method provided by the embodiment of the present disclosure can be applied not only to indoor navigation but also to outdoor navigation, and the embodiment of the present disclosure also does not specifically limit this.
  • the navigation method provided by the embodiment of the present disclosure can be applied not only to indoor navigation but also to outdoor navigation. Indoor navigation navigates within the indoor venue at the current location, and that venue generally needs to be determined from the location of the current position, which can in turn be determined from the geographic location information of the current position. Therefore, in order to improve the accuracy of indoor navigation, the target device may determine the geographic location information of its current position and send it to the server.
• Outdoor navigation navigates between two different outdoor locations, which are generally identified by geographic location information; that is, outdoor navigation needs to determine start point geographic location information and end point geographic location information. Therefore, in order to improve the accuracy of outdoor navigation, the target device may determine the geographic location information of the current location as the start point geographic location information, determine the geographic location information of the destination as the end point geographic location information, and send both to the server.
• In determining the geographic location information of the destination, the target device may receive end point location description information input by the user and send it to the server. When the server receives the end point description information, it obtains at least one piece of geographic location information matching the description and sends it to the target device. When the target device receives the at least one piece of geographic location information, it may display it; and when a selection instruction for specified geographic location information is received, the specified geographic location information may be determined as the end point geographic location information, where the specified geographic location information is any one of the at least one piece of geographic location information.
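By way of illustration only, the description-matching step above may be sketched as follows. The patent does not specify a matching algorithm, so the substring match, the function name `match_end_point`, and the place names used here are all hypothetical.

```python
# Hypothetical sketch: match the end point description information entered
# by the user against a server-side location database, returning every
# geographic location information entry whose place name contains the
# description. The target device then displays the results for selection.

def match_end_point(description, locations):
    """locations maps a place name to its geographic location information."""
    needle = description.lower()
    return [geo for name, geo in locations.items() if needle in name.lower()]

locations = {
    "Xidan Shopping Mall": "geo-1",
    "Xidan Bookstore": "geo-2",
    "East Gate Plaza": "geo-3",
}
matches = match_end_point("xidan", locations)
# the entry the user then selects becomes the end point geographic
# location information
```

The server would of course use a real geocoding index rather than a linear scan; the sketch only shows the request/response shape of this step.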
• When the target device determines the geographic location information of the current location, it may do so by GPS (Global Positioning System) positioning, by manual input, or by a combination of the two, and the determined geographic location information may be text information, voice information, or a combination of both, which is not specifically limited in this embodiment of the present disclosure.
• The selection instruction for the specified geographic location information is used to select the specified geographic location information from the at least one piece of geographic location information; it may be triggered by the user through a specified operation, which is not specifically limited in this embodiment of the present disclosure.
• In step 302, when the server receives the start point information and the end point information, the server acquires a target path navigation video from the start point to the end point based on the start point information and the end point information.
• This embodiment of the present disclosure navigates by means of a navigation video; that is, when the server receives the start point information and the end point information, it needs to acquire the target path navigation video from the start point to the end point based on them. Specifically, the start point information includes start point position information, the end point information includes end point position information, and the server obtains the target path navigation video based on the start point position information and the end point position information.
• The start point information mentioned in step 301 above may include a start point environment image, and the end point information may include an end point environment image. The start point position information and the end point position information may be reference object information or text information, and of course may also be GPS information or the like; this embodiment of the present disclosure takes reference object information or text information as an example. The manner in which the server obtains the target path navigation video based on the start point position information and the end point position information may include the following two ways:
• First way: the server extracts start point reference object information from the start point environment image and end point reference object information from the end point environment image, determines the start point reference object information as the start point position information and the end point reference object information as the end point position information, and acquires the target path navigation video based on the start point reference object information and the end point reference object information.
• When the server acquires the target path navigation video based on the start point reference object information and the end point reference object information, it may intercept the target path navigation video from one stored candidate path navigation video, or it may acquire the target path navigation video from a plurality of stored candidate path navigation videos.
• When the server intercepts the target path navigation video from one stored candidate path navigation video based on the start point reference object information and the end point reference object information, it may parse multiple frames of video images from the candidate path navigation video and extract one piece of candidate reference object information from each frame, obtaining a plurality of pieces of candidate reference object information. From these, the candidate reference object information identical to the start point reference object information is selected and the video frame in which it appears is determined as the start video frame; likewise, the candidate reference object information identical to the end point reference object information is selected and the video frame in which it appears is determined as the end video frame. The server then intercepts the video between the start video frame and the end video frame from the candidate path navigation video to obtain the target path navigation video.
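The interception step above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: frames are modeled as (frame id, reference info) pairs, and the function name `intercept_target_video` is hypothetical.

```python
# Illustrative sketch: cut out the segment of a candidate path navigation
# video lying between the first frame matching the start point reference
# information and the first subsequent frame matching the end point
# reference information.

def intercept_target_video(candidate_frames, start_ref, end_ref):
    """candidate_frames: list of (frame_id, ref_info) pairs in video order.
    Returns the sub-list from the start match through the end match,
    or None if no such segment exists."""
    start_idx = end_idx = None
    for i, (_, ref) in enumerate(candidate_frames):
        if start_idx is None and ref == start_ref:
            start_idx = i
        elif start_idx is not None and ref == end_ref:
            end_idx = i
            break
    if start_idx is None or end_idx is None:
        return None  # the candidate video does not cover this path
    return candidate_frames[start_idx:end_idx + 1]

frames = [(1, "A"), (2, None), (3, "B"), (4, None), (5, "C")]
segment = intercept_target_video(frames, "A", "C")
```

A production system would slice the actual video container at the corresponding timestamps; the list slice stands in for that operation.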
• When the server acquires the target path navigation video from a plurality of stored candidate path navigation videos based on the start point reference object information and the end point reference object information, each candidate path navigation video includes start point candidate reference object information and end point candidate reference object information. The server may obtain the start point and end point candidate reference object information of the plurality of candidate path navigation videos; based on the start point candidate reference object information, select from the plurality of candidate path navigation videos the candidate path navigation video whose start point candidate reference object information is identical to the start point reference object information; and judge whether the end point candidate reference object information of the selected candidate path navigation video is identical to the end point reference object information. When it is not, the server takes the end point candidate reference object information of the selected candidate path navigation video as the new start point reference object information and returns to the selecting step, repeating until the end point reference object information is reached. In this way, the server selects at least one candidate path navigation video on the target path from the plurality of candidate path navigation videos and composes the at least one candidate path navigation video into the target path navigation video.
• Second way: the server extracts start point text information from the start point environment image and end point text information from the end point environment image, determines the start point text information as the start point position information and the end point text information as the end point position information, and acquires the target path navigation video based on the start point text information and the end point text information.
• Specifically, the server may perform character recognition on the start point environment image to obtain the start point text information and on the end point environment image to obtain the end point text information, and then, based on the start point text information and the end point text information, either intercept the target path navigation video from one stored candidate path navigation video or acquire the target path navigation video from a plurality of stored candidate path navigation videos.
• When the server intercepts the target path navigation video from one stored candidate path navigation video based on the start point text information and the end point text information, it may parse multiple frames of video images from the candidate path navigation video and extract one piece of candidate text information from each frame, obtaining a plurality of pieces of candidate text information. From these, the candidate text information identical to the start point text information is selected and the video frame in which it appears is determined as the start video frame; likewise, the candidate text information identical to the end point text information is selected and the video frame in which it appears is determined as the end video frame. The server then intercepts the video between the start video frame and the end video frame from the candidate path navigation video to obtain the target path navigation video.
• When the server acquires the target path navigation video from a plurality of stored candidate path navigation videos based on the start point text information and the end point text information, each candidate path navigation video may include start point candidate text information and end point candidate text information. The server may obtain the start point and end point candidate text information of the plurality of candidate path navigation videos; based on the start point candidate text information, select from the plurality of candidate path navigation videos the candidate path navigation video whose start point candidate text information is identical to the start point text information; and judge whether the end point candidate text information of the selected candidate path navigation video is identical to the end point text information. When it is not, the server takes the end point candidate text information of the selected candidate path navigation video as the new start point text information and returns to the selecting step, repeating until the end point text information is reached. The server then selects at least one candidate path navigation video on the target path from the plurality of candidate path navigation videos and composes the at least one candidate path navigation video into the target path navigation video.
• For example, the server performs character recognition on the start point environment image and obtains the start point text information A, and performs character recognition on the end point environment image and obtains the end point text information F. The plurality of candidate path navigation videos acquired by the server are navigation video 21, navigation video 22, navigation video 23, navigation video 24, and navigation video 25; the start point candidate text information of navigation video 21 is A and its end point candidate text information is B; the start point candidate text information of navigation video 22 is D and its end point candidate text information is F; the start point candidate text information of navigation video 23 is B and its end point candidate text information is D; the start point candidate text information of navigation video 24 is G and its end point candidate text information is H; and the start point candidate text information of navigation video 25 is M and its end point candidate text information is N. Based on the start point text information A, the server selects from the five candidate path navigation videos the one whose start point candidate text information is identical to the start point text information, namely navigation video 21. Since the end point candidate text information B of navigation video 21 differs from the end point text information F, the server takes B as the new start point text information and selects navigation video 23; since the end point candidate text information D of navigation video 23 also differs from F, the server takes D as the new start point text information and selects navigation video 22, whose end point candidate text information F is identical to the end point text information F. The server may therefore determine navigation video 21, navigation video 23, and navigation video 22 as the at least one candidate path navigation video on the target path and compose the target path navigation video from them.
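The chaining procedure in the A-to-F example above can be sketched as follows. This is an illustrative sketch only; the tuple layout and the function name `compose_target_path` are assumptions, and the sketch ignores cycles and branching paths, which a real implementation would have to handle.

```python
# Illustrative sketch: compose the target path by repeatedly selecting the
# candidate video whose start point candidate text matches the current
# text, then advancing to that video's end point candidate text.

def compose_target_path(videos, start_text, end_text):
    """videos: list of (video_id, start_candidate, end_candidate) tuples.
    Returns the ordered list of video ids on the path, or None."""
    by_start = {s: (e, vid) for vid, s, e in videos}
    path, current = [], start_text
    while current != end_text:
        if current not in by_start:
            return None  # no candidate video continues the path
        current, vid = by_start[current]
        path.append(vid)
    return path

candidates = [
    ("video21", "A", "B"),
    ("video22", "D", "F"),
    ("video23", "B", "D"),
    ("video24", "G", "H"),
    ("video25", "M", "N"),
]
result = compose_target_path(candidates, "A", "F")
# -> ["video21", "video23", "video22"], matching the example above
```

The same loop serves the reference-object variant described earlier, with reference object information in place of text information.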
• In an actual application, the server may obtain the target path navigation video by either of the above two ways alone, or may combine the two ways, thereby improving the acquisition accuracy of the target path navigation video.
• In addition, the above start point information and end point information may be not only text information or image information but also GPS information or the like. Therefore, besides obtaining the target path navigation video through the start point position information and the end point position information as described above, the server may also intercept the target path navigation video from one stored candidate path navigation video, or acquire it from a plurality of stored candidate path navigation videos, directly based on the start point information and the end point information. The methods for doing so may be the same as the methods described above and are not described in detail in this embodiment of the present disclosure.
  • the server may acquire the candidate path navigation video before acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information.
• Specifically, the server may obtain a mobile video and location information, where the location information corresponds to a target image collected while the video capture device was in a static state during the collection of the mobile video; the server then associates the location information with the target image to obtain a candidate path navigation video.
  • the location information may include reference information or text information.
  • the location information may also include other information, such as geographic location information, which is not specifically limited in the embodiment of the present disclosure.
• Furthermore, based on the start point geographic location information and the end point geographic location information, the server may intercept, from one stored candidate path navigation video, the navigation video between the start point geographic location and the end point geographic location, and then intercept the target path navigation video from the intercepted navigation video according to the above method; in this case each video frame of the candidate path navigation video is associated with geographic location information. Alternatively, the server may select, from the plurality of candidate path navigation videos, the candidate path navigation videos between the start point geographic location and the end point geographic location, and obtain the target path navigation video from the selected candidate path navigation videos according to the above method.
  • the plurality of candidate path navigation videos may be the candidate path navigation videos of the plurality of locations, that is, the plurality of candidate path navigation videos may be the candidate path navigation videos corresponding to the plurality of geographic location information.
• When candidate path navigation videos corresponding to a plurality of pieces of geographic location information are stored among the plurality of candidate path navigation videos, a correspondence between geographic location information and candidate path navigation videos is generally stored. Since the venue identified by each piece of geographic location information may include multiple indoor locations, in order to navigate indoors within such a venue and improve the accuracy of indoor navigation, when the server receives the geographic location information of the target device's current location, it may obtain, from the correspondence, the plurality of candidate path navigation videos corresponding to that geographic location information, and then acquire the target path navigation video from them.
• The mobile video and location information acquired by the server may be sent by a plurality of video capture devices or by one video capture device. When the location information is geographic location information, the location information corresponding to every target image in a mobile video is the same; therefore, the server may receive the mobile video and geographic location information sent by at least one video capture device, identify a plurality of target images from the mobile video sent by each video capture device, decompose that mobile video based on the plurality of target images to obtain a plurality of candidate path navigation videos, and store the plurality of candidate path navigation videos based on the geographic location information of the video capture device.
• Since the mobile video may include a plurality of target images, and each target image is obtained by capturing an indoor location where reference object information or text information exists, the plurality of target images identified from the mobile video can distinguish a plurality of indoor locations; that is, the mobile video can identify a plurality of indoor locations. The server may therefore decompose the mobile video based on the plurality of target images in it to obtain a plurality of candidate path navigation videos.
• One video may include multiple frames of video images, and when at least two identical and consecutive frames exist among them, those frames may be determined as a target image. Therefore, when the server identifies a plurality of target images from the mobile video sent by the video capture device, it may acquire the multiple frames of video images included in the mobile video and compare adjacent frames; when at least two identical and consecutive frames exist, the server may determine those frames as a target image, thereby identifying a plurality of target images from the mobile video.
• Alternatively, the server may determine the similarity of at least two adjacent frames among the multiple frames of video images, and when that similarity is greater than a specified similarity, determine the adjacent frames as a target image of the mobile video. The specified similarity may be set in advance, for example to 80% or 90%, which is not specifically limited in this embodiment of the present disclosure.
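The target-image identification step can be sketched as follows. This is a hedged illustration: the patent fixes neither a similarity metric nor a data layout, so the pairwise predicate `similar`, the minimum run length, and the function name are all assumptions.

```python
# Illustrative sketch: group runs of consecutive, similar frames as
# "target images" (the places where the capture device stayed still).
# `similar` is a stand-in for any frame-comparison metric.

def find_target_images(frames, similar, min_run=2):
    """frames: a sequence of frame objects; similar: pairwise predicate.
    Returns one list of frame indices per run of >= min_run similar frames."""
    targets, run = [], [0]
    for i in range(1, len(frames)):
        if similar(frames[i - 1], frames[i]):
            run.append(i)
        else:
            if len(run) >= min_run:
                targets.append(run)
            run = [i]
    if len(run) >= min_run:  # flush the final run
        targets.append(run)
    return targets

# Strings stand in for frames; exact equality stands in for similarity.
frames = ["A", "A", "A", "x", "y", "B", "B", "z"]
runs = find_target_images(frames, lambda a, b: a == b)
# -> [[0, 1, 2], [5, 6]]
```

With real frames, `similar` would be something like a structural-similarity score compared against the specified similarity threshold (80%, 90%, and so on).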
• For example, the mobile video sent by the video capture device is mobile video 1. The server obtains the multiple frames of video images included in mobile video 1, namely image 1, image 2, image 3, ..., image 50, and compares adjacent frames. It determines that image 1, image 2, and image 3 are consecutive and identical frames, that image 8 and image 9 are consecutive and identical frames, that image 15, image 16, and image 17 are consecutive and identical frames, that image 22, image 23, and image 24 are consecutive and identical frames, that image 30 and image 31 are consecutive and identical frames, that image 43, image 44, and image 45 are consecutive and identical frames, and that image 49 and image 50 are consecutive and identical frames. Accordingly, the server determines images 1-3 as the first target image of the mobile video, images 8-9 as the second target image, images 15-17 as the third target image, images 22-24 as the fourth target image, images 30-31 as the fifth target image, images 43-45 as the sixth target image, and images 49-50 as the seventh target image.
• The operation in which the server decomposes the mobile video sent by the video capture device based on the plurality of target images to obtain a plurality of candidate path navigation videos may be as follows: the server performs character recognition on the plurality of target images to obtain a plurality of pieces of key text information, or performs reference object recognition on the plurality of target images to obtain a plurality of pieces of key reference object information; based on this key information, the server decomposes the mobile video into a plurality of candidate path navigation videos, each of which includes start point candidate information and end point candidate information, where the end point candidate information of a first candidate path navigation video is the same as the start point candidate information of a second candidate path navigation video, the first candidate path navigation video being any one of the plurality of candidate path navigation videos and the second candidate path navigation video being the next candidate path navigation video adjacent to it.
• When the server stores the plurality of candidate path navigation videos based on the geographic location information of the video capture device, it may store the geographic location information of the video capture device and the plurality of candidate path navigation videos in the correspondence between geographic location information and candidate path navigation videos.
• For example, the plurality of target images identified by the server from mobile video 1 are image 1, image 8, image 16, image 23, image 30, image 44, and image 49. The server performs character recognition on these images: the key text information of image 1 is A, of image 8 is B, of image 16 is C, of image 23 is D, of image 30 is E, of image 44 is F, and of image 49 is G. The plurality of candidate path navigation videos decomposed from mobile video 1 are navigation video 1, navigation video 2, navigation video 3, navigation video 4, navigation video 5, and navigation video 6: the start point text information of navigation video 1 is A and its end point text information is B; the start point text information of navigation video 2 is B and its end point text information is C; the start point text information of navigation video 3 is C and its end point text information is D; the start point text information of navigation video 4 is D and its end point text information is E; the start point text information of navigation video 5 is E and its end point text information is F; and the start point text information of navigation video 6 is F and its end point text information is G. The geographic location information of the video capture device is geographic location information 1, so the server may store geographic location information 1 and the plurality of candidate path navigation videos in the correspondence between geographic location information and candidate path navigation videos shown in Table 1 below.
• It should be noted that the target device may be any one of the plurality of video capture devices, or a device other than the video capture devices; that is, the target device may or may not itself be a video capture device, which is not specifically limited in this embodiment of the present disclosure.
• When the target device is also a video capture device, the target device may acquire a mobile video and location information and send them to the server, so that the server associates the mobile video with the location information, decomposes the mobile video into a plurality of candidate path navigation videos, and stores them. Alternatively, the target device may itself associate the mobile video with the location information to obtain a candidate path navigation video and send the candidate path navigation video to the server.
• Since the server needs to identify a plurality of target images from the mobile video sent by the video capture device and decompose the mobile video based on them, and the text marks in the target images are used to identify indoor locations, the target device, when recording the mobile video along the walking route, needs to pause at each position where reference object information or text information exists, thereby forming the target images in the mobile video. The dwell time at each such position may be determined by the user and may be 1 second, 2 seconds, or the like, which is not specifically limited in this embodiment of the present disclosure.
  • the video image included in the mobile video transmitted by the video capture device or the target device is an image of the indoor location, and thus the video image included in the stored candidate path navigation video is also an image of the indoor location.
  • the method for the server to perform character recognition on the plurality of target images may refer to related technologies, which is not described in detail in the embodiments of the present disclosure.
  • the server stores the candidate path navigation video based on the correspondence between the geographic location information and the candidate path navigation video, so each candidate path navigation video can be accurately matched to its corresponding geographic location information, improving indoor navigation efficiency and accuracy; and because the plurality of video capture devices send both the captured mobile video and the geographic location information of the mobile video, the server can update the stored candidate path navigation videos in time, further improving navigation accuracy.
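A minimal sketch of such a correspondence-based store, assuming a simple grid quantization of coordinates (the disclosure does not prescribe a storage scheme; the class name, cell size, and video-id layout are illustrative):

```python
from collections import defaultdict


class NavigationVideoStore:
    """Stores candidate path navigation videos keyed by geographic location.

    Geographic coordinates are quantized into grid cells so that candidate
    videos captured near a requested location can be looked up directly.
    """

    def __init__(self, cell_size=0.001):  # roughly 100 m per cell in degrees
        self.cell_size = cell_size
        self._videos = defaultdict(list)

    def _cell(self, lat, lng):
        # Quantize coordinates to a grid cell used as the lookup key.
        return (round(lat / self.cell_size), round(lng / self.cell_size))

    def add(self, lat, lng, video_id):
        # Newly uploaded candidate videos are appended, so the store stays
        # up to date as capture devices keep sending mobile videos.
        self._videos[self._cell(lat, lng)].append(video_id)

    def candidates(self, lat, lng):
        return list(self._videos[self._cell(lat, lng)])
```

A lookup for a location with no nearby uploads simply returns an empty candidate list.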
  • In step 303, the server sends the target path navigation video to the target device.
  • the target path navigation video may be intercepted from one candidate path navigation video, or may be composed of at least one candidate path navigation video. When the target path navigation video is composed of at least one candidate path navigation video, the server may determine the path sequence of the at least one candidate path navigation video based on the start point candidate text information and the end point candidate text information of the at least one candidate path navigation video, and send the path sequence and the at least one candidate path navigation video to the target device.
  • Alternatively, the server may send the at least one candidate path navigation video to the target device in the path sequence, so that the target device determines the path sequence of the at least one candidate path navigation video based on the receiving time of each candidate path navigation video.
  • Alternatively, the target device may perform character recognition on the at least one candidate path navigation video, so as to determine the path sequence based on the start point candidate text information and the end point candidate text information of the at least one candidate path navigation video.
  • the server may also extract start point candidate reference object information and end point candidate reference object information from the at least one candidate path navigation video, and determine the path sequence of the at least one candidate path navigation video based on the start point candidate reference object information and the end point candidate reference object information, which is not specifically limited in the embodiments of the present disclosure.
  • when the server determines the path sequence of the at least one candidate path navigation video based on the start point candidate text information and the end point candidate text information, the server selects, from a video set, a candidate path navigation video whose start point candidate text information is the same as the end point candidate text information of a third candidate path navigation video, where the third candidate path navigation video is any one of the at least one candidate path navigation video, and the video set includes the candidate path navigation videos remaining in the at least one candidate path navigation video other than the third candidate path navigation video. The server sets the path order of the selected candidate path navigation video after the third candidate path navigation video, and determines whether a candidate path navigation video other than the selected one exists in the video set; if so, the server takes the selected candidate path navigation video as the new third candidate path navigation video, removes it from the video set to update the video set, and repeats the selection step based on the updated video set.
  • the method for determining the path order of the at least one candidate path navigation video based on the start point candidate reference object information and the end point candidate reference object information is the same as the method for determining the path order based on the text information, and is not described in detail in the embodiments of the present disclosure.
  • For example, suppose the at least one candidate path navigation video consists of the navigation video 21, the navigation video 23, and the navigation video 22.
  • the start point candidate text information of the navigation video 21 is A
  • the end point candidate text information of the navigation video 21 is B
  • the start point candidate text information of the navigation video 22 is D
  • the end point candidate text information of the navigation video 22 is F
  • the start point candidate text information of the navigation video 23 is B
  • the end point candidate text information of the navigation video 23 is D.
  • Suppose the third candidate path navigation video is the navigation video 21, and the navigation video 22 and the navigation video 23 constitute the video set. From the video set, the candidate path navigation video whose start point candidate text information is the same as the end point candidate text information B of the navigation video 21 is the navigation video 23, and a candidate path navigation video other than the navigation video 23 exists in the video set. The navigation video 23 is therefore used as the third candidate path navigation video, and the navigation video 23 is removed from the video set to obtain an updated video set. From the updated video set, the candidate path navigation video whose start point candidate text information is the same as the end point candidate text information D of the navigation video 23 is the navigation video 22, and no candidate path navigation video other than the navigation video 22 exists in the updated video set. Therefore, the path sequence of the at least one candidate path navigation video is determined to be the navigation video 21, the navigation video 23, and the navigation video 22.
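The chaining procedure in the example above can be sketched as follows. The video ids and A/B/D/F labels are taken from the example; the function name and data layout are illustrative:

```python
def order_path_videos(videos, start_label):
    """Order candidate path navigation videos into a continuous route.

    `videos` maps a video id to its (start_text, end_text) pair.  Starting
    from the video whose start label matches `start_label`, the video
    whose start label equals the current video's end label is repeatedly
    selected, mirroring the end-point/start-point matching described above.
    """
    by_start = {start: vid for vid, (start, _end) in videos.items()}
    order = []
    label = start_label
    remaining = set(videos)
    while remaining:
        vid = by_start.get(label)
        if vid is None or vid not in remaining:
            break  # no continuation: the chain ends here
        order.append(vid)
        remaining.discard(vid)
        label = videos[vid][1]  # next video must start where this one ends
    return order
```

With navigation videos 21 (A to B), 22 (D to F), and 23 (B to D), starting from label A the function returns the order 21, 23, 22, matching the worked example.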
  • the server may determine multiple navigation path videos, select a default navigation path video from the multiple navigation path videos, determine the default navigation path video as the target path navigation video, and send it to the target device, so that the target device navigates based on the target path navigation video.
  • the server may also send the plurality of navigation path videos to the target device separately, so that the user selects a navigation path video from the plurality of navigation path videos for navigation, which is not specifically limited in the embodiment of the disclosure.
  • In step 304, when the target device receives the target path navigation video from the start point to the end point sent by the server, the target path navigation video is played in response to the received navigation trigger operation.
  • when the target device receives the target path navigation video from the start point to the end point sent by the server, and the target device then receives a navigation trigger operation, the target device may play the target path navigation video in response to the navigation trigger operation.
  • the navigation triggering operation may be triggered by the user, which is not specifically limited in this embodiment of the present disclosure.
  • when the target device plays the target path navigation video, the target device can also detect the current motion speed and play the target path navigation video based on the current motion speed, so that the playback speed of the target path navigation video matches the current motion speed, improving the navigation experience.
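One way to match the playback speed to the motion speed is to scale the playback rate by the ratio of the user's current speed to the speed at which the video was recorded. The recorded speed is an assumed piece of video metadata, and the clamping range is illustrative, not part of the disclosure:

```python
def playback_rate(current_speed, recorded_speed):
    """Compute a video playback rate so that the apparent walking speed in
    the navigation video matches the user's current motion speed.

    Both speeds are in metres per second.  The rate is clamped to a
    sensible range so the video stays watchable at extreme speeds.
    """
    if recorded_speed <= 0:
        return 1.0  # no usable metadata: play at normal speed
    rate = current_speed / recorded_speed
    return max(0.25, min(rate, 4.0))
```

For example, a user walking twice as fast as the person who recorded the video would see the video played at double speed.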
  • when the target device receives the at least one candidate path navigation video and the path sequence, the target device may play the at least one candidate path navigation video according to the path sequence of the at least one candidate path navigation video. For each candidate path navigation video in the at least one candidate path navigation video, when the position associated with the location information in the candidate path navigation video is played, route confirmation prompt information is displayed, the route confirmation prompt information being used to prompt the user to confirm whether the navigation path deviates. The location information is the location information corresponding to the target image collected by the video capture device during the process of collecting the mobile video. When a route re-planning instruction is received based on the route confirmation prompt information, a route re-planning request is sent to the server.
  • when the server receives the path re-planning request sent by the target device, the server acquires a new target path navigation video based on the re-planning request, and sends the new target path navigation video to the target device, so that the target device navigates based on the new target path navigation video.
  • each candidate path navigation video is also associated with the location information; therefore, for each candidate path navigation video in the at least one candidate path navigation video, when the target device plays the position associated with the location information in the candidate path navigation video, the target device may pause the playback of the candidate path navigation video and display the route confirmation prompt information.
  • when the target device receives the route re-planning instruction based on the route confirmation prompt information, the target device determines that the navigation path currently being navigated does not match the expected path of the user, and the target device may re-plan the navigation path through the server. Further, when the target device receives the confirmation instruction based on the route confirmation prompt information, the target device determines that the navigation path currently being navigated matches the expected path of the user, and the target device may continue to play the candidate path navigation video; if the candidate path navigation video currently being played is the last candidate path navigation video in the at least one candidate path navigation video, playback stops when it ends.
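The pause/confirm/re-plan flow described above can be sketched as a simple loop. The callback-based interface is an illustrative assumption, not part of the disclosure:

```python
def run_navigation(videos, confirm):
    """Play the ordered candidate videos, pausing at each associated location.

    `videos` is the ordered list of candidate path navigation videos and
    `confirm` is a callback returning True when the user confirms the route
    at the paused position, or False to request re-planning.  Returns
    "finished" when the last candidate video completes, or "replan" when
    the user reports a deviation.
    """
    for video in videos:
        # Playback pauses at the position associated with the location
        # information; the route confirmation prompt is then displayed.
        if not confirm(video):
            return "replan"   # send a route re-planning request to the server
    return "finished"         # last candidate video reached: stop playback
```

If every prompt is confirmed, playback simply runs through the path sequence; a single negative answer short-circuits into re-planning.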
  • the route confirmation prompt information may be text information, voice information, or a combination of text information and voice information, which is not specifically limited in the embodiments of the present disclosure.
  • the route re-planning instruction and the confirmation instruction may be triggered by a user, and the user may trigger the operation by a specified operation, which is not specifically limited in the embodiment of the disclosure.
  • the server may re-select the target path navigation video based on the new start point information and end point information, and send the reselected target path navigation video to the target device, causing the target device to navigate based on the reselected target path navigation video.
  • the location from which the target device sends the route re-planning request may be the same as or different from the location of the original start point information. Therefore, the route re-planning request sent by the target device to the server may or may not carry new geographic location information, the new geographic location information being the geographic location information of the target device when it sends the route re-planning request. If the route re-planning request does not carry new geographic location information, the server may reselect the target path navigation video from the plurality of candidate path navigation videos acquired in step 302.
  • if the route re-planning request carries new geographic location information, the server may re-acquire, from the stored candidate path navigation videos, the plurality of candidate path navigation videos corresponding to the new geographic location information, and then select the target path navigation video from the re-acquired candidate path navigation videos.
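The two re-planning cases (with and without new geographic location information in the request) can be sketched as follows. The request layout and the `lookup` callable are illustrative assumptions:

```python
def replan_candidates(request, previous_candidates, lookup):
    """Select candidate path navigation videos for a re-planning request.

    `request` is a dict that may or may not carry a "location" entry;
    `lookup` maps a geographic location to the stored candidate videos
    for that location.  Without a new location, the candidates acquired
    in step 302 are reused; with one, candidates are re-acquired for it.
    """
    if "location" not in request:
        return previous_candidates          # reuse videos from step 302
    return lookup(request["location"])      # re-acquire by new location
```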
  • when outdoor navigation is performed, navigating through the target path navigation video makes navigation more intuitive and lowers the navigation threshold; when indoor navigation is performed, the reference object information or text information that naturally exists at the indoor location is used to determine the target path navigation video on the target path for navigation, eliminating the need to manually install unified infrared sensing devices, so the method is versatile and adaptable and saves a large amount of physical equipment and labor.
  • the user can determine, by comparing the target path navigation video with the actual path in real time, whether the target path of the current navigation has deviated, and when a deviation occurs, the target path navigation video can be re-determined, improving the accuracy of navigation.
  • FIG. 4 is a block diagram of a navigation device, according to an exemplary embodiment. As shown in FIG. 4, the device includes a first receiving module 401, a first obtaining module 402, and a first sending module 403.
  • the first receiving module 401 is configured to receive start point information and end point information sent by the target device;
  • the first obtaining module 402 is configured to acquire, based on the start point information and the end point information received by the first receiving module 401, the target path navigation video from the start point to the end point;
  • the first sending module 403 is configured to send, to the target device, the target path navigation video acquired by the first acquiring module 402.
  • the first obtaining module 402 includes a first acquiring unit 4021.
  • the first obtaining unit 4021 is configured to acquire the target path navigation video based on the start position information and the end position information, where the start point information includes the start position information, and the end point information includes the end position information.
  • the start point information includes a start point environment image
  • the end point information includes an end point environment image
  • the first obtaining unit 4021 includes a first extracting subunit 40211, a first determining subunit 40212, and a first obtaining subunit 40213;
  • a first extraction subunit 40211 configured to extract starting point reference information from the starting environment image, and extract end point reference information from the end environment image;
  • a first determining sub-unit 40212 configured to determine the start point reference object information extracted by the first extracting sub-unit 40211 as start point position information, and determine the end point reference object information extracted by the first extracting sub-unit 40211 as end point position information;
  • the first obtaining sub-unit 40213 is configured to obtain the target path navigation video based on the starting reference object information determined by the first determining sub-unit 40212 and the end point reference information.
  • the start point information includes a start point environment image
  • the end point information includes an end point environment image
  • the first obtaining unit 4021 includes a second extracting subunit 40214, a second determining subunit 40215, and a second obtaining subunit 40216;
  • a second extracting sub-unit 40214 configured to extract starting point text information from the starting point environment image, and extract end point text information from the end point environment image;
  • a second determining sub-unit 40215 configured to determine the starting point text information extracted by the second extracting sub-unit 40214 as starting point position information, and determine the ending point text information extracted by the second extracting sub-unit as end point position information;
  • the second obtaining sub-unit 40216 is configured to obtain the target path navigation video based on the starting text information and the ending text information determined by the second determining sub-unit 40215.
  • the first acquisition module 402 includes an intercept unit 4022.
  • the intercepting unit 4022 is configured to intercept the target path navigation video from a stored candidate path navigation video based on the start point information and the end point information.
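As an illustration of what the intercepting unit might do, the following sketch (hypothetical names and data layout) cuts the segment of a stored candidate video that lies between the target-image timestamps of the recognized start and end labels:

```python
def intercept_segment(markers, start_label, end_label):
    """Intercept the portion of a candidate path navigation video that runs
    from the start point to the end point.

    `markers` maps a location label (recognized text or reference object)
    to the playback timestamp of its target image in the stored video.
    The returned (t0, t1) pair delimits the clip to cut from the video.
    """
    t0, t1 = markers[start_label], markers[end_label]
    if t0 > t1:
        raise ValueError("end point occurs before start point in this video")
    return (t0, t1)
```

The actual cutting of the video file would then be delegated to a media library; only the timestamp arithmetic is shown here.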
  • the first obtaining module 402 includes a second acquiring unit 4023 .
  • the second obtaining unit 4023 is configured to obtain the target path navigation video from the stored plurality of candidate path navigation videos based on the start point information and the end point information.
  • the apparatus further includes a second acquisition module 404.
  • the second obtaining module 404 is configured to acquire a candidate path navigation video.
  • the second obtaining module 404 further includes a third obtaining unit 4041 and an associating unit 4042.
  • the third obtaining unit 4041 is configured to acquire a mobile video and location information, where the location information is location information corresponding to the target image that is collected when the video capture device is in a static state during the process of acquiring the mobile video;
  • the association unit 4042 is configured to associate the location information acquired by the third acquiring unit with the target image to obtain a candidate path navigation video.
  • the location information includes reference information or text information.
  • the apparatus further includes a second receiving module 405, a third obtaining module 406, and a second sending module 407.
  • a second receiving module 405, configured to receive a path re-planning request sent by the target device
  • a third obtaining module 406, configured to acquire a new target path navigation video based on the path re-planning request received by the second receiving module 405;
  • the second sending module 407 is configured to send the new target path navigation video acquired by the third obtaining module 406 to the target device, so that the target device navigates based on the new target path navigation video.
  • the start point information and the end point information sent by the target device are received, and based on the start point information and the end point information, the target path navigation video from the start point to the end point is acquired and sent to the target device, so that the target device navigates based on the target path navigation video, eliminating the need to manually install unified infrared sensing devices, which is versatile and adaptable and saves a large amount of physical equipment and labor.
  • FIG. 13 is a block diagram of a navigation device, according to an exemplary embodiment. As shown in FIG. 13, the apparatus includes a first obtaining module 1301, a first sending module 1302, a receiving module 1303, and a playing module 1304.
  • the first obtaining module 1301 is configured to acquire start point information and end point information
  • the first sending module 1302 is configured to send the start point information and the end point information acquired by the first obtaining module 1301 to the server;
  • the receiving module 1303 is configured to receive a target path navigation video sent by the server from the start point to the end point, where the target path navigation video is obtained by the server according to the first sending module 1302 sending the start point information and the end point information;
  • the playing module 1304 is configured to play the target path navigation video received by the receiving module 1303 in response to the received navigation triggering operation.
  • the first obtaining module 1301 includes an obtaining unit 13011.
  • the obtaining unit 13011 is configured to acquire a start environment image and an end environment image when receiving the navigation instruction.
  • the play module 1304 includes a detecting unit 13041 and a playing unit 13042.
  • a detecting unit 13041 configured to detect a current motion speed
  • the playing unit 13042 is configured to play the target path navigation video based on the motion speed detected by the detecting unit, so that the playing speed of the target path navigation video is equal to the moving speed.
  • the play module 1304 includes a display unit 13043 and a transmitting unit 13044.
  • a display unit 13043 configured to display route confirmation prompt information when the position of the target image in the target path navigation video is played, the route confirmation prompt information being used to prompt the user to confirm whether the target path has deviated;
  • the sending unit 13044 is configured to, when receiving the route re-planning instruction based on the route confirmation prompt information displayed by the display unit, send a route re-planning request to the server, so that the server acquires a new target path navigation video based on the path re-planning request.
  • the apparatus further includes a second obtaining module 1305 and a second sending module 1306.
  • a second obtaining module 1305, configured to acquire mobile video and location information
  • the second sending module 1306 is configured to send the mobile video and the location information acquired by the second acquiring module to the server, so that the server associates the mobile video with the target image, where the location information is the location information corresponding to the target image collected when the video capture device is at rest during the process of collecting the mobile video.
  • the apparatus further includes a third obtaining module 1307, an associating module 1308, and a third sending module 1309.
  • a third obtaining module 1307 configured to acquire mobile video and location information
  • the association module 1308 is configured to associate the mobile video acquired by the third obtaining module with the target image to obtain a candidate path navigation video, where the location information is the location information corresponding to the target image collected when the device is at rest during the process of collecting the mobile video;
  • the third sending module 1309 is configured to send the candidate path navigation video associated with the associated module to the server.
  • the start point information and the end point information sent by the target device are received, and based on the start point information and the end point information, the target path navigation video from the start point to the end point is acquired and sent to the target device, so that the target device navigates based on the target path navigation video, eliminating the need to manually install unified infrared sensing devices, which is versatile and adaptable and saves a large amount of physical equipment and labor.
  • FIG. 19 is a block diagram of an apparatus 1900 for navigation, according to an exemplary embodiment.
  • device 1900 can be provided as a server.
  • apparatus 1900 includes a processing component 1922 that further includes one or more processors, and memory resources represented by memory 1932 for storing instructions executable by processing component 1922, such as an application.
  • An application stored in memory 1932 can include one or more modules each corresponding to a set of instructions.
  • Apparatus 1900 can also include a power supply component 1926 configured to perform power management of apparatus 1900, a wired or wireless network interface 1950 configured to connect apparatus 1900 to the network, and an input/output (I/O) interface 1958.
  • Apparatus 1900 may operate based on an operating system stored in the memory 1932, for example, Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
  • processing component 1922 is configured to execute instructions to perform the method of navigation described below, the method comprising:
  • the target path navigation video from the start point to the end point is obtained based on the start point information and the end point information, including:
  • the target path navigation video is acquired based on the start position information and the end position information, and the start point information includes the start position information, and the end point information includes the end position information.
  • the start point information includes a start point environment image
  • the end point information includes an end point environment image
  • Obtaining the target path navigation video based on the start position information and the end position information including:
  • the target path navigation video is acquired based on the starting point reference information and the end point reference information.
  • the start point information includes a start point environment image
  • the end point information includes an end point environment image
  • Obtaining the target path navigation video based on the start position information and the end position information including:
  • the target path navigation video is acquired based on the start point text information and the end point text information.
  • the target path navigation video from the start point to the end point is obtained based on the start point information and the end point information, including:
  • before acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information, the method further includes:
  • acquiring a candidate path navigation video includes:
  • the location information is location information corresponding to the target image collected by the video capture device during the process of acquiring the mobile video
  • the location information is associated with the target image to obtain a candidate path navigation video.
  • the location information includes reference information or text information.
  • the method further includes:
  • the start point information and the end point information sent by the target device are received, and based on the start point information and the end point information, the target path navigation video from the start point to the end point is acquired and sent to the target device, so that the target device navigates based on the target path navigation video; navigation is thus more intuitive and the navigation threshold is lowered, eliminating the need to manually install unified infrared sensing devices, which is versatile and adaptable and saves a large amount of physical equipment and labor.
  • FIG. 20 is a block diagram of an apparatus 2000 for navigation, according to an exemplary embodiment.
  • device 2000 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, smart glasses, a smart watch, and the like.
  • apparatus 2000 may include one or more of the following components: processing component 2002, memory 2004, power component 2006, multimedia component 2008, audio component 2010, input/output (I/O) interface 2012, sensor component 2014, And communication component 2016.
  • Processing component 2002 typically controls the overall operation of device 2000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • Processing component 2002 may include one or more processors 2020 to execute instructions to perform all or part of the steps of the above described methods.
  • processing component 2002 can include one or more modules to facilitate interaction between component 2002 and other components.
  • processing component 2002 can include a multimedia module to facilitate interaction between multimedia component 2008 and processing component 2002.
  • the memory 2004 is configured to store various types of data to support operation at the device 2000. Examples of such data include instructions for any application or method operating on device 2000, contact data, phone book data, messages, pictures, videos, and the like.
  • Memory 2004 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power component 2006 provides power to various components of device 2000.
  • Power component 2006 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 2000.
  • the multimedia component 2008 includes a screen between the device 2000 and the user that provides an output interface.
  • the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may sense not only the boundary of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
  • the multimedia component 2008 includes a front camera and/or a rear camera. When the device 2000 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 2010 is configured to output and/or input audio signals.
  • audio component 2010 includes a microphone (MIC) that is configured to receive an external audio signal when device 2000 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal may be further stored in memory 2004 or transmitted via communication component 2016.
  • the audio component 2010 also includes a speaker for outputting an audio signal.
  • the I/O interface 2012 provides an interface between the processing component 2002 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
  • the sensor assembly 2014 includes one or more sensors for providing a status assessment of various aspects to the device 2000.
  • sensor assembly 2014 can detect an open/closed state of device 2000 and the relative positioning of components, such as the display and keypad of device 2000; sensor assembly 2014 can also detect a change in position of device 2000 or a component of device 2000, the presence or absence of user contact with device 2000, the orientation or acceleration/deceleration of device 2000, and temperature changes of device 2000.
  • the sensor assembly 2014 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 2014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor assembly 2014 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 2016 is configured to facilitate wired or wireless communication between device 2000 and other devices.
  • the device 2000 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 2016 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel.
  • the communication component 2016 also includes a near field communication (NFC) module to facilitate short range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • device 2000 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • also provided is a non-transitory computer readable storage medium comprising instructions, such as the memory 2004 comprising instructions executable by the processor 2020 of the device 2000 to perform the above method.
  • for example, the non-transitory computer readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, or an optical data storage device.
  • a non-transitory computer readable storage medium is provided such that, when instructions in the storage medium are executed by a processor of a target device, the target device is enabled to perform a navigation method, the method comprising:
  • the target path navigation video is obtained by the server based on the start point information and the end point information.
  • the start point information includes a start point environment image
  • the end point information includes an end point environment image
  • acquiring the start point information and the end point information includes:
  • the start environment image and the end environment image are acquired.
  • playing the target path navigation video includes:
  • the target path navigation video is played such that the playback speed of the target path navigation video is equal to the motion speed.
  • playing the target path navigation video includes:
  • when playback reaches a position in the target path navigation video associated with location information, route confirmation prompt information is displayed; the route confirmation prompt information is used to prompt the user to confirm whether the target path has been deviated from, the location information being the location information corresponding to a target image collected by the video collection device while it was stationary during collection of the mobile video.
  • a route re-planning request is sent to the server, so that the server acquires a new target path navigation video based on the path re-planning request.
  • the method further includes:
  • the mobile video and the location information are sent to the server, causing the server to associate the mobile video with the location information.
  • the method further includes:
  • the candidate path navigation video is sent to the server.
  • the start point information and the end point information sent by the target device are received; based on the start point information and the end point information, a target path navigation video from the start point to the end point is acquired and sent to the target device, so that the target device navigates based on the target path navigation video. Navigation thus becomes more intuitive, the navigation threshold is lowered, and there is no need to manually install a dedicated infrared sensing infrastructure; the approach is highly versatile and adaptable, and saves a large amount of physical equipment and labor.


Abstract

A navigation method and device, belonging to the field of navigation technology. The navigation method includes: receiving start point information and end point information sent by a target device (101); acquiring, based on the start point information and the end point information, a target path navigation video from the start point to the end point (102); and sending the target path navigation video to the target device (103). By playing the target path navigation video in real time, the method allows the user to judge in real time whether the actual route has deviated from the target path, improving navigation accuracy.

Description

Navigation method and device
This application is based on and claims priority to Chinese Patent Application No. 201510634512.8, filed on September 29, 2015, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of navigation technology, and in particular to a navigation method and device.
Background
At present, with the rapid development of modern cities, shopping malls and similar venues are becoming increasingly large, and it is difficult for users to find a destination quickly relying only on posted signs or maps. A navigation method that lets users find their destinations quickly is therefore urgently needed.
In the related art, outdoor navigation is generally performed using maps and positioning information, while indoor navigation usually relies on infrared sensing devices installed manually in advance: the pre-installed infrared sensing devices locate the user's current position, a navigation path is determined from the user's start and end positions, and navigation is performed based on the user's current position and that navigation path.
Summary
To overcome the problems existing in the related art, the present disclosure provides a navigation method and device.
According to a first aspect of the embodiments of the present disclosure, a navigation method is provided, the method including:
receiving start point information and end point information sent by a target device;
acquiring, based on the start point information and the end point information, a target path navigation video from the start point to the end point;
sending the target path navigation video to the target device.
With reference to the first aspect, in a first possible implementation of the first aspect, acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information includes:
acquiring the target path navigation video based on start point location information and end point location information, the start point information including the start point location information and the end point information including the end point location information.
With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the start point information includes a start point environment image, and the end point information includes an end point environment image;
acquiring the target path navigation video based on the start point location information and the end point location information includes:
extracting start point reference object information from the start point environment image, and extracting end point reference object information from the end point environment image;
determining the start point reference object information as the start point location information, and determining the end point reference object information as the end point location information;
acquiring the target path navigation video based on the start point reference object information and the end point reference object information.
With reference to the first possible implementation of the first aspect, in a third possible implementation of the first aspect, the start point information includes a start point environment image, and the end point information includes an end point environment image;
acquiring the target path navigation video based on the start point location information and the end point location information includes:
extracting start point text information from the start point environment image, and extracting end point text information from the end point environment image;
determining the start point text information as the start point location information, and determining the end point text information as the end point location information;
acquiring the target path navigation video based on the start point text information and the end point text information.
With reference to the first aspect, in a fourth possible implementation of the first aspect, acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information includes:
intercepting the target path navigation video from one stored candidate path navigation video based on the start point information and the end point information.
With reference to the first aspect, in a fifth possible implementation of the first aspect, acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information includes:
acquiring the target path navigation video from a plurality of stored candidate path navigation videos based on the start point information and the end point information.
With reference to the first aspect or any one of the first to fifth possible implementations of the first aspect, in a sixth possible implementation of the first aspect, before acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information, the method further includes:
acquiring candidate path navigation videos.
With reference to the sixth possible implementation of the first aspect, in a seventh possible implementation of the first aspect, acquiring candidate path navigation videos includes:
acquiring a mobile video and location information, the location information being the location information corresponding to a target image collected by a video collection device while it was stationary during collection of the mobile video;
associating the location information with the target image to obtain a candidate path navigation video.
With reference to the seventh possible implementation of the first aspect, in an eighth possible implementation of the first aspect, the location information includes reference object information or text information.
With reference to the first aspect, in a ninth possible implementation of the first aspect, after sending the target path navigation video to the target device, the method further includes:
receiving a path re-planning request sent by the target device;
acquiring a new target path navigation video based on the path re-planning request;
sending the new target path navigation video to the target device, so that the target device navigates based on the new target path navigation video.
According to a second aspect of the embodiments of the present disclosure, a navigation method is provided, the method including:
acquiring start point information and end point information;
sending the start point information and the end point information to a server;
receiving a target path navigation video from the start point to the end point sent by the server, the target path navigation video being acquired by the server based on the start point information and the end point information;
playing the target path navigation video in response to a received navigation trigger operation.
With reference to the second aspect, in a first possible implementation of the second aspect, the start point information includes a start point environment image, and the end point information includes an end point environment image;
acquiring the start point information and the end point information includes:
acquiring the start point environment image and the end point environment image when a navigation instruction is received.
With reference to the second aspect, in a second possible implementation of the second aspect, playing the target path navigation video includes:
detecting a current motion speed;
playing the target path navigation video based on the motion speed, so that the playback speed of the target path navigation video equals the motion speed.
With reference to the second aspect, in a third possible implementation of the second aspect, playing the target path navigation video includes:
displaying route confirmation prompt information when playback reaches a target image position in the target path navigation video, the route confirmation prompt information being used to prompt the user to confirm whether the target path has been deviated from;
sending a route re-planning request to the server when a route re-planning instruction is received based on the route confirmation prompt information, so that the server acquires a new target path navigation video based on the route re-planning request.
With reference to the second aspect or any one of the first to third possible implementations of the second aspect, in a fourth possible implementation of the second aspect, the method further includes:
acquiring a mobile video and location information;
sending the mobile video and the location information to the server, so that the server associates the mobile video with the target image, the location information being the location information corresponding to the target image collected while stationary during collection of the mobile video.
With reference to the second aspect or any one of the first to third possible implementations of the second aspect, in a fifth possible implementation of the second aspect, the method further includes:
acquiring a mobile video and location information;
associating the mobile video with the target image to obtain a candidate path navigation video, the location information being the location information corresponding to the target image collected while stationary during collection of the mobile video;
sending the candidate path navigation video to the server.
According to a third aspect of the embodiments of the present disclosure, a navigation device is provided, the device including:
a first receiving module configured to receive start point information and end point information sent by a target device;
a first acquiring module configured to acquire, based on the start point information and the end point information received by the first receiving module, a target path navigation video from the start point to the end point;
a first sending module configured to send the target path navigation video acquired by the first acquiring module to the target device.
With reference to the third aspect, in a first possible implementation of the third aspect, the first acquiring module includes:
a first acquiring unit configured to acquire the target path navigation video based on start point location information and end point location information, the start point information including the start point location information and the end point information including the end point location information.
With reference to the first possible implementation of the third aspect, in a second possible implementation of the third aspect, the start point information includes a start point environment image, and the end point information includes an end point environment image;
the first acquiring unit includes:
a first extracting subunit configured to extract start point reference object information from the start point environment image and extract end point reference object information from the end point environment image;
a first determining subunit configured to determine the start point reference object information extracted by the first extracting subunit as the start point location information, and determine the end point reference object information extracted by the first extracting subunit as the end point location information;
a first acquiring subunit configured to acquire the target path navigation video based on the start point reference object information and the end point reference object information determined by the first determining subunit.
With reference to the first possible implementation of the third aspect, in a third possible implementation of the third aspect, the start point information includes a start point environment image, and the end point information includes an end point environment image;
the first acquiring unit includes:
a second extracting subunit configured to extract start point text information from the start point environment image and extract end point text information from the end point environment image;
a second determining subunit configured to determine the start point text information extracted by the second extracting subunit as the start point location information, and determine the end point text information extracted by the second extracting subunit as the end point location information;
a second acquiring subunit configured to acquire the target path navigation video based on the start point text information and the end point text information determined by the second determining subunit.
With reference to the third aspect, in a fourth possible implementation of the third aspect, the first acquiring module includes:
an intercepting unit configured to intercept the target path navigation video from one stored candidate path navigation video based on the start point information and the end point information.
With reference to the third aspect, in a fifth possible implementation of the third aspect, the first acquiring module includes:
a second acquiring unit configured to acquire the target path navigation video from a plurality of stored candidate path navigation videos based on the start point information and the end point information.
With reference to the third aspect or any one of the first to fifth possible implementations of the third aspect, in a sixth possible implementation of the third aspect, the device further includes:
a second acquiring module configured to acquire candidate path navigation videos.
With reference to the sixth possible implementation of the third aspect, in a seventh possible implementation of the third aspect, the second acquiring module includes:
a third acquiring unit configured to acquire a mobile video and location information, the location information being the location information corresponding to a target image collected by a video collection device while it was stationary during collection of the mobile video;
an associating unit configured to associate the location information acquired by the third acquiring unit with the target image to obtain a candidate path navigation video.
With reference to the seventh possible implementation of the third aspect, in an eighth possible implementation of the third aspect, the location information includes reference object information or text information.
With reference to the third aspect, in a ninth possible implementation of the third aspect, the device further includes:
a second receiving module configured to receive a path re-planning request sent by the target device;
a third acquiring module configured to acquire a new target path navigation video based on the path re-planning request received by the second receiving module;
a second sending module configured to send the new target path navigation video acquired by the third acquiring module to the target device, so that the target device navigates based on the new target path navigation video.
According to a fourth aspect of the embodiments of the present disclosure, a navigation device is provided, the device including:
a first acquiring module configured to acquire start point information and end point information;
a first sending module configured to send the start point information and the end point information acquired by the first acquiring module to a server;
a receiving module configured to receive a target path navigation video from the start point to the end point sent by the server, the target path navigation video being acquired by the server based on the start point information and the end point information sent by the first sending module;
a playing module configured to play, in response to a received navigation trigger operation, the target path navigation video received by the receiving module.
With reference to the fourth aspect, in a first possible implementation of the fourth aspect, the start point information includes a start point environment image, and the end point information includes an end point environment image;
the first acquiring module includes:
an acquiring unit configured to acquire the start point environment image and the end point environment image when a navigation instruction is received.
With reference to the fourth aspect, in a second possible implementation of the fourth aspect, the playing module includes:
a detecting unit configured to detect a current motion speed;
a playing unit configured to play the target path navigation video based on the motion speed detected by the detecting unit, so that the playback speed of the target path navigation video equals the motion speed.
With reference to the fourth aspect, in a third possible implementation of the fourth aspect, the playing module includes:
a displaying unit configured to display route confirmation prompt information when playback reaches a target image position in the target path navigation video, the route confirmation prompt information being used to prompt the user to confirm whether the target path has been deviated from;
a sending unit configured to send a route re-planning request to the server when a route re-planning instruction is received based on the route confirmation prompt information displayed by the displaying unit, so that the server acquires a new target path navigation video based on the route re-planning request.
With reference to the fourth aspect or any one of the first to third possible implementations of the fourth aspect, in a fourth possible implementation of the fourth aspect, the device further includes:
a second acquiring module configured to acquire a mobile video and location information;
a second sending module configured to send the mobile video and the location information received from the second acquiring module to the server, so that the server associates the mobile video with the target image, the location information being the location information corresponding to the target image collected while stationary during collection of the mobile video.
With reference to the fourth aspect or any one of the first to third possible implementations of the fourth aspect, in a fifth possible implementation of the fourth aspect, the device further includes:
a third acquiring module configured to acquire a mobile video and location information;
an associating module configured to associate the mobile video acquired by the third acquiring module with the target image to obtain a candidate path navigation video, the location information being the location information corresponding to the target image collected while stationary during collection of the mobile video;
a third sending module configured to send the candidate path navigation video associated by the associating module to the server.
According to a fifth aspect of the embodiments of the present disclosure, a navigation device is provided, the device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
receive start point information and end point information sent by a target device;
acquire, based on the start point information and the end point information, a target path navigation video from the start point to the end point;
send the target path navigation video to the target device.
According to a sixth aspect of the embodiments of the present disclosure, a navigation device is provided, the device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquire start point information and end point information;
send the start point information and the end point information to a server;
receive a target path navigation video from the start point to the end point sent by the server, the target path navigation video being acquired by the server based on the start point information and the end point information;
play the target path navigation video in response to a received navigation trigger operation.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects: in the embodiments of the present disclosure, start point information and end point information sent by a target device are received; a target path navigation video from the start point to the end point is acquired based on the start point information and the end point information; and the target path navigation video is sent to the target device so that the target device navigates based on it. Navigation thus becomes more intuitive, the navigation threshold is lowered, and the need to manually install a dedicated infrared sensing infrastructure is eliminated; the approach is highly versatile and adaptable, and saves a large amount of physical equipment and labor.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and, together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart of a navigation method according to an exemplary embodiment.
Fig. 2 is a flowchart of a navigation method according to an exemplary embodiment.
Fig. 3 is a flowchart of a navigation method according to an exemplary embodiment.
Fig. 4 is a block diagram of a navigation device according to an exemplary embodiment.
Fig. 5 is a block diagram of a first acquiring module according to an exemplary embodiment.
Fig. 6 is a block diagram of a first acquiring unit according to an exemplary embodiment.
Fig. 7 is a block diagram of a first acquiring unit according to an exemplary embodiment.
Fig. 8 is a block diagram of a first acquiring module according to an exemplary embodiment.
Fig. 9 is a block diagram of a first acquiring module according to an exemplary embodiment.
Fig. 10 is a block diagram of a navigation device according to an exemplary embodiment.
Fig. 11 is a block diagram of a second acquiring module according to an exemplary embodiment.
Fig. 12 is a block diagram of a navigation device according to an exemplary embodiment.
Fig. 13 is a block diagram of a navigation device according to an exemplary embodiment.
Fig. 14 is a block diagram of a first acquiring module according to an exemplary embodiment.
Fig. 15 is a block diagram of a playing module according to an exemplary embodiment.
Fig. 16 is a block diagram of a playing module according to an exemplary embodiment.
Fig. 17 is a block diagram of a navigation device according to an exemplary embodiment.
Fig. 18 is a block diagram of a navigation device according to an exemplary embodiment.
Fig. 19 is a block diagram of a device for navigation according to an exemplary embodiment.
Fig. 20 is a block diagram of a device for navigation according to an exemplary embodiment.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; rather, they are merely examples of devices and methods consistent with some aspects of the invention as detailed in the appended claims.
Fig. 1 is a flowchart of a navigation method according to an exemplary embodiment. As shown in Fig. 1, the navigation method is used in a server and includes the following steps.
In step 101, start point information and end point information sent by a target device are received.
In step 102, a target path navigation video from the start point to the end point is acquired based on the start point information and the end point information.
In step 103, the target path navigation video is sent to the target device.
In the embodiments of the present disclosure, start point information and end point information sent by a target device are received; a target path navigation video from the start point to the end point is acquired based on the start point information and the end point information; and the target path navigation video is sent to the target device so that the target device navigates based on it. Navigation thus becomes more intuitive, the navigation threshold is lowered, and the need to manually install a dedicated infrared sensing infrastructure is eliminated; the approach is highly versatile and adaptable, and saves a large amount of physical equipment and labor.
In another embodiment of the present disclosure, acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information includes:
acquiring the target path navigation video based on start point location information and end point location information, the start point information including the start point location information and the end point information including the end point location information.
In another embodiment of the present disclosure, the start point information includes a start point environment image, and the end point information includes an end point environment image;
acquiring the target path navigation video based on the start point location information and the end point location information includes:
extracting start point reference object information from the start point environment image, and extracting end point reference object information from the end point environment image;
determining the start point reference object information as the start point location information, and determining the end point reference object information as the end point location information;
acquiring the target path navigation video based on the start point reference object information and the end point reference object information.
In another embodiment of the present disclosure, the start point information includes a start point environment image, and the end point information includes an end point environment image;
acquiring the target path navigation video based on the start point location information and the end point location information includes:
extracting start point text information from the start point environment image, and extracting end point text information from the end point environment image;
determining the start point text information as the start point location information, and determining the end point text information as the end point location information;
acquiring the target path navigation video based on the start point text information and the end point text information.
In another embodiment of the present disclosure, acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information includes:
intercepting the target path navigation video from one stored candidate path navigation video based on the start point information and the end point information.
In another embodiment of the present disclosure, acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information includes:
acquiring the target path navigation video from a plurality of stored candidate path navigation videos based on the start point information and the end point information.
In another embodiment of the present disclosure, before acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information, the method further includes:
acquiring candidate path navigation videos.
In another embodiment of the present disclosure, acquiring candidate path navigation videos includes:
acquiring a mobile video and location information, the location information being the location information corresponding to a target image collected by a video collection device while it was stationary during collection of the mobile video;
associating the location information with the target image to obtain a candidate path navigation video.
In another embodiment of the present disclosure, the location information includes reference object information or text information.
In another embodiment of the present disclosure, after sending the target path navigation video to the target device, the method further includes:
receiving a path re-planning request sent by the target device;
acquiring a new target path navigation video based on the path re-planning request;
sending the new target path navigation video to the target device, so that the target device navigates based on the new target path navigation video.
All of the above optional technical solutions may be combined in any manner to form optional embodiments of the present disclosure, which will not be described here one by one.
Fig. 2 is a flowchart of a navigation method according to an exemplary embodiment. As shown in Fig. 2, the navigation method is used in a target device and includes the following steps.
In step 201, start point information and end point information are acquired.
In step 202, the start point information and the end point information are sent to a server.
In step 203, a target path navigation video from the start point to the end point sent by the server is received, the target path navigation video being acquired by the server based on the start point information and the end point information.
In the embodiments of the present disclosure, start point information and end point information sent by a target device are received; a target path navigation video from the start point to the end point is acquired based on them and sent to the target device so that the target device navigates based on it. Navigation thus becomes more intuitive, the navigation threshold is lowered, and the need to manually install a dedicated infrared sensing infrastructure is eliminated; the approach is highly versatile and adaptable, and saves a large amount of physical equipment and labor.
In another embodiment of the present disclosure, the start point information includes a start point environment image, and the end point information includes an end point environment image;
acquiring the start point information and the end point information includes:
acquiring the start point environment image and the end point environment image when a navigation instruction is received.
In another embodiment of the present disclosure, playing the target path navigation video includes:
detecting a current motion speed;
playing the target path navigation video based on the motion speed, so that the playback speed of the target path navigation video equals the motion speed.
In another embodiment of the present disclosure, playing the target path navigation video includes:
displaying route confirmation prompt information when playback reaches a position in the target path navigation video associated with location information, the route confirmation prompt information being used to prompt the user to confirm whether the target path has been deviated from, the location information being the location information corresponding to a target image collected by a video collection device while it was stationary during collection of the mobile video;
sending a route re-planning request to the server when a route re-planning instruction is received based on the route confirmation prompt information, so that the server acquires a new target path navigation video based on the route re-planning request.
In another embodiment of the present disclosure, the method further includes:
acquiring a mobile video and location information;
sending the mobile video and the location information to the server, so that the server associates the mobile video with the location information.
In another embodiment of the present disclosure, the method further includes:
acquiring a mobile video and location information;
associating the mobile video with the location information to obtain a candidate path navigation video;
sending the candidate path navigation video to the server.
All of the above optional technical solutions may be combined in any manner to form optional embodiments of the present disclosure, which will not be described here one by one.
Fig. 3 is a flowchart of a navigation method according to an exemplary embodiment. As shown in Fig. 3, the method includes the following steps.
In step 301, the target device acquires start point information and end point information, and sends the start point information and the end point information to the server.
In the embodiments of the present disclosure, the start point information and the end point information may be text information, image information, voice information and so on, or a combination of at least two of these; this is not specifically limited in the embodiments of the present disclosure.
When the start point information and the end point information are image information, that is, when the start point information includes a start point environment image and the end point information includes an end point environment image, then, upon receiving a navigation instruction, the target device may acquire the start point environment image and the end point environment image, determine the start point environment image as the start point information and the end point environment image as the end point information, and send the start point information and the end point information to the server.
When acquiring the start point environment image, the target device may photograph the environment at its current location to obtain the start point environment image. To improve the effective utilization of the start point environment image, when photographing the environment at the current location the target device may photograph a spot where text information or a reference object exists. The text information is conspicuous signage text at the target device's current location that identifies that location, and the reference object may be a building, a bus stop sign and so on; this is not specifically limited in the embodiments of the present disclosure.
When acquiring the end point environment image, the user may look it up directly in an image library stored on the target device, or the target device may acquire the end point environment image from the server; this is not specifically limited in the embodiments of the present disclosure.
When the target device acquires the end point environment image from the server, the target device may receive end point image description information entered by the user and send it to the server. On receiving the end point image description information, the server may acquire, from a stored image library, at least one image matching the end point image description information and send the at least one image to the target device. On receiving the at least one image, the target device may display it and, when a selection instruction for a specified image is received, determine the specified image as the end point environment image, the specified image being any one of the at least one image.
It should be noted that the end point image description information may be text information, voice information and so on, or a combination of at least two of these; this is not specifically limited in the embodiments of the present disclosure. In addition, the selection instruction for the specified image is used to select the specified image from the at least one image and may be triggered by the user through a specified operation, which may be a click operation, a slide operation, a voice operation and so on; this is not specifically limited in the embodiments of the present disclosure.
It should also be noted that when the user looks up the end point environment image directly in an image library stored on the target device, the image library must be stored on the target device; when the library contains many images it occupies considerable storage space, and every device that needs navigation must store the library. When the target device acquires the end point environment image from the server, the image library is stored on the server and all devices that need navigation can fetch from it, saving device storage space, but the device must interact with the server, which increases the number and duration of interactions. In practice, therefore, different acquisition methods can be chosen for different needs; this is not specifically limited in the embodiments of the present disclosure.
The target device may be smart glasses, a smartphone, a smartwatch and so on; this is not specifically limited in the embodiments of the present disclosure.
In addition, the navigation instruction is used to start navigation and may be triggered by the user through a specified operation; this is not specifically limited in the embodiments of the present disclosure. Moreover, the navigation method provided by the embodiments of the present disclosure can be applied to both indoor navigation and outdoor navigation, which is likewise not specifically limited.
Further, since the navigation method provided by the embodiments of the present disclosure can be applied to indoor as well as outdoor navigation, and indoor navigation navigates among indoor positions of the venue where the user currently is — the indoor positions generally being obtained from the venue's position, which in turn can be determined from the venue's geographic location information — the target device may, to improve indoor navigation accuracy, determine the geographic location information of its current position and send it to the server. Outdoor navigation, by contrast, navigates between two different outdoor positions, which are also generally determined from location information; that is, outdoor navigation requires determining start point geographic location information and end point geographic location information. To improve outdoor navigation accuracy, the target device may therefore determine the geographic location information of its current position, determine it as the start point geographic location information, determine the end point geographic location information, and send both to the server.
For outdoor navigation, when determining the end point geographic location information the target device may receive end point position description information entered by the user and send it to the server. On receiving the end point position description information, the server may acquire at least one piece of geographic location information matching it and send it to the target device. On receiving the at least one piece of geographic location information, the target device may display it and, when a selection instruction for specified geographic location information is received, determine the specified geographic location information as the end point geographic location information, the specified geographic location information being any one of the at least one piece of geographic location information.
It should be noted that the target device may determine the geographic location information of its current position through GPS (Global Positioning System) positioning, manual input, or a combination of the two, and the geographic location information may be text information, voice information, or a combination of both; this is not specifically limited in the embodiments of the present disclosure.
In addition, the selection instruction for the specified geographic location information is used to select the specified geographic location information from the at least one piece of geographic location information and may be triggered by the user through a specified operation; this is not specifically limited in the embodiments of the present disclosure.
In step 302, when the server receives the start point information and the end point information, the server acquires, based on them, a target path navigation video from the start point to the end point.
To make navigation more intuitive, the embodiments of the present disclosure navigate by means of a navigation video; that is, on receiving the start point information and the end point information, the server needs to acquire a target path navigation video from the start point to the end point based on them. In doing so, the server may acquire the target path navigation video based on start point location information and end point location information, the start point information including the start point location information and the end point information including the end point location information.
Since step 301 mentioned that the start point information may include a start point environment image and the end point information an end point environment image, and the start point and end point location information may be reference object information or text information — or indeed GPS information and so on — the embodiments of the present disclosure are described taking reference object information or text information as an example. The server may thus acquire the target path navigation video based on the start point location information and the end point location information in the following two ways.
First way: the server extracts start point reference object information from the start point environment image and end point reference object information from the end point environment image, determines them as the start point location information and the end point location information respectively, and acquires the target path navigation video based on the start point reference object information and the end point reference object information.
When acquiring the target path navigation video based on the start point and end point reference object information, the server may intercept the target path navigation video from one stored candidate path navigation video, or acquire it from a plurality of stored candidate path navigation videos, based on that information.
The operation by which the server intercepts the target path navigation video from one stored candidate path navigation video based on the start point and end point reference object information may be as follows: the server parses multiple video pictures out of the candidate path navigation video and extracts one piece of candidate reference object information from each, obtaining multiple pieces of candidate reference object information; from these it selects the candidate reference object information identical to the start point reference object information and determines the video picture containing it as the start video picture, and likewise selects the candidate reference object information identical to the end point reference object information and determines its video picture as the end video picture; the server then cuts out the video between the start video picture and the end video picture from the candidate path navigation video to obtain the target path navigation video.
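As a rough illustration of the intercepting step just described, the sketch below models a stored candidate path navigation video as a list of frames, each carrying the reference object (or text) information recognized in it, and cuts out the sub-video between the first frame matching the start information and the first later frame matching the end information. The frame representation and function name are illustrative assumptions, not the patent's implementation.

```python
def intercept_segment(frames, start_info, end_info):
    """Cut the sub-video between the frame whose recognized reference
    information equals start_info and the later frame matching end_info.

    frames: list of (frame_id, info) tuples, where info is the reference
    object or text information extracted from that video picture
    (None when nothing was recognized).
    Returns the inclusive list of frames forming the target path video.
    """
    start_idx = next(i for i, (_, info) in enumerate(frames) if info == start_info)
    end_idx = next(i for i, (_, info) in enumerate(frames)
                   if i > start_idx and info == end_info)
    return frames[start_idx:end_idx + 1]

# hypothetical candidate video: frames 2 and 5 carry reference info "B" and "D"
candidate = [(0, "A"), (1, None), (2, "B"), (3, None), (4, "C"), (5, "D")]
target = intercept_segment(candidate, "B", "D")  # frames 2..5
```

A real server would of course match OCR output or recognized landmarks rather than exact strings, but the cut between the two matched video pictures is the same.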
When the server acquires the target path navigation video from a plurality of stored candidate path navigation videos based on the start point and end point reference object information, each of the candidate path navigation videos includes one piece of start candidate reference object information and one piece of end candidate reference object information. The server may therefore acquire the start and end candidate reference object information of the multiple candidate path navigation videos; based on the start candidate reference object information, select from them a candidate path navigation video whose start candidate reference object information is identical to the start point reference object information; judge whether the end candidate reference object information of the selected video is identical to the end point reference object information; and, when it is not, take the end candidate reference object information of the selected video as the new start point reference object information and return to the selecting step, until the end candidate reference object information of the selected video is identical to the end point reference object information. In this way the server can select at least one candidate path navigation video on the target path from the multiple candidate path navigation videos and compose them into the target path navigation video.
Second way: the server extracts start point text information from the start point environment image and end point text information from the end point environment image, determines them as the start point location information and the end point location information respectively, and acquires the target path navigation video based on the start point text information and the end point text information.
For the second way, the server may perform text recognition on the start point environment image to obtain the start point text information and on the end point environment image to obtain the end point text information, and then, based on the two, either intercept the target path navigation video from one stored candidate path navigation video or acquire it from a plurality of stored candidate path navigation videos.
The operation by which the server intercepts the target path navigation video from one stored candidate path navigation video based on the start point and end point text information may be as follows: the server parses multiple video pictures out of the candidate path navigation video and extracts one piece of candidate text information from each, obtaining multiple pieces of candidate text information; from these it selects the candidate text information identical to the start point text information and determines the video picture containing it as the start video picture, and likewise selects the candidate text information identical to the end point text information and determines its video picture as the end video picture; the server then cuts out the video between the start video picture and the end video picture from the candidate path navigation video to obtain the target path navigation video.
In the embodiments of the present disclosure, each of the multiple candidate path navigation videos may include start candidate text information and end candidate text information. The server may therefore, when selecting the target path navigation video from the multiple stored candidate path navigation videos based on the start point and end point text information, acquire the start and end candidate text information of the multiple candidate path navigation videos; based on the start candidate text information, select from them a candidate path navigation video whose start candidate text information is identical to the start point text information; judge whether the end candidate text information of the selected video is identical to the end point text information; and, when it is not, take the end candidate text information of the selected video as the new start point text information and return to the selecting step, until the end candidate text information of the selected video is identical to the end point text information. In this way the server can select at least one candidate path navigation video on the target path from the multiple candidate path navigation videos and compose them into the target path navigation video.
For example, the server performs text recognition on the start point environment image and obtains start point text information A, and on the end point environment image and obtains end point text information F. Suppose the multiple candidate path navigation videos acquired by the server are navigation videos 21, 22, 23, 24 and 25, where navigation video 21 has start candidate text information A and end candidate text information B; navigation video 22 has start D and end F; navigation video 23 has start B and end D; navigation video 24 has start G and end H; and navigation video 25 has start M and end N. Based on the start point text information A, the server selects from the five candidates navigation video 21, whose start candidate text information equals A; its end candidate text information B differs from F, so the server takes B as the start point text information and selects navigation video 23; its end candidate text information D differs from F, so the server takes D as the start point text information and selects navigation video 22, whose end candidate text information F equals the end point text information F. The server may then determine navigation videos 21, 23 and 22 as the at least one candidate path navigation video on the target path and compose the target path navigation video from them.
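The selection loop in the example above (start text A, end text F, picking navigation video 21, then 23, then 22) can be sketched as a greedy chain over (start label, end label) pairs. The dictionary representation of the candidate videos is an assumption made for illustration only:

```python
def chain_candidates(candidates, start_label, end_label):
    """candidates: dict mapping video name -> (start text info, end text info).
    Repeatedly pick the stored candidate whose start label matches the
    current label, until a candidate ending at end_label has been taken."""
    order, current = [], start_label
    while current != end_label:
        name = next(n for n, (s, _) in candidates.items() if s == current)
        order.append(name)
        current = candidates[name][1]   # its end label becomes the new start
    return order

# the five hypothetical candidate videos from the worked example
videos = {"video21": ("A", "B"), "video22": ("D", "F"),
          "video23": ("B", "D"), "video24": ("G", "H"),
          "video25": ("M", "N")}
```

Calling `chain_candidates(videos, "A", "F")` reproduces the order navigation video 21, 23, 22 described in the text.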
It should be noted that the server may not only acquire the target path navigation video by each of the two ways above separately, but may also combine the two ways, thereby improving the accuracy with which the target path navigation video is acquired.
In addition, as mentioned above, the start point and end point information may be text information or image information, but also GPS information and so on. The server may therefore not only acquire the target path navigation video through start point and end point location information as above, but may also, based on the start point information and the end point information, intercept the target path navigation video from one stored candidate path navigation video, or acquire it from a plurality of stored candidate path navigation videos; the methods are the same as those described above and are not elaborated again in the embodiments of the present disclosure.
Further, before acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information, the server may also acquire candidate path navigation videos.
When acquiring a candidate path navigation video, the server may acquire a mobile video and location information, the location information being the location information corresponding to a target image collected by a video collection device while it was stationary during collection of the mobile video; the server then associates the location information with the target image to obtain a candidate path navigation video. The location information may include reference object information or text information and, in practice, may also include other information such as geographic location information; this is not specifically limited in the embodiments of the present disclosure.
Further, for outdoor navigation, when the server receives the start point and end point geographic location information sent by the target device it may, to improve outdoor navigation accuracy, first intercept from one stored candidate path navigation video the navigation video between the start and end geographic positions based on that information, and then intercept the target path navigation video from the intercepted video as described above; in this case the candidate path navigation video associates geographic location information with each video picture. When the server acquires the target path navigation video from a plurality of stored candidate path navigation videos, it may likewise select, based on the start point and end point geographic location information, the candidate path navigation videos lying between the start and end geographic positions, and acquire the target path navigation video from the selected videos as described above.
For indoor navigation, the multiple candidate path navigation videos may be candidate path navigation videos of multiple venues, that is, they may correspond to multiple pieces of geographic location information, generally stored as correspondences between geographic location information and candidate path navigation videos; each venue may contain multiple indoor spaces. To navigate indoors within a venue and improve indoor navigation accuracy, when the server receives the geographic location information of the target device's current position it may, based on that information, acquire from the correspondence between geographic location information and candidate path navigation videos the multiple candidate path navigation videos corresponding to it, and then acquire the target path navigation video from them.
Further, before the server acquires the multiple candidate path navigation videos corresponding to the geographic location information, the mobile videos and location information it acquires may be sent by multiple video collection devices or by one. When the location information is geographic location information, the location information corresponding to every target image in a mobile video is the same; the server may therefore receive mobile videos and geographic location information sent by at least one video collection device and, for each such device, identify multiple target images from the mobile video it sent, decompose that mobile video into multiple candidate path navigation videos based on the multiple target images, and store them based on the device's geographic location information.
Since a mobile video may contain multiple target images, each obtained by photographing an indoor position where reference object information or text information exists, the multiple target images identified from the mobile video can distinguish multiple indoor positions; that is, the mobile video can identify multiple indoor positions. Moreover, since one path may contain multiple indoor positions and different paths may contain different indoor positions — that is, the target path for which the target device navigates may differ from the path of the mobile video uploaded by the video collection device — the server may, in order to compose a large number of paths and satisfy as many users' navigation needs as possible, decompose the mobile video into multiple candidate path navigation videos based on its multiple target images.
Since a video may include multiple frames of video images, and at least two consecutive frames with identical images can be determined to be a target image, when identifying multiple target images from the mobile video sent by a video collection device the server may acquire the frames the mobile video includes and compare adjacent frames; when at least two consecutive identical frames exist, the server may determine them to be a target image, thereby identifying multiple target images from the mobile video.
Alternatively, the server may determine the similarity of at least two adjacent frames among the multiple frames and, when that similarity is greater than a specified similarity, determine the adjacent frames to be a target image of the mobile video. The specified similarity can be set in advance, for example 80% or 90%; this is not specifically limited in the embodiments of the present disclosure.
For example, the mobile video sent by the video collection device is mobile video 1. The server acquires the frames it includes — image 1, image 2, image 3, ..., image 50 — and compares adjacent frames, determining that images 1 to 3 are consecutive and identical, as are images 8 and 9, images 15 to 17, images 22 to 24, images 30 and 31, images 43 to 45, and images 49 and 50. It therefore determines images 1 to 3 as the first target image of the mobile video, images 8 and 9 as the second, images 15 to 17 as the third, images 22 to 24 as the fourth, images 30 and 31 as the fifth, images 43 to 45 as the sixth, and images 49 and 50 as the seventh.
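The target-image detection described above — runs of at least two consecutive, identical video pictures, marking the spots where the collection device stood still — can be sketched as grouping equal neighbours. A real system would compare frames by a similarity score against the specified threshold; exact equality is a simplifying assumption here:

```python
def find_target_runs(frames):
    """frames: sequence of frame contents (hashable stand-ins for pictures).
    Returns (start index, end index) pairs of runs of length >= 2, i.e.
    stretches where the collection device stood still."""
    runs, i = [], 0
    while i < len(frames):
        j = i
        while j + 1 < len(frames) and frames[j + 1] == frames[i]:
            j += 1
        if j > i:                      # at least two identical frames
            runs.append((i, j))
        i = j + 1
    return runs

# frames 0-2 identical, frame 3 distinct, frames 4-5 identical
sample = ["p1", "p1", "p1", "p2", "p3", "p3"]
```

With a perceptual-similarity function substituted for `==`, the same loop implements the "similarity greater than a specified similarity" variant mentioned in the text.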
The server may decompose the mobile video sent by the video collection device into multiple candidate path navigation videos based on the multiple target images as follows: perform text recognition on each of the multiple target images to obtain multiple pieces of key text information; based on these, decompose the mobile video into multiple candidate path navigation videos, each including start candidate text information and end candidate text information, where the end candidate text information of a first candidate path navigation video equals the start candidate text information of a second candidate path navigation video, the first and second candidate path navigation videos being any of the multiple candidate path navigation videos with the second being the one immediately following the first.
Alternatively, the server performs recognition on the multiple target images to obtain multiple pieces of key reference object information and, based on these, decomposes the mobile video into multiple candidate path navigation videos, each including start candidate reference object information and end candidate reference object information, where the end candidate reference object information of a first candidate path navigation video equals the start candidate reference object information of a second candidate path navigation video, the first and second candidate path navigation videos being any of the multiple candidate path navigation videos with the second being the one immediately following the first.
In addition, the server may store the multiple candidate path navigation videos based on the video collection device's geographic location information by storing that geographic location information and the multiple candidate path navigation videos in the correspondence between geographic location information and candidate path navigation videos.
For example, the multiple target images identified from mobile video 1 are image 1, image 8, image 16, image 23, image 30, image 44 and image 49. The server performs text recognition on each of them and obtains key text information A for image 1, B for image 8, C for image 16, D for image 23, E for image 30, F for image 44 and G for image 49. Based on these, mobile video 1 is decomposed into navigation videos 1 to 6, where navigation video 1 has start text information A and end text information B; navigation video 2 has start B and end C; navigation video 3 has start C and end D; navigation video 4 has start D and end E; navigation video 5 has start E and end F; and navigation video 6 has start F and end G. Supposing the video collection device's geographic location information is geographic location information 1, the server may then store geographic location information 1 and the multiple candidate path navigation videos in the correspondence between geographic location information and candidate path navigation videos shown in Table 1 below.
Table 1
Geographic location information      Candidate path navigation videos
Geographic location information 1    Navigation video 1, navigation video 2, navigation video 3, navigation video 4, navigation video 5, navigation video 6
It should be noted that the embodiments of the present disclosure are described only by taking the correspondence between geographic location information and candidate path navigation videos shown in Table 1 above as an example; Table 1 does not limit the embodiments of the present disclosure.
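The decomposition in the example above — splitting mobile video 1 at key text positions A through G into navigation videos 1 to 6, then storing them under geographic location information 1 — can be sketched by pairing consecutive key labels. The dictionary standing in for the stored correspondence is an assumption for illustration:

```python
def decompose(key_infos):
    """Split a mobile video, described by the ordered key text information
    recognized in its target images, into candidate path navigation videos.
    Each segment starts where the previous one ends."""
    return [(key_infos[i], key_infos[i + 1]) for i in range(len(key_infos) - 1)]

store = {}  # correspondence: geographic location info -> candidate videos

def store_candidates(geo_info, key_infos):
    store.setdefault(geo_info, []).extend(decompose(key_infos))

# the seven key labels recognized in mobile video 1 of the example
store_candidates("geo location 1", ["A", "B", "C", "D", "E", "F", "G"])
```

Seven key labels yield six segments, matching navigation videos 1 to 6 of the example, and each segment's start label equals the previous segment's end label.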
In addition, the target device may be any one of the multiple video collection devices or a device other than them; that is, the target device may or may not itself be a video collection device, which is not specifically limited in the embodiments of the present disclosure.
Further, since the target device may itself be a video collection device, the target device may also acquire a mobile video and location information and send them to the server so that the server associates the mobile video with the location information; the server may then also decompose the mobile video into multiple candidate path navigation videos and store them. Alternatively, the target device may acquire a mobile video and location information, associate them to obtain a candidate path navigation video, and send the candidate path navigation video to the server.
Since the server needs to identify multiple target images from the mobile video sent by the video collection device and decompose the mobile video based on them, with the text markings in the target images identifying indoor positions, the target device, when recording a mobile video along a walking route, needs to pause at positions where reference object information or text information exists, so that multiple target images are formed in the mobile video. The dwell time at such positions can be decided by the user and may be 1 second, 2 seconds and so on; this is not specifically limited in the embodiments of the present disclosure.
It should be noted that for indoor navigation the video images included in the mobile video sent by the video collection device or the target device are images of indoor positions, so the video images of the stored candidate path navigation videos are also images of indoor positions.
In addition, the method by which the server performs text recognition on the multiple target images can refer to the related art and is not elaborated in the embodiments of the present disclosure.
Moreover, storing the candidate path navigation videos according to the correspondence between geographic location information and candidate path navigation videos enables precise matching between a candidate path navigation video and its geographic location information, improving indoor navigation efficiency and accuracy. And since multiple video collection devices send the mobile videos they shoot, together with the geographic location information where they were shot, to the server, the server can update the stored candidate path navigation videos in time, further improving navigation accuracy.
In step 303, the server sends the target path navigation video to the target device.
From step 302 above it can be seen that the target path navigation video may be cut from one candidate path navigation video or composed of at least one candidate path navigation video. In the latter case, the server may determine the path order of the at least one candidate path navigation video based on their start and end candidate text information and send the path order together with the videos to the target device. The server may also send the videos in path order, so that the target device determines the order from their reception times. If, however, the network between the server and the target device fails, the reception order may differ from the path order determined by the server, so the order determined by the target device would differ from that determined on the server side. The target device may therefore, on receiving the at least one candidate path navigation video, also perform text recognition on them and determine their path order from their start and end candidate text information; or start and end candidate reference object information may be extracted from the at least one candidate path navigation video and the path order determined on that basis. This is not specifically limited in the embodiments of the present invention.
When the server determines the path order of the at least one candidate path navigation video based on their start and end candidate text information, it proceeds as follows: for a third candidate path navigation video (any one of the at least one candidate path navigation videos), it selects from a video set — the remaining candidate path navigation videos excluding the third — the candidate path navigation video whose start candidate text information equals the third video's end candidate text information, sets the selected video's path order after the third, and judges whether the video set still contains candidate path navigation videos other than the selected one. If so, it takes the selected video as the new third candidate path navigation video, removes it from the video set to update the set and, based on the updated set, returns to the selecting step, until no candidate path navigation video remains in the set, thereby determining the path order of the at least one candidate path navigation video.
The method by which the server determines the path order of the at least one candidate path navigation video based on their start and end candidate reference object information is the same as the above method based on text information and is not elaborated again in the embodiments of the present disclosure.
For instance, the at least one candidate path navigation video consists of navigation videos 21, 23 and 22, with navigation video 21 having start candidate text information A and end B, navigation video 22 start D and end F, and navigation video 23 start B and end D. If the third candidate path navigation video is navigation video 21, then navigation videos 22 and 23 form the video set. From the set, the video whose start candidate text information equals navigation video 21's end candidate text information B is navigation video 23; since the set still contains a video besides navigation video 23, navigation video 23 becomes the third candidate path navigation video and is removed from the set, giving the updated set. From the updated set, the video whose start candidate text information equals navigation video 23's end candidate text information D is navigation video 22, and no video remains in the updated set besides navigation video 22. The path order of the at least one candidate path navigation video is therefore navigation video 21, navigation video 23, navigation video 22.
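The ordering procedure above can also be expressed by first finding the head segment — the one whose start label is no other segment's end label — and then following end-to-start matches. A sketch under the same hypothetical (start label, end label) representation:

```python
def order_segments(segments):
    """segments: dict name -> (start label, end label), received in any
    order but known to form a single path. Returns names in path order."""
    end_labels = {e for _, e in segments.values()}
    # the head is the unique segment whose start label nothing else ends at
    current = next(n for n, (s, _) in segments.items() if s not in end_labels)
    ordered = [current]
    while len(ordered) < len(segments):
        tail = segments[current][1]
        current = next(n for n, (s, _) in segments.items() if s == tail)
        ordered.append(current)
    return ordered

# the three videos of the example, deliberately out of order
received = {"video23": ("B", "D"), "video22": ("D", "F"), "video21": ("A", "B")}
```

This recovers the order navigation video 21, 23, 22 even when the videos arrive out of order, which is the failure mode the text attributes to network faults.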
It should be noted that when determining the target path navigation video the server may determine multiple navigation path videos, select a default one from them, determine it as the target path navigation video and send it to the target device, so that the target device navigates based on it. The server may also send the multiple navigation path videos to the target device separately, so that the user selects one of them for navigation; this is not specifically limited in the embodiments of the present disclosure.
In step 304, when the target device receives the target path navigation video from the start point to the end point sent by the server, it plays the target path navigation video in response to a received navigation trigger operation.
When the target device receives the target path navigation video from the start point to the end point sent by the server and a navigation trigger operation is received, the target device may respond to that operation and play the target path navigation video. The navigation trigger operation may be triggered by the user; this is not specifically limited in the embodiments of the present disclosure.
While playing the target path navigation video, the target device may also detect the current motion speed and play the target path navigation video based on it, so that the playback speed of the target path navigation video equals the current motion speed, thereby improving the navigation effect.
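Matching the playback speed to the detected motion speed can be sketched as scaling the playback rate by the ratio of the user's current speed to the speed at which the video was recorded; that the recording speed is known (or estimated) is an assumption made here for illustration:

```python
def playback_rate(current_speed_m_s, recording_speed_m_s):
    """Return the playback rate factor so the video advances along the
    path as fast as the user walks (1.0 = normal playback speed)."""
    if current_speed_m_s <= 0:
        return 0.0                     # user has stopped: pause the video
    return current_speed_m_s / recording_speed_m_s

# user walks 1.5 m/s; the video was recorded while walking 1.0 m/s
rate = playback_rate(1.5, 1.0)
```

The current speed would in practice come from the device's acceleration sensor or positioning updates; the sketch only shows the rate computation.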
When the target path navigation video is composed of at least one candidate path navigation video, since these videos have a certain path order, the target device may, when navigating based on them, play the at least one candidate path navigation video in that path order. For each of the at least one candidate path navigation videos, when playback reaches the position in it associated with location information, route confirmation prompt information is displayed; the route confirmation prompt information is used to prompt the user to confirm whether the navigation path has been deviated from, the location information being the location information corresponding to a target image collected by the video collection device while it was stationary during collection of the mobile video. When the target device receives a route re-planning instruction based on the route confirmation prompt information, it sends a route re-planning request to the server. On receiving the path re-planning request sent by the target device, the server acquires a new target path navigation video according to it and sends the new target path navigation video to the target device, so that the target device navigates based on the new target path navigation video.
It should be noted that since the server, when storing the multiple candidate path navigation videos decomposed from the mobile video, also associates each candidate path navigation video with location information, for each of the at least one candidate path navigation videos the target device may pause playback and display the route confirmation prompt information when playback reaches the position associated with location information.
When the target device receives a re-planning instruction, it can determine that the current navigation path does not match the path the user expects, and may have the server re-plan the navigation path. Further, when the target device receives a confirmation instruction based on the route confirmation prompt information, it determines that the current navigation path matches the path the user expects and may continue playing the candidate path navigation video; if the currently playing video is the last of the at least one candidate path navigation videos, playback of the candidate path navigation video stops.
It should be noted that the route confirmation prompt information may be text information, voice information, or a combination of the two; this is not specifically limited in the embodiments of the present disclosure. In addition, the route re-planning instruction and the confirmation instruction may be triggered by the user through a specified operation; this is likewise not specifically limited.
Further, when the server receives the route re-planning request sent by the target device, if the request carries new start point information the server may re-select a target path navigation video based on the new start point information and the end point information, and send the re-selected target path navigation video to the target device, so that the target device navigates based on it.
It should be noted that for indoor navigation the venue where the target device is when it sends the route re-planning request may or may not be the venue of the original start point information; the route re-planning request sent to the server may therefore either carry new geographic location information — the geographic location information of the target device when it sends the request — or not. If the request does not carry new geographic location information, the server may re-select the target path navigation video from the multiple candidate path navigation videos acquired in step 302. If it does, the server may, based on the new geographic location information, re-acquire from the stored candidate path navigation videos the multiple candidate path navigation videos corresponding to it, and then select the target path navigation video from the re-acquired videos.
In the embodiments of the present disclosure, for outdoor navigation the target path navigation video makes navigation more intuitive and lowers the navigation threshold. For indoor navigation, reference object information or text information naturally present in indoor venues is used as calibration points, and the target path navigation video on the target path is determined by recognizing that reference object information or text information in navigation videos, eliminating the need to manually install a dedicated infrared sensing infrastructure, with strong versatility and adaptability and great savings in physical equipment and labor. Moreover, while navigating with the target path navigation video the user can judge in real time, by checking the video against the actual path, whether the current target path has deviated; when it has, the target path navigation video can be re-determined, improving navigation accuracy.
Fig. 4 is a block diagram of a navigation device according to an exemplary embodiment. As shown in Fig. 4, the device includes a first receiving module 401, a first acquiring module 402 and a first sending module 403.
The first receiving module 401 is configured to receive start point information and end point information sent by a target device;
the first acquiring module 402 is configured to acquire, based on the start point information and the end point information received by the first receiving module 401, a target path navigation video from the start point to the end point;
the first sending module 403 is configured to send the target path navigation video acquired by the first acquiring module 402 to the target device.
In another embodiment of the present disclosure, as shown in Fig. 5, the first acquiring module 402 includes a first acquiring unit 4021.
The first acquiring unit 4021 is configured to acquire the target path navigation video based on start point location information and end point location information, the start point information including the start point location information and the end point information including the end point location information.
In another embodiment of the present disclosure, as shown in Fig. 6, the start point information includes a start point environment image and the end point information includes an end point environment image;
the first acquiring unit 4021 includes a first extracting subunit 40211, a first determining subunit 40212 and a first acquiring subunit 40213;
the first extracting subunit 40211 is configured to extract start point reference object information from the start point environment image and extract end point reference object information from the end point environment image;
the first determining subunit 40212 is configured to determine the start point reference object information extracted by the first extracting subunit 40211 as the start point location information, and determine the end point reference object information extracted by the first extracting subunit 40211 as the end point location information;
the first acquiring subunit 40213 is configured to acquire the target path navigation video based on the start point reference object information and the end point reference object information determined by the first determining subunit 40212.
In another embodiment of the present disclosure, as shown in Fig. 7, the start point information includes a start point environment image and the end point information includes an end point environment image;
the first acquiring unit 4021 includes a second extracting subunit 40214, a second determining subunit 40215 and a second acquiring subunit 40216;
the second extracting subunit 40214 is configured to extract start point text information from the start point environment image and extract end point text information from the end point environment image;
the second determining subunit 40215 is configured to determine the start point text information extracted by the second extracting subunit 40214 as the start point location information, and determine the end point text information extracted by the second extracting subunit as the end point location information;
the second acquiring subunit 40216 is configured to acquire the target path navigation video based on the start point text information and the end point text information determined by the second determining subunit 40215.
In another embodiment of the present disclosure, as shown in Fig. 8, the first acquiring module 402 includes an intercepting unit 4022.
The intercepting unit 4022 is configured to intercept the target path navigation video from one stored candidate path navigation video based on the start point information and the end point information.
In another embodiment of the present disclosure, as shown in Fig. 9, the first acquiring module 402 includes a second acquiring unit 4023.
The second acquiring unit 4023 is configured to acquire the target path navigation video from a plurality of stored candidate path navigation videos based on the start point information and the end point information.
In another embodiment of the present disclosure, as shown in Fig. 10, the device further includes a second acquiring module 404.
The second acquiring module 404 is configured to acquire candidate path navigation videos.
In another embodiment of the present disclosure, as shown in Fig. 11, the second acquiring module 404 further includes a third acquiring unit 4041 and an associating unit 4042.
The third acquiring unit 4041 is configured to acquire a mobile video and location information, the location information being the location information corresponding to a target image collected by a video collection device while it was stationary during collection of the mobile video;
the associating unit 4042 is configured to associate the location information acquired by the third acquiring unit with the target image to obtain a candidate path navigation video.
In another embodiment of the present disclosure, the location information includes reference object information or text information.
In another embodiment of the present disclosure, as shown in Fig. 12, the device further includes a second receiving module 405, a third acquiring module 406 and a second sending module 407.
The second receiving module 405 is configured to receive a path re-planning request sent by the target device;
the third acquiring module 406 is configured to acquire a new target path navigation video based on the path re-planning request received by the second receiving module 405;
the second sending module 407 is configured to send the new target path navigation video acquired by the third acquiring module 406 to the target device, so that the target device navigates based on the new target path navigation video.
In the embodiments of the present invention, start point information and end point information sent by a target device are received; a target path navigation video from the start point to the end point is acquired based on them and sent to the target device, so that the target device navigates based on the target path navigation video, eliminating the need to manually install a dedicated infrared sensing infrastructure, with strong versatility and adaptability and great savings in physical equipment and labor.
With regard to the devices in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments concerning the method and will not be elaborated here.
Fig. 13 is a block diagram of a navigation device according to an exemplary embodiment. As shown in Fig. 13, the device includes a first acquiring module 1301, a first sending module 1302, a receiving module 1303 and a playing module 1304.
The first acquiring module 1301 is configured to acquire start point information and end point information;
the first sending module 1302 is configured to send the start point information and the end point information acquired by the first acquiring module 1301 to a server;
the receiving module 1303 is configured to receive a target path navigation video from the start point to the end point sent by the server, the target path navigation video being acquired by the server based on the start point information and the end point information sent by the first sending module 1302;
the playing module 1304 is configured to play, in response to a received navigation trigger operation, the target path navigation video received by the receiving module 1303.
In another embodiment of the present disclosure, as shown in Fig. 14, the first acquiring module 1301 includes an acquiring unit 13011.
The acquiring unit 13011 is configured to acquire a start point environment image and an end point environment image when a navigation instruction is received.
In another embodiment of the present disclosure, as shown in Fig. 15, the playing module 1304 includes a detecting unit 13041 and a playing unit 13042.
The detecting unit 13041 is configured to detect a current motion speed;
the playing unit 13042 is configured to play the target path navigation video based on the motion speed detected by the detecting unit, so that the playback speed of the target path navigation video equals the motion speed.
In another embodiment of the present disclosure, as shown in Fig. 16, the playing module 1304 includes a displaying unit 13043 and a sending unit 13044.
The displaying unit 13043 is configured to display route confirmation prompt information when playback reaches a target image position in the target path navigation video, the route confirmation prompt information being used to prompt the user to confirm whether the target path has been deviated from;
the sending unit 13044 is configured to send a route re-planning request to the server when a route re-planning instruction is received based on the route confirmation prompt information displayed by the displaying unit, so that the server acquires a new target path navigation video based on the route re-planning request.
In another embodiment of the present disclosure, as shown in Fig. 17, the device further includes a second acquiring module 1305 and a second sending module 1306.
The second acquiring module 1305 is configured to acquire a mobile video and location information;
the second sending module 1306 is configured to send the mobile video and the location information acquired by the second acquiring module to the server, so that the server associates the mobile video with the target image, the location information being the location information corresponding to the target image collected while stationary during collection of the mobile video.
In another embodiment of the present disclosure, as shown in Fig. 18, the device further includes a third acquiring module 1307, an associating module 1308 and a third sending module 1309.
The third acquiring module 1307 is configured to acquire a mobile video and location information;
the associating module 1308 is configured to associate the mobile video acquired by the third acquiring module with the target image to obtain a candidate path navigation video, the location information being the location information corresponding to the target image collected while stationary during collection of the mobile video;
the third sending module 1309 is configured to send the candidate path navigation video associated by the associating module to the server.
In the embodiments of the present invention, start point information and end point information sent by a target device are received; a target path navigation video from the start point to the end point is acquired based on them and sent to the target device, so that the target device navigates based on the target path navigation video, eliminating the need to manually install a dedicated infrared sensing infrastructure, with strong versatility and adaptability and great savings in physical equipment and labor.
With regard to the devices in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments concerning the method and will not be elaborated here.
图19是根据一示例性实施例示出的一种用于导航的装置1900的框图。例如,装置1900可以被提供为一服务器。参照图19,装置1900包括处理组件1922,其进一步包括 一个或多个处理器,以及由存储器1932所代表的存储器资源,用于存储可由处理部件1922的执行的指令,例如应用程序。存储器1932中存储的应用程序可以包括一个或一个以上的每一个对应于一组指令的模块。装置1900还可以包括一个电源组件1926被配置为执行装置1900的电源管理,一个有线或无线网络接口1950被配置为将装置1900连接到网络,和一个输入输出(I/O)接口1958。装置1900可以操作基于存储在存储器1932的操作系统,例如Windows ServerTM,Mac OS XTM,UnixTM,LinuxTM,FreeBSDTM或类似。
此外,处理组件1922被配置为执行指令,以执行下述导航的方法,该方法包括:
接收目标设备发送的起点信息和终点信息;
基于该起点信息和该终点信息,获取从起点到达终点的目标路径导航视频;
向该目标设备发送所述目标路径导航视频。
在本公开的另一实施例中,基于该起点信息和该终点信息,获取从起点到达终点的目标路径导航视频,包括:
基于起点位置信息和终点位置信息获取该目标路径导航视频,该起点信息包括该起点位置信息,该终点信息包括该终点位置信息。
在本公开的另一实施例中,起点信息包括起点环境图像,该终点信息包括终点环境图像;
基于起点位置信息和终点位置信息获取该目标路径导航视频,包括:
从该起点环境图像中提取起点参照物信息,并从该终点环境图像中提取终点参照物信息;
将该起点参照物信息确定为起点位置信息,并将该终点参照物信息确定为终点位置信息;
基于该起点参照物信息和该终点参照物信息,获取该目标路径导航视频。
在本公开的另一实施例中,起点信息包括起点环境图像,该终点信息包括终点环境图像;
基于起点位置信息和终点位置信息获取该目标路径导航视频,包括:
从该起点环境图像中提取起点文字信息,并从该终点环境图像中提取终点文字信息;
将该起点文字信息确定为起点位置信息,并将该终点文字信息确定为终点位置信息;
基于该起点文字信息和该终点文字信息,获取该目标路径导航视频。
在本公开的另一实施例中,基于该起点信息和该终点信息,获取从起点到达终点的目标路径导航视频,包括:
基于该起点信息和所述终点信息,从存储的一个候选路径导航视频中截取该目标路径导航视频。
在本公开的另一实施例中,基于该起点信息和该终点信息,获取从起点到达终点的目标路径导航视频,包括:
基于该起点信息和该终点信息,从存储的多个候选路径导航视频中,获取该目标路径导航视频。
在本公开的另一实施例中,在基于该起点信息和该终点信息,获取从起点到达终点的目标路径导航视频之前,该方法还包括:
获取候选路径导航视频。
在本公开的另一实施例中,获取候选路径导航视频,包括:
获取移动视频和位置信息,该位置信息为视频采集设备在采集该移动视频过程中,处于静止状态时采集的目标图像对应的位置信息;
将该位置信息与目标图像进行关联,得到候选路径导航视频。
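"将处于静止状态时采集的目标图像与位置信息进行关联"这一步,可按下述草图理解:以每帧采集时的运动速度判断是否静止,给静止时采集的帧打上位置标签。各变量名均为本示例的假设:

```python
def build_candidate_video(frames, speeds, positions):
    """frames: 帧列表;speeds: 采集每帧时视频采集设备的速度;
    positions: 静止状态下依次记录的位置信息。
    返回 (帧, 位置信息或None) 序列,即带标注的候选路径导航视频。"""
    pos_iter = iter(positions)
    annotated = []
    for frame, v in zip(frames, speeds):
        # 速度为0视为静止状态,此时采集的目标图像关联一条位置信息
        tag = next(pos_iter) if v == 0 else None
        annotated.append((frame, tag))
    return annotated
```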
在本公开的另一实施例中,该位置信息包括参照物信息或文字信息。
在本公开的另一实施例中,在向该目标设备发送该目标路径导航视频之后,该方法还包括:
接收该目标设备发送的路径重新规划请求;
基于该路径重新规划请求获取新目标路径导航视频;
向该目标设备发送该新目标路径导航视频,使该目标设备基于该新目标路径导航视频进行导航。
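服务器处理路径重新规划请求的逻辑可示意如下:以请求中携带的当前位置为新起点、原终点不变,重新检索导航视频并返回。request 的字段名与 get_video 函数均为示例假设:

```python
def handle_replan_request(request, get_video):
    """基于路径重新规划请求获取新目标路径导航视频并返回给目标设备。
    request 假设至少包含用户当前位置 current 与原终点 end;
    get_video 为按起点、终点检索导航视频的函数。"""
    new_video = get_video(request["current"], request["end"])
    return {"type": "new_target_path_video", "video": new_video}
```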
在本公开实施例中,接收目标设备发送的起点信息和终点信息,基于起点信息和终点信息,获取从起点到达终点的目标路径导航视频,向目标设备发送目标路径导航视频,使目标设备基于目标路径导航视频进行导航,从而更直观地进行导航,降低了导航门槛,省去了人工建立统一的红外感应装置,通用性和适用性强,且节省了大量的物理设备和劳动力。
图20是根据一示例性实施例示出的一种用于导航的装置2000的框图。例如,装置2000可以是移动电话,计算机,数字广播终端,消息收发设备,游戏控制台,平板设备,医疗设备,健身设备,个人数字助理、智能眼镜、智能手表等。
参照图20,装置2000可以包括以下一个或多个组件:处理组件2002,存储器2004,电源组件2006,多媒体组件2008,音频组件2010,输入/输出(I/O)的接口2012,传感器组件2014,以及通信组件2016。
处理组件2002通常控制装置2000的整体操作,诸如与显示、电话呼叫、数据通信、相机操作和记录操作相关联的操作。处理组件2002可以包括一个或多个处理器2020来执行指令,以完成上述方法的全部或部分步骤。此外,处理组件2002可以包括一个或多个模块,便于处理组件2002和其他组件之间的交互。例如,处理组件2002可以包括多媒体模块,以方便多媒体组件2008和处理组件2002之间的交互。
存储器2004被配置为存储各种类型的数据以支持装置2000的操作。这些数据的示例包括用于在装置2000上操作的任何应用程序或方法的指令、联系人数据、电话簿数据、消息、图片、视频等。存储器2004可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM)、电可擦除可编程只读存储器(EEPROM)、可擦除可编程只读存储器(EPROM)、可编程只读存储器(PROM)、只读存储器(ROM)、磁存储器、快闪存储器、磁盘或光盘。
电源组件2006为装置2000的各种组件提供电力。电源组件2006可以包括电源管理系统、一个或多个电源,及其他与为装置2000生成、管理和分配电力相关联的组件。
多媒体组件2008包括一个在所述装置2000和用户之间提供输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件2008包括一个前置摄像头和/或后置摄像头。当装置2000处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统,或具有焦距和光学变焦能力的光学透镜系统。
音频组件2010被配置为输出和/或输入音频信号。例如,音频组件2010包括一个麦克风(MIC),当装置2000处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器2004或经由通信组件2016发送。在一些实施例中,音频组件2010还包括一个扬声器,用于输出音频信号。
I/O接口2012为处理组件2002和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件2014包括一个或多个传感器,用于为装置2000提供各个方面的状态评估。例如,传感器组件2014可以检测装置2000的打开/关闭状态,以及组件(例如装置2000的显示器和小键盘)的相对定位;传感器组件2014还可以检测装置2000或装置2000某一组件的位置改变,用户与装置2000接触的存在或不存在,装置2000的方位或加速/减速,以及装置2000的温度变化。传感器组件2014可以包括接近传感器,被配置用来在没有任何物理接触时检测附近物体的存在。传感器组件2014还可以包括光传感器,如CMOS或CCD图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件2014还可以包括加速度传感器、陀螺仪传感器、磁传感器、压力传感器或温度传感器。
通信组件2016被配置为便于装置2000和其他设备之间有线或无线方式的通信。装置2000可以接入基于通信标准的无线网络,如WiFi、2G或3G,或它们的组合。在一个示例性实施例中,通信组件2016经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件2016还包括近场通信(NFC)模块,以促进短程通信。例如,NFC模块可基于射频识别(RFID)技术、红外数据协会(IrDA)技术、超宽带(UWB)技术、蓝牙(BT)技术和其他技术来实现。
在示例性实施例中,装置2000可以被一个或多个应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述方法。
在示例性实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括指令的存储器2004,上述指令可由装置2000的处理器2020执行以完成上述方法。例如,所述非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
一种非临时性计算机可读存储介质,当所述存储介质中的指令由目标设备的处理器执行时,使得目标设备能够执行一种导航方法,所述方法包括:
获取起点信息和终点信息;
向服务器发送该起点信息和该终点信息;
接收该服务器发送的从起点到达终点的目标路径导航视频,该目标路径导航视频是该服务器基于该起点信息和该终点信息获取得到。
在本公开的另一实施例中,起点信息包括起点环境图像,该终点信息包括终点环境图像;
获取起点信息和终点信息,包括:
当接收到导航指令时,获取起点环境图像和终点环境图像。
在本公开的另一实施例中,播放该目标路径导航视频,包括:
检测当前的运动速度;
基于该运动速度,播放该目标路径导航视频,使该目标路径导航视频的播放速度与该运动速度相等。
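"使播放速度与运动速度相等"可以换算为播放倍率:用户当前速度除以采集该视频时的行进速度。下面是仅作示意的Python草图,函数名与默认采集速度均为本示例的假设:

```python
def playback_rate(motion_speed, capture_speed=1.2):
    """根据检测到的当前运动速度计算视频播放倍率,
    使画面前进速度与用户实际前进速度相等。
    motion_speed 与 capture_speed 单位一致即可(如米/秒)。"""
    if capture_speed <= 0:
        raise ValueError("采集速度必须为正")
    if motion_speed <= 0:
        return 0.0  # 用户静止时暂停播放
    return motion_speed / capture_speed
```

例如采集时以1.2米/秒步行拍摄,用户当前以2.4米/秒行进,则以2倍速播放。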
在本公开的另一实施例中,播放该目标路径导航视频,包括:
当播放到该目标路径导航视频中与位置信息进行关联的位置时,显示路线确认提示信息,该路线确认提示信息用于提示用户确认该目标路径是否偏离,该位置信息为视频采集设备在采集移动视频过程中,处于静止状态时采集的目标图像对应的位置信息;
当基于该路线确认提示信息接收到路线重新规划指令时,向该服务器发送路径重新规划请求,使该服务器基于该路径重新规划请求获取新目标路径导航视频。
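目标设备侧"播放到关联位置时提示确认、偏离时请求重新规划"的分支逻辑,可示意如下。回调函数 user_confirms_on_route 仅为示例,代替真实的提示信息交互:

```python
def on_reach_target_image(position_tag, user_confirms_on_route):
    """播放到与位置信息关联的目标图像处时的处理。
    position_tag: 该帧关联的位置信息,无关联则为 None;
    user_confirms_on_route: 用户确认是否仍在目标路径上的回调。"""
    if position_tag is None:
        return "continue"            # 普通帧,继续播放
    if user_confirms_on_route(position_tag):
        return "continue"            # 用户确认未偏离目标路径
    return "send_replan_request"     # 偏离:向服务器发送路径重新规划请求
```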
在本公开的另一实施例中,该方法还包括:
获取移动视频和位置信息;
将该移动视频和该位置信息发送给该服务器,使该服务器将该移动视频和该位置信息进行关联。
在本公开的另一实施例中,该方法还包括:
获取移动视频和位置信息;
将该移动视频和该位置信息进行关联,得到候选路径导航视频;
将该候选路径导航视频发送给该服务器。
在本公开实施例中,接收目标设备发送的起点信息和终点信息,基于起点信息和终点信息,获取从起点到达终点的目标路径导航视频,向目标设备发送目标路径导航视频,使目标设备基于目标路径导航视频进行导航,从而更直观地进行导航,降低了导航门槛,省去了人工建立统一的红外感应装置,通用性和适用性强,且节省了大量的物理设备和劳动力。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本发明的其它实施方案。本申请旨在涵盖本发明的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本发明的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本发明的真正范围和精神由下面的权利要求指出。
应当理解的是,本发明并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本发明的范围仅由所附的权利要求来限制。

Claims (34)

  1. 一种导航方法,其特征在于,所述方法包括:
    接收目标设备发送的起点信息和终点信息;
    基于所述起点信息和所述终点信息,获取从起点到达终点的目标路径导航视频;
    向所述目标设备发送所述目标路径导航视频。
  2. 如权利要求1所述的方法,其特征在于,所述基于所述起点信息和所述终点信息,获取从起点到达终点的目标路径导航视频,包括:
    基于起点位置信息和终点位置信息获取所述目标路径导航视频,所述起点信息包括所述起点位置信息,所述终点信息包括所述终点位置信息。
  3. 如权利要求2所述的方法,其特征在于,所述起点信息包括起点环境图像,所述终点信息包括终点环境图像;
    所述基于起点位置信息和终点位置信息获取所述目标路径导航视频,包括:
    从所述起点环境图像中提取起点参照物信息,并从所述终点环境图像中提取终点参照物信息;
    将所述起点参照物信息确定为起点位置信息,并将所述终点参照物信息确定为终点位置信息;
    基于所述起点参照物信息和所述终点参照物信息,获取所述目标路径导航视频。
  4. 如权利要求2所述的方法,其特征在于,所述起点信息包括起点环境图像,所述终点信息包括终点环境图像;
    所述基于起点位置信息和终点位置信息获取所述目标路径导航视频,包括:
    从所述起点环境图像中提取起点文字信息,并从所述终点环境图像中提取终点文字信息;
    将所述起点文字信息确定为起点位置信息,并将所述终点文字信息确定为终点位置信息;
    基于所述起点文字信息和所述终点文字信息,获取所述目标路径导航视频。
  5. 如权利要求1所述的方法,其特征在于,所述基于所述起点信息和所述终点信息,获取从起点到达终点的目标路径导航视频,包括:
    基于所述起点信息和所述终点信息,从存储的一个候选路径导航视频中截取所述目标路径导航视频。
  6. 如权利要求1所述的方法,其特征在于,所述基于所述起点信息和所述终点信息,获取从起点到达终点的目标路径导航视频,包括:
    基于所述起点信息和所述终点信息,从存储的多个候选路径导航视频中,获取所述目标路径导航视频。
  7. 如权利要求1至6任一权利要求所述的方法,其特征在于,在所述基于所述起点信息和所述终点信息,获取从起点到达终点的目标路径导航视频之前,所述方法还包括:
    获取候选路径导航视频。
  8. 如权利要求7所述的方法,其特征在于,所述获取候选路径导航视频,包括:
    获取移动视频和位置信息,所述位置信息为视频采集设备在采集所述移动视频过程中,处于静止状态时采集的目标图像对应的位置信息;
    将所述位置信息与所述目标图像进行关联,得到候选路径导航视频。
  9. 如权利要求8所述的方法,其特征在于,所述位置信息包括参照物信息或文字信息。
  10. 如权利要求1所述的方法,其特征在于,在所述向所述目标设备发送所述目标路径导航视频之后,所述方法还包括:
    接收所述目标设备发送的路径重新规划请求;
    基于所述路径重新规划请求获取新目标路径导航视频;
    向所述目标设备发送所述新目标路径导航视频,使所述目标设备基于所述新目标路径导航视频进行导航。
  11. 一种导航方法,其特征在于,所述方法包括:
    获取起点信息和终点信息;
    向服务器发送所述起点信息和所述终点信息;
    接收所述服务器发送的从起点到达终点的目标路径导航视频,所述目标路径导航视频是所述服务器基于所述起点信息和所述终点信息获取得到;
    响应于接收到的导航触发操作,播放所述目标路径导航视频。
  12. 如权利要求11所述的方法,其特征在于,所述起点信息包括起点环境图像,所述终点信息包括终点环境图像;
    所述获取起点信息和终点信息,包括:
    当接收到导航指令时,获取起点环境图像和终点环境图像。
  13. 如权利要求11所述的方法,其特征在于,所述播放所述目标路径导航视频,包括:
    检测当前的运动速度;
    基于所述运动速度,播放所述目标路径导航视频,使所述目标路径导航视频的播放速度与所述运动速度相等。
  14. 如权利要求11所述的方法,其特征在于,所述播放所述目标路径导航视频,包括:
    当播放到所述目标路径导航视频中目标图像位置处时,显示路线确认提示信息,所述路线确认提示信息用于提示用户确认所述目标路径是否偏离;
    当基于所述路线确认提示信息接收到路线重新规划指令时,向所述服务器发送路径重新规划请求,使所述服务器基于所述路径重新规划请求获取新目标路径导航视频。
  15. 如权利要求11至14任一权利要求所述的方法,其特征在于,所述方法还包括:
    获取移动视频和位置信息;
    将所述移动视频和所述位置信息发送给所述服务器,使所述服务器将所述移动视频和目标图像进行关联,所述位置信息为采集移动视频过程中,处于静止状态时采集的所述目标图像对应的位置信息。
  16. 如权利要求11至14任一权利要求所述的方法,其特征在于,所述方法还包括:
    获取移动视频和位置信息;
    将所述移动视频和目标图像进行关联,得到候选路径导航视频,所述位置信息为采集移动视频过程中,处于静止状态时采集的所述目标图像对应的位置信息;
    将所述候选路径导航视频发送给所述服务器。
  17. 一种导航装置,其特征在于,所述装置包括:
    第一接收模块,用于接收目标设备发送的起点信息和终点信息;
    第一获取模块,用于基于所述第一接收模块接收的所述起点信息和所述终点信息,获取从起点到达终点的目标路径导航视频;
    第一发送模块,用于向所述目标设备发送所述第一获取模块获取的所述目标路径导航视频。
  18. 如权利要求17所述的装置,其特征在于,所述第一获取模块包括:
    第一获取单元,用于基于起点位置信息和终点位置信息获取所述目标路径导航视频,所述起点信息包括所述起点位置信息,所述终点信息包括所述终点位置信息。
  19. 如权利要求18所述的装置,其特征在于,所述起点信息包括起点环境图像,所述终点信息包括终点环境图像;
    所述第一获取单元包括:
    第一提取子单元,用于从所述起点环境图像中提取起点参照物信息,并从所述终点环境图像中提取终点参照物信息;
    第一确定子单元,用于将所述第一提取子单元提取的所述起点参照物信息确定为起点位置信息,并将所述第一提取子单元提取的所述终点参照物信息确定为终点位置信息;
    第一获取子单元,用于基于所述第一确定子单元确定的所述起点参照物信息和所述终点参照物信息,获取所述目标路径导航视频。
  20. 如权利要求18所述的装置,其特征在于,所述起点信息包括起点环境图像,所述终点信息包括终点环境图像;
    所述第一获取单元包括:
    第二提取子单元,用于从所述起点环境图像中提取起点文字信息,并从所述终点环境图像中提取终点文字信息;
    第二确定子单元,用于将所述第二提取子单元提取的所述起点文字信息确定为起点位置信息,并将所述第二提取子单元提取的所述终点文字信息确定为终点位置信息;
    第二获取子单元,用于基于所述第二确定子单元确定的所述起点文字信息和所述终点文字信息,获取所述目标路径导航视频。
  21. 如权利要求17所述的装置,其特征在于,所述第一获取模块包括:
    截取单元,用于基于所述起点信息和所述终点信息,从存储的一个候选路径导航视频中截取所述目标路径导航视频。
  22. 如权利要求17所述的装置,其特征在于,所述第一获取模块包括:
    第二获取单元,用于基于所述起点信息和所述终点信息,从存储的多个候选路径导航视频中,获取所述目标路径导航视频。
  23. 如权利要求17至22任一权利要求所述的装置,其特征在于,所述装置还包括:
    第二获取模块,用于获取候选路径导航视频。
  24. 如权利要求23所述的装置,其特征在于,所述第二获取模块包括:
    第三获取单元,用于获取移动视频和位置信息,所述位置信息为视频采集设备在采集所述移动视频过程中,处于静止状态时采集的目标图像对应的位置信息;
    关联单元,用于将所述第三获取单元获取的所述位置信息与所述目标图像进行关联,得到候选路径导航视频。
  25. 如权利要求24所述的装置,其特征在于,所述位置信息包括参照物信息或文字信息。
  26. 如权利要求17所述的装置,其特征在于,所述装置还包括:
    第二接收模块,用于接收所述目标设备发送的路径重新规划请求;
    第三获取模块,用于基于所述第二接收模块接收的所述路径重新规划请求获取新目标路径导航视频;
    第二发送模块,用于向所述目标设备发送所述第三获取模块获取的所述新目标路径导航视频,使所述目标设备基于所述新目标路径导航视频进行导航。
  27. 一种导航装置,其特征在于,所述装置包括:
    第一获取模块,用于获取起点信息和终点信息;
    第一发送模块,用于向服务器发送所述第一获取模块获取的所述起点信息和所述终点信息;
    接收模块,用于接收所述服务器发送的从起点到达终点的目标路径导航视频,所述目标路径导航视频是所述服务器基于所述第一发送模块发送的所述起点信息和所述终点信息获取得到;
    播放模块,用于响应于接收到的导航触发操作,播放所述接收模块接收的所述目标路径导航视频。
  28. 如权利要求27所述的装置,其特征在于,所述起点信息包括起点环境图像,所述终点信息包括终点环境图像;
    所述第一获取模块包括:
    获取单元,用于当接收到导航指令时,获取起点环境图像和终点环境图像。
  29. 如权利要求27所述的装置,其特征在于,所述播放模块包括:
    检测单元,用于检测当前的运动速度;
    播放单元,用于基于所述检测单元检测的所述运动速度,播放所述目标路径导航视频,使所述目标路径导航视频的播放速度与所述运动速度相等。
  30. 如权利要求27所述的装置,其特征在于,所述播放模块包括:
    显示单元,用于当播放到所述目标路径导航视频中目标图像位置处时,显示路线确认提示信息,所述路线确认提示信息用于提示用户确认所述目标路径是否偏离;
    发送单元,用于当基于所述显示单元显示的所述路线确认提示信息接收到路线重新规划指令时,向所述服务器发送路径重新规划请求,使所述服务器基于所述路径重新规划请求获取新目标路径导航视频。
  31. 如权利要求27至30任一权利要求所述的装置,其特征在于,所述装置还包括:
    第二获取模块,用于获取移动视频和位置信息;
    第二发送模块,用于将所述第二获取模块获取的所述移动视频和所述位置信息发送给所述服务器,使所述服务器将所述移动视频和目标图像进行关联,所述位置信息为采集移动视频过程中,处于静止状态时采集的所述目标图像对应的位置信息。
  32. 如权利要求27至30任一权利要求所述的装置,其特征在于,所述装置还包括:
    第三获取模块,用于获取移动视频和位置信息;
    关联模块,用于将所述第三获取模块获取的所述移动视频和目标图像进行关联,得到候选路径导航视频,所述位置信息为采集移动视频过程中,处于静止状态时采集的所述目标图像对应的位置信息;
    第三发送模块,用于将所述关联模块关联的所述候选路径导航视频发送给所述服务器。
  33. 一种导航装置,其特征在于,所述装置包括:
    处理器;
    用于存储处理器可执行指令的存储器;
    其中,所述处理器被配置为:
    接收目标设备发送的起点信息和终点信息;
    基于所述起点信息和所述终点信息,获取从起点到达终点的目标路径导航视频;
    向所述目标设备发送所述目标路径导航视频。
  34. 一种导航装置,其特征在于,所述装置包括:
    处理器;
    用于存储处理器可执行指令的存储器;
    其中,所述处理器被配置为:
    获取起点信息和终点信息;
    向服务器发送所述起点信息和所述终点信息;
    接收所述服务器发送的从起点到达终点的目标路径导航视频,所述目标路径导航视频是所述服务器基于所述起点信息和所述终点信息获取得到;
    响应于接收到的导航触发操作,播放所述目标路径导航视频。
PCT/CN2015/099732 2015-09-29 2015-12-30 导航方法及装置 WO2017054358A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020167007053A KR101870052B1 (ko) 2015-09-29 2015-12-30 네비게이션 방법, 장치, 프로그램 및 기록 매체
MX2016004100A MX368765B (es) 2015-09-29 2015-12-30 Método y dispositivo de navegación.
JP2017542265A JP6387468B2 (ja) 2015-09-29 2015-12-30 ナビゲーション方法、装置、プログラム及び記録媒体
RU2016112941A RU2636270C2 (ru) 2015-09-29 2015-12-30 Способ и устройство навигации

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510634512.8A CN105222773B (zh) 2015-09-29 2015-09-29 导航方法及装置
CN201510634512.8 2015-09-29

Publications (1)

Publication Number Publication Date
WO2017054358A1 true WO2017054358A1 (zh) 2017-04-06

Family

ID=54991863

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/099732 WO2017054358A1 (zh) 2015-09-29 2015-12-30 导航方法及装置

Country Status (8)

Country Link
US (1) US10267641B2 (zh)
EP (1) EP3150964B1 (zh)
JP (1) JP6387468B2 (zh)
KR (1) KR101870052B1 (zh)
CN (1) CN105222773B (zh)
MX (1) MX368765B (zh)
RU (1) RU2636270C2 (zh)
WO (1) WO2017054358A1 (zh)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11709070B2 (en) * 2015-08-21 2023-07-25 Nokia Technologies Oy Location based service tools for video illustration, selection, and synchronization
CN107449439A (zh) * 2016-05-31 2017-12-08 沈阳美行科技有限公司 关联存储、同步展示行车路径和行车照片的方法及系统
CN105973227A (zh) * 2016-06-21 2016-09-28 上海磐导智能科技有限公司 可视化实景导航方法
CN107576332B (zh) * 2016-07-04 2020-08-04 百度在线网络技术(北京)有限公司 一种换乘导航的方法和装置
CN106323289A (zh) 2016-08-23 2017-01-11 北京小米移动软件有限公司 平衡车的控制方法及装置
CN106403978A (zh) 2016-09-30 2017-02-15 北京百度网讯科技有限公司 导航路线生成方法和装置
CN108020231A (zh) * 2016-10-28 2018-05-11 大辅科技(北京)有限公司 一种基于视频的地图系统及导航方法
US10172760B2 (en) * 2017-01-19 2019-01-08 Jennifer Hendrix Responsive route guidance and identification system
US10824870B2 (en) * 2017-06-29 2020-11-03 Accenture Global Solutions Limited Natural language eminence based robotic agent control
DE102019206250A1 (de) * 2019-05-01 2020-11-05 Siemens Schweiz Ag Regelung und Steuerung der Ablaufgeschwindigkeit eines Videos
CN110470293B (zh) * 2019-07-31 2021-04-02 维沃移动通信有限公司 一种导航方法及移动终端
CN110601925B (zh) * 2019-10-21 2021-07-27 秒针信息技术有限公司 一种信息筛选方法、装置、电子设备及存储介质
CN114096803A (zh) * 2019-11-06 2022-02-25 倬咏技术拓展有限公司 用于显示前往目的地的最短路径的3d视频生成
CN111009148A (zh) * 2019-12-18 2020-04-14 斑马网络技术有限公司 车辆导航方法、终端设备及服务器
WO2021226779A1 (zh) * 2020-05-11 2021-11-18 蜂图志科技控股有限公司 一种图像导航方法、装置、设备及可读存储介质
CN111896003A (zh) * 2020-07-28 2020-11-06 广州中科智巡科技有限公司 一种用于实景路径导航的方法及系统
CN112201072A (zh) * 2020-09-30 2021-01-08 姜锡忠 城市交通路径规划方法及系统
CN113012355A (zh) * 2021-03-10 2021-06-22 北京三快在线科技有限公司 一种充电宝租借方法及装置
CN113395462B (zh) * 2021-08-17 2021-12-14 腾讯科技(深圳)有限公司 导航视频生成、采集方法、装置、服务器、设备及介质
CN114370884A (zh) * 2021-12-16 2022-04-19 北京三快在线科技有限公司 导航方法及装置、电子设备及可读存储介质

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2337653A (en) * 1998-05-19 1999-11-24 Pleydell Bouverie David Archie Route calculation and display apparatus
CN101131327A (zh) * 2006-08-25 2008-02-27 联发科技股份有限公司 路线规划触发方法及路线规划装置
US20090254265A1 (en) * 2008-04-08 2009-10-08 Thimmannagari Chandra Reddy Video map technology for navigation
CN101701827A (zh) * 2009-10-20 2010-05-05 深圳市凯立德计算机系统技术有限公司 一种路径指引方法和路径指引设备
CN101719130A (zh) * 2009-11-25 2010-06-02 中兴通讯股份有限公司 街景地图的实现方法和实现系统
CN102012233A (zh) * 2009-09-08 2011-04-13 中华电信股份有限公司 街景视图动态导航系统及其方法
TW201317547A (zh) * 2011-10-18 2013-05-01 Nat Univ Chung Hsing 產生全景實境路徑預覽影片檔之方法及預覽系統
US20140181259A1 (en) * 2012-12-21 2014-06-26 Nokia Corporation Method, Apparatus, and Computer Program Product for Generating a Video Stream of A Mapped Route
CN104819723A (zh) * 2015-04-29 2015-08-05 京东方科技集团股份有限公司 一种定位方法和定位服务器
CN105222802A (zh) * 2015-09-22 2016-01-06 小米科技有限责任公司 导航、导航视频生成方法及装置

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6133853A (en) * 1998-07-30 2000-10-17 American Calcar, Inc. Personal communication and positioning system
JP3735301B2 (ja) * 2002-02-06 2006-01-18 財団法人鉄道総合技術研究所 案内システム
JP2004005493A (ja) * 2002-04-24 2004-01-08 Vehicle Information & Communication System Center 運転者支援情報送信装置及び運転者支援情報受信装置ならびに運転者支援情報提供システム
US20030210806A1 (en) * 2002-05-07 2003-11-13 Hitachi, Ltd. Navigational information service with image capturing and sharing
JP2003329462A (ja) * 2002-05-08 2003-11-19 Hitachi Ltd 映像配信装置および映像情報配信システム
GB0215217D0 (en) * 2002-06-29 2002-08-14 Spenwill Ltd Position referenced multimedia authoring and playback
JP2006521033A (ja) * 2003-03-19 2006-09-14 シンクウェア システムズ コーポレーション 移動通信端末機を用いたナビゲーションシステムおよび方法
JP3725134B2 (ja) 2003-04-14 2005-12-07 株式会社エヌ・ティ・ティ・ドコモ 移動通信システム、移動通信端末、及びプログラム。
US8220020B2 (en) * 2003-09-30 2012-07-10 Sharp Laboratories Of America, Inc. Systems and methods for enhanced display and navigation of streaming video
JP4725375B2 (ja) * 2006-03-14 2011-07-13 株式会社ケンウッド ナビゲーション装置、プログラム及び方法
KR101407210B1 (ko) * 2007-06-28 2014-06-12 엘지전자 주식회사 네비게이션을 이용한 특정 지점의 설정 방법 및 시스템
KR100952248B1 (ko) * 2007-12-26 2010-04-09 엘지전자 주식회사 이동 단말기 및 이를 이용한 내비게이션 방법
KR20090074378A (ko) * 2008-01-02 2009-07-07 삼성전자주식회사 휴대 단말기 및 그 네비게이션 기능 수행 방법
KR20090080589A (ko) * 2008-01-22 2009-07-27 (주)엠앤소프트 무선 통신을 이용한 네비게이션 장치의 위치 안내 방법 및장치
US8032296B2 (en) * 2008-04-30 2011-10-04 Verizon Patent And Licensing Inc. Method and system for providing video mapping and travel planning services
CN101655369A (zh) * 2008-08-22 2010-02-24 环达电脑(上海)有限公司 利用图像识别技术实现定位导航的系统及方法
KR101555552B1 (ko) * 2008-12-29 2015-09-24 엘지전자 주식회사 네비게이션 장치 및 그의 네비게이팅 방법
KR20110002517A (ko) * 2009-07-02 2011-01-10 주식회사 디지헤드 이동통신단말기를 이용한 내비게이션 방법, 이 기능을 수행하는 프로그램을 기록한 컴퓨터로 읽을 수 있는 기록매체, 및 이 기록매체를 탑재한 이동통신단말기
US20110102637A1 (en) * 2009-11-03 2011-05-05 Sony Ericsson Mobile Communications Ab Travel videos
KR101662595B1 (ko) * 2009-11-03 2016-10-06 삼성전자주식회사 사용자 단말 장치, 경로 안내 시스템 및 그 경로 안내 방법
US8838381B1 (en) 2009-11-10 2014-09-16 Hrl Laboratories, Llc Automatic video generation for navigation and object finding
US8762041B2 (en) * 2010-06-21 2014-06-24 Blackberry Limited Method, device and system for presenting navigational information
KR101337446B1 (ko) * 2012-05-08 2013-12-10 박세진 약도를 이용한 내비게이션 시스템의 경로구축 방법
JP2014006190A (ja) * 2012-06-26 2014-01-16 Navitime Japan Co Ltd 情報処理システム、情報処理装置、サーバ、端末装置、情報処理方法および情報処理プログラム
JP6083019B2 (ja) * 2012-09-28 2017-02-22 株式会社ユピテル システム、プログラム、撮像装置、及び、ソフトウェア
JP5949435B2 (ja) * 2012-10-23 2016-07-06 株式会社Jvcケンウッド ナビゲーションシステム、映像サーバ、映像管理方法、映像管理プログラム、及び映像提示端末
US20140372841A1 (en) 2013-06-14 2014-12-18 Henner Mohr System and method for presenting a series of videos in response to a selection of a picture

Also Published As

Publication number Publication date
EP3150964B1 (en) 2019-09-11
CN105222773A (zh) 2016-01-06
JP6387468B2 (ja) 2018-09-05
MX2016004100A (es) 2018-06-22
CN105222773B (zh) 2018-09-21
US10267641B2 (en) 2019-04-23
KR101870052B1 (ko) 2018-07-20
JP2017534888A (ja) 2017-11-24
US20170089714A1 (en) 2017-03-30
KR20170048240A (ko) 2017-05-08
EP3150964A1 (en) 2017-04-05
MX368765B (es) 2019-10-15
RU2016112941A (ru) 2017-10-11
RU2636270C2 (ru) 2017-11-21

Legal Events

Code Title Description
ENP Entry into the national phase (Ref document number: 20167007053; Country of ref document: KR; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2017542265; Country of ref document: JP; Kind code of ref document: A)
WWE Wipo information: entry into national phase (Ref document number: MX/A/2016/004100; Country of ref document: MX)
ENP Entry into the national phase (Ref document number: 2016112941; Country of ref document: RU; Kind code of ref document: A)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15905268; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15905268; Country of ref document: EP; Kind code of ref document: A1)