WO2017054358A1 - Navigation Method and Apparatus - Google Patents
Navigation method and apparatus
- Publication number
- WO2017054358A1 (PCT application PCT/CN2015/099732)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- video
- end point
- path navigation
- target
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3647—Guidance involving output of stored or live camera images or video streams
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7844—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using original textual content or text extracted from visual content or transcript of audio data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
Definitions
- the present disclosure relates to the field of navigation technologies, and in particular, to a navigation method and apparatus.
- navigation is basically performed through maps and positioning information; for indoor navigation, infrared sensing devices are usually installed manually in advance, and those pre-installed devices then locate the user's current position. Based on the user's starting position and ending position, a navigation path is determined, and navigation proceeds from the user's current location along that path.
- the present disclosure provides a navigation method and apparatus.
- a navigation method comprising:
- the acquiring, based on the starting point information and the end point information, of a target path navigation video from a start point to an end point includes:
- the target path navigation video is acquired based on the start position information and the end position information, the start point information includes the start position information, and the end point information includes the end position information.
- the starting point information includes a starting point environment image
- the end point information includes an end point environment image
- the acquiring the target path navigation video based on the start position information and the end position information includes:
- the starting point information includes a starting point environment image
- the end point information includes an end point environment image
- the acquiring the target path navigation video based on the start position information and the end position information includes:
- the acquiring, based on the starting point information and the end point information, of the target path navigation video from the start point to the end point includes:
- the acquiring, based on the starting point information and the end point information, of a target path navigation video from a start point to an end point includes:
- before the acquiring of the target path navigation video from the start point to the end point, the method further includes:
- the acquiring the candidate path navigation video includes:
- the location information is location information corresponding to the target image that is collected by the video capture device during the process of acquiring the mobile video
- the location information is associated with the target image to obtain a candidate path navigation video.
- the location information includes reference information or text information.
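The association of location information with target images described in the lines above can be sketched as follows. The `Waypoint` and `CandidatePathVideo` names and the frame-index representation are illustrative assumptions, not the disclosure's data model:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Waypoint:
    frame_index: int   # target image captured while the device was stationary
    location: str      # associated reference-object or text information

@dataclass
class CandidatePathVideo:
    video_id: str
    waypoints: List[Waypoint] = field(default_factory=list)

    def associate(self, frame_index: int, location: str) -> None:
        """Associate location information with a target image (a frame)."""
        self.waypoints.append(Waypoint(frame_index, location))

    def find_frame(self, location: str) -> Optional[int]:
        """Return the frame associated with a location, or None if absent."""
        for wp in self.waypoints:
            if wp.location == location:
                return wp.frame_index
        return None

video = CandidatePathVideo("corridor-01")
video.associate(0, "Main Entrance")
video.associate(1800, "Elevator Hall")
print(video.find_frame("Elevator Hall"))  # 1800
```

A video annotated this way is a candidate path navigation video: any annotated location can later serve as a start or end point.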
- the method further includes:
- a navigation method comprising:
- the target path navigation video is played in response to the received navigation triggering operation.
- the starting point information includes a starting point environment image
- the end point information includes an end point environment image
- the obtaining of the start point information and the end point information includes:
- the start environment image and the end environment image are acquired.
- the playing the target path navigation video includes:
- the playing the target path navigation video includes:
- the route confirmation prompt information is used to prompt the user to confirm whether the target path is deviated;
- a route re-planning request is sent to the server, so that the server acquires a new target path navigation video based on the path re-planning request.
- the method further includes:
- transmitting the mobile video and the location information to the server, causing the server to associate the mobile video with a target image, where the location information is the location information corresponding to the target image, collected while the device is in a stationary state during the process of collecting the mobile video.
- the method further includes:
- a navigation device comprising:
- a first receiving module configured to receive start point information and end point information sent by the target device
- a first acquiring module configured to acquire, according to the starting point information and the end point information received by the first receiving module, a target path navigation video from a starting point to an ending point;
- a first sending module configured to send, to the target device, the target path navigation video acquired by the acquiring module.
- the first acquiring module includes:
- a first acquiring unit configured to acquire the target path navigation video based on the start position information and the end position information, where the start point information includes the start position information, and the end point information includes the end position information.
- the starting point information includes a starting point environment image
- the end point information includes an end point environment image
- the first obtaining unit includes:
- a first extracting subunit configured to extract starting point reference information from the starting point environment image, and extract end point reference object information from the end point environment image;
- a first determining subunit configured to determine the starting point reference information extracted by the first extracting subunit as starting point position information, and determine the end point reference object information extracted by the first extracting subunit as end point position information;
- a first acquiring subunit configured to acquire the target path navigation video based on the starting point reference information and the end point reference information determined by the first determining subunit.
- the starting point information includes a starting point environment image
- the end point information includes an end point environment image
- the first obtaining unit includes:
- a second extracting subunit configured to extract starting point text information from the starting point environment image, and extract end point text information from the end point environment image;
- a second determining subunit configured to determine the start point text information extracted by the second extracting subunit as starting point position information, and determine the end point text information extracted by the second extracting subunit as end point position information;
- a second acquiring subunit configured to acquire the target path navigation video based on the start point text information and the end point text information determined by the second determining subunit.
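As a sketch of how extracted text information could serve as position information, with the OCR step injected as a function (in a real system something like Tesseract would fill that role). The longest-string heuristic for picking the most distinctive label is purely illustrative:

```python
from typing import Callable, List, Optional

def position_from_environment_image(
    image: object,
    ocr: Callable[[object], List[str]] = lambda img: [],
) -> Optional[str]:
    """Determine position information from an environment image by extracting
    text, as in the claim where start/end text information is determined as
    start/end position information. `ocr` is an injected recognizer; keeping
    the longest recognized string as the most distinctive label is an
    assumption, not part of the disclosure."""
    texts = ocr(image)
    return max(texts, key=len) if texts else None

# A fake OCR result standing in for text recognized in a start-point image.
fake_ocr = lambda img: ["EXIT", "Conference Room 3A"]
print(position_from_environment_image(None, fake_ocr))  # Conference Room 3A
```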
- the first acquiring module includes:
- an intercepting unit configured to intercept the target path navigation video from a stored candidate path navigation video based on the start point information and the end point information.
- the first acquiring module includes:
- a second acquiring unit configured to acquire the target path navigation video from the stored plurality of candidate path navigation videos based on the start point information and the end point information.
- the device further includes:
- the second obtaining module is configured to acquire a candidate path navigation video.
- the second acquiring module includes:
- a third acquiring unit configured to acquire mobile video and location information, where the location information is location information corresponding to the target image that is collected when the video capture device is in a static state during the process of acquiring the mobile video;
- an association unit configured to associate the location information acquired by the third acquiring unit with the target image to obtain a candidate path navigation video.
- the location information includes reference information or text information.
- the device further includes:
- a second receiving module configured to receive a path re-planning request sent by the target device
- a third acquiring module configured to acquire a new target path navigation video based on the path re-planning request received by the second receiving module
- a second sending module configured to send the new target path navigation video acquired by the third acquiring module to the target device, so that the target device performs navigation based on the new target path navigation video.
- a navigation apparatus comprising:
- a first obtaining module configured to acquire start point information and end point information
- a first sending module configured to send, to the server, the starting point information and the end point information acquired by the first acquiring module
- a receiving module configured to receive a target path navigation video from a start point to an end point sent by the server, where the target path navigation video is obtained by the server based on the start point information and the end point information sent by the first sending module;
- a playing module configured to play the target path navigation video received by the receiving module in response to the received navigation triggering operation.
- the starting point information includes a starting point environment image
- the end point information includes an end point environment image
- the first obtaining module includes:
- an obtaining unit configured to acquire a starting environment image and an ending environment image when receiving the navigation instruction.
- the playing module includes:
- a detecting unit configured to detect a current moving speed
- a playing unit configured to play the target path navigation video based on the moving speed detected by the detecting unit, so that the playing speed of the target path navigation video is equal to the moving speed.
- the playing module includes:
- a display unit configured to display route confirmation prompt information when playback reaches a target image position in the target path navigation video, where the route confirmation prompt information is used to prompt the user to confirm whether the target path is deviated from;
- a sending unit configured to: when a route re-planning instruction is received based on the route confirmation prompt information displayed by the display unit, send a route re-planning request to the server, so that the server acquires a new target path navigation video based on the path re-planning request.
- the device further includes:
- a second acquiring module configured to acquire mobile video and location information
- a second sending module configured to send the mobile video and the location information received from the second acquiring module to the server, so that the server associates the mobile video with a target image, where the location information is the location information corresponding to the target image, acquired while the device is in a stationary state during the process of collecting the mobile video.
- the device further includes:
- a third obtaining module configured to acquire mobile video and location information
- an association module configured to associate the mobile video acquired by the third acquiring module with a target image to obtain a candidate path navigation video, where the location information is the location information corresponding to the target image, collected while the device is in a stationary state;
- a third sending module configured to send the candidate path navigation video associated with the association module to the server.
- a navigation apparatus comprising:
- a memory for storing processor executable instructions
- a processor configured to:
- a navigation apparatus comprising:
- a memory for storing processor executable instructions
- a processor configured to:
- the target path navigation video is played in response to the received navigation triggering operation.
- the technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: the start point information and the end point information sent by the target device are received, a target path navigation video from the start point to the end point is acquired based on the start point information and the end point information, and the target path navigation video is sent to the target device so that the target device navigates based on it. Navigation thereby becomes more intuitive and the navigation threshold is lowered; there is no need to manually install dedicated infrared sensing devices, so the solution is universal and adaptable and saves a great deal of physical equipment and labor.
- FIG. 1 is a flow chart showing a navigation method according to an exemplary embodiment.
- FIG. 2 is a flow chart showing a navigation method according to an exemplary embodiment.
- FIG. 3 is a flowchart of a navigation method according to an exemplary embodiment.
- FIG. 4 is a block diagram of a navigation device, according to an exemplary embodiment.
- FIG. 5 is a block diagram of a first acquisition module, according to an exemplary embodiment.
- FIG. 6 is a block diagram of a first acquisition unit, according to an exemplary embodiment.
- FIG. 7 is a block diagram of a first obtaining unit, according to an exemplary embodiment.
- FIG. 8 is a block diagram of a first acquisition module, according to an exemplary embodiment.
- FIG. 9 is a block diagram of a first acquisition module, according to an exemplary embodiment.
- FIG. 10 is a block diagram of a navigation device, according to an exemplary embodiment.
- FIG. 11 is a block diagram of a second acquisition module, according to an exemplary embodiment.
- FIG. 12 is a block diagram of a navigation device, according to an exemplary embodiment.
- FIG. 13 is a block diagram of a navigation device, according to an exemplary embodiment.
- FIG. 14 is a block diagram of a first acquisition module, according to an exemplary embodiment.
- FIG. 15 is a block diagram of a playback module, according to an exemplary embodiment.
- FIG. 16 is a block diagram of a playback module, according to an exemplary embodiment.
- FIG. 17 is a block diagram of a navigation device, according to an exemplary embodiment.
- FIG. 18 is a block diagram of a navigation device, according to an exemplary embodiment.
- FIG. 19 is a block diagram of an apparatus for navigation, according to an exemplary embodiment.
- FIG. 20 is a block diagram of an apparatus for navigation, according to an exemplary embodiment.
- FIG. 1 is a flowchart of a navigation method according to an exemplary embodiment. As shown in FIG. 1, the navigation method is used in a server and includes the following steps.
- in step 101, start point information and end point information transmitted by the target device are received.
- in step 102, a target path navigation video from the start point to the end point is acquired based on the start point information and the end point information.
- in step 103, the target path navigation video is sent to the target device.
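Steps 101 to 103 can be sketched in miniature as follows. Each candidate is represented here by an assumed structure `{"id": ..., "waypoints": {location: frame_index}}`, and matching by exact location string is a naive placeholder for the image/text matching the disclosure describes:

```python
from typing import Dict, List, Optional

def handle_navigation_request(
    start_info: str, end_info: str, candidates: List[dict]
) -> Optional[Dict[str, object]]:
    """Receive start/end information, acquire the target path navigation
    video from the stored candidates, and return it (here, as its clip
    boundaries within a candidate video)."""
    for video in candidates:
        wps = video["waypoints"]
        if start_info in wps and end_info in wps and wps[start_info] < wps[end_info]:
            # "intercept" the target video between the two annotated frames
            return {"id": video["id"],
                    "start_frame": wps[start_info],
                    "end_frame": wps[end_info]}
    return None  # no stored candidate covers this request

stored = [{"id": "corridor-01",
           "waypoints": {"Main Entrance": 0, "Elevator Hall": 1800, "Gate B": 3600}}]
print(handle_navigation_request("Main Entrance", "Gate B", stored))
```

The `start < end` check encodes that the stored video must traverse the start before the end; a request in the opposite direction would need a different candidate (or a reversed one).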
- the start point information and the end point information sent by the target device are received; based on them, the target path navigation video from the start point to the end point is acquired and sent to the target device, so that the target device navigates based on that video. Navigation is thereby more intuitive and the navigation threshold is lowered; no dedicated infrared sensing devices need to be installed manually, which makes the approach universal and adaptable and saves a great deal of physical equipment and labor.
- the target path navigation video from the start point to the end point is obtained based on the start point information and the end point information, including:
- the target path navigation video is acquired based on the start position information and the end position information, and the start point information includes the start position information, and the end point information includes the end position information.
- the start point information includes a start point environment image
- the end point information includes an end point environment image
- Obtaining the target path navigation video based on the start position information and the end position information including:
- the target path navigation video is acquired based on the starting point reference information and the end point reference information.
- the start point information includes a start point environment image
- the end point information includes an end point environment image
- Obtaining the target path navigation video based on the start position information and the end position information including:
- the target path navigation video is acquired based on the start point text information and the end point text information.
- the target path navigation video from the start point to the end point is obtained based on the start point information and the end point information, including:
- the target path navigation video from the start point to the end point is obtained based on the start point information and the end point information, including:
- before the acquiring of the target path navigation video from the start point to the end point based on the start point information and the end point information, the method further includes:
- acquiring a candidate path navigation video includes:
- the location information is location information corresponding to the target image collected by the video capture device during the process of acquiring the mobile video
- the location information is associated with the target image to obtain a candidate path navigation video.
- the location information includes reference information or text information.
- the method further includes:
- FIG. 2 is a flowchart of a navigation method according to an exemplary embodiment. As shown in FIG. 2, the navigation method is used in a target device, and includes the following steps.
- in step 201, start point information and end point information are acquired.
- in step 202, the start point information and the end point information are sent to the server.
- in step 203, the target path navigation video from the start point to the end point sent by the server is received, where the target path navigation video is obtained by the server based on the start point information and the end point information.
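The target-device side of this flow can be sketched with the device camera, the server round-trip, and the player injected as stand-in callables (all three are assumptions; the disclosure does not prescribe these interfaces):

```python
from typing import Callable, Optional

def navigate(
    capture_env_image: Callable[[str], str],
    request_video: Callable[[str, str], Optional[str]],
    play: Callable[[str], None],
) -> bool:
    """Steps 201-203 from the target device's side: capture start and end
    environment images, send them to the server, and play the returned
    target path navigation video."""
    start_info = capture_env_image("start")
    end_info = capture_env_image("end")
    video = request_video(start_info, end_info)
    if video is None:
        return False  # the server found no path; re-planning would be needed
    play(video)       # in the claim, playback happens on a navigation trigger
    return True

played: list = []
ok = navigate(lambda which: f"{which}-image",
              lambda s, e: f"video[{s}->{e}]",
              played.append)
print(ok, played)
```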
- the start point information and the end point information are sent to the server, the target path navigation video from the start point to the end point acquired by the server based on them is received, and navigation is performed based on that video. Navigation is thereby more intuitive and the navigation threshold is lowered; no dedicated infrared sensing devices need to be installed manually, which makes the approach universal and adaptable and saves a great deal of physical equipment and labor.
- the start point information includes a start point environment image
- the end point information includes an end point environment image
- the obtaining of the start point information and the end point information includes:
- the start environment image and the end environment image are acquired.
- playing the target path navigation video includes:
- the current moving speed is detected, and the target path navigation video is played such that the playback speed of the target path navigation video is equal to the moving speed.
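Matching the playback speed to the user's moving speed amounts to scaling the playback rate. A minimal sketch, assuming a recorded walking speed of 1.4 m/s (a typical pace; a real system would store the capture speed per video) and clamping the rate so the video stays watchable:

```python
def playback_rate(moving_speed_mps: float, recorded_speed_mps: float = 1.4) -> float:
    """Scale the navigation video's playback rate so that the apparent
    travel speed in the video matches the user's detected moving speed.
    The 1.4 m/s default and the 0.25x-4x clamp are assumptions."""
    if recorded_speed_mps <= 0:
        raise ValueError("recorded speed must be positive")
    rate = moving_speed_mps / recorded_speed_mps
    return max(0.25, min(rate, 4.0))

print(playback_rate(0.7))  # walking at half the recorded pace -> 0.5
print(playback_rate(2.8))  # walking twice as fast -> 2.0
```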
- playing the target path navigation video includes:
- when playback reaches a target image position in the target path navigation video, route confirmation prompt information is displayed, where the route confirmation prompt information is used to prompt the user to confirm whether the target path is deviated from;
- a route re-planning request is sent to the server, so that the server acquires a new target path navigation video based on the path re-planning request.
- the method further includes:
- the mobile video and the location information are sent to the server, causing the server to associate the mobile video with the location information.
- the method further includes:
- the candidate path navigation video is sent to the server.
- FIG. 3 is a flowchart of a navigation method according to an exemplary embodiment. As shown in FIG. 3, the method includes the following steps.
- in step 301, the target device acquires the start point information and the end point information, and transmits the start point information and the end point information to the server.
- the start point information and the end point information may each be text information, image information, voice information, or the like, or a combination of at least two such kinds of information, which is not specifically limited in this embodiment of the present disclosure.
- the start point information and the end point information are image information, that is, the start point information includes a start point environment image, and the end point information includes an end point environment image.
- the target device may acquire the start point environment image and the end point environment image, determine the start point environment image as the start point information and the end point environment image as the end point information, and transmit the start point information and the end point information to the server.
- when the target device acquires the starting environment image, it may perform image capture of the environment at its current location to obtain the starting environment image.
- in order to improve the effective utilization of the starting environment image, when the target device captures the environment at the current location, the image may be captured at a position where text information or a reference object exists, to obtain the starting environment image.
- the text information is conspicuous text at the current location of the target device and is used to identify that location; the reference object may be a building, a bus stop, or the like, which is not specifically limited here.
- when the target device obtains the end point environment image, the user can search for it directly in an image library stored on the target device, or the target device can obtain the end point environment image from the server.
- when the target device acquires the end point environment image from the server, the target device may receive end point image description information input by the user and send it to the server; when the server receives the end point image description information, it may acquire at least one image matching that description and transmit the at least one image to the target device. When the target device receives the at least one image, it displays them, and when a selection instruction for a specified image is received, the specified image, which is any one of the at least one image, may be determined as the end point environment image.
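The server's matching of end point image description information against stored images can be sketched with simple token overlap (a stand-in for whatever retrieval a real deployment would use; the `image_index` structure mapping image id to caption is an assumption):

```python
from typing import Dict, List

def match_images(description: str, image_index: Dict[str, str]) -> List[str]:
    """Return ids of stored images whose captions share words with the
    user's end point image description, best match first. Token overlap is
    an illustrative scoring, not the disclosure's method."""
    query = set(description.lower().split())
    scored = []
    for image_id, caption in image_index.items():
        overlap = len(query & set(caption.lower().split()))
        if overlap:
            scored.append((overlap, image_id))
    scored.sort(reverse=True)
    return [image_id for _, image_id in scored]

index = {"img-01": "coffee shop near north entrance",
         "img-02": "elevator hall second floor",
         "img-03": "north entrance gate"}
print(match_images("north entrance", index))
```

The returned list is what the device would display for the user's selection instruction.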
- the end point image description information may be not only text information, voice information, and the like, but also a combination of at least two pieces of information such as text information and voice information, which is not specifically limited in the embodiment of the present disclosure.
- a selection instruction for the specified image is used to select the specified image from the at least one image; the selection instruction may be triggered by the user through a specified operation, which may be a click operation, a sliding operation, a voice operation, or the like, and is not specifically limited in the embodiments of the present disclosure.
- when the target device searches for the end point environment image directly in its own stored image library, the image library must be stored on the target device; when the library is large it occupies considerable storage space, and every device that needs navigation has to store it. When the target device obtains the end point environment image from the server, the image library is stored on the server, so all devices that need navigation can obtain images from it, which saves device storage space, but each device must interact with the server, increasing the number of interactions and the interaction time. Therefore, in practice, different acquisition modes may be selected for different needs, which is not specifically limited in this embodiment of the present disclosure.
- the target device may be smart glasses, a smart phone, a smart watch, or the like, which is not specifically limited in this embodiment of the present disclosure.
- the navigation instruction is used to start navigation; it may be triggered by the user through a specified operation, which is not specifically limited in the embodiment of the present disclosure.
- the navigation method provided by the embodiment of the present disclosure can be applied not only to indoor navigation but also to outdoor navigation, and the embodiment of the present disclosure also does not specifically limit this.
- indoor navigation navigates the indoor area at the current location, and that indoor area generally needs to be determined from the current position, which can in turn be determined from the geographical location information of the current location. Therefore, in order to improve the accuracy of indoor navigation, the target device can determine the geographical location information of the current location and send it to the server.
- outdoor navigation realizes navigation between two different outdoor locations, which are generally determined by geographical location information; that is, outdoor navigation needs starting geographical location information and ending geographical location information. Therefore, in order to improve the accuracy of outdoor navigation, the target device can determine the geographical location information of the current location as the starting geographical location information, determine the geographical location information of the destination as the ending geographical location information, and send both to the server.
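Given starting and ending geographical location information as coordinates, the server could check whether a stored outdoor path's endpoints lie close enough to the request using a great-circle distance. A standard haversine sketch (the 6 371 000 m Earth radius is the usual approximation; the threshold use is an assumption):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two (lat, lon) points,
    usable to compare requested start/end geographical location information
    against the annotated endpoints of stored outdoor path videos."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

# About 1.1 km between two illustrative points 0.01 degrees of latitude apart.
d = haversine_m(39.9042, 116.4074, 39.9142, 116.4074)
print(round(d))
```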
- the target device may receive end position description information input by the user and send it to the server. When the server receives the end position description information, it obtains at least one piece of geographic location information matching that description and transmits it to the target device; when the target device receives the at least one piece of geographic location information, it may display it, and when a selection instruction for specified geographical location information is received, the specified geographical location information, which is any one of the at least one piece, may be determined as the ending geographical location information.
- when the target device determines the geographical location information of the current location, it may do so by GPS (Global Positioning System) positioning, by manual input, or by a combination of the two. The manually input geographical location information may be text information, voice information, or a combination of both, which is not specifically limited in the embodiments of the present disclosure.
- the selection instruction for the specified geographical location information is used to select that piece of information from the at least one piece of geographical location information. It may be triggered by the user through a specified operation, which is not specifically limited in the embodiments of the present disclosure.
- step 302: when the server receives the start point information and the end point information, the server acquires a target path navigation video from the start point to the end point based on the start point information and the end point information.
- the embodiments of the present disclosure navigate through a navigation video; that is, when the server receives the start point information and the end point information, it needs to acquire the target path navigation video from the start point to the end point based on that information.
- when the start point information includes start position information and the end point information includes end position information, the server may obtain the target path navigation video based on the start position information and the end position information.
- the start point information mentioned in step 301 above may include a start point environment image, and the end point information may include an end point environment image.
- the start position information and the end position information may be not only reference object information but also text information, and of course may also be GPS information; the embodiments of the present disclosure take the start position information and the end position information being reference object information or text information as an example.
- the manner in which the server obtains the target path navigation video based on the start position information and the end position information may include the following two ways:
- in the first way, the server extracts the start point reference object information from the start point environment image and the end point reference object information from the end point environment image, determines the start point reference object information as the start position information and the end point reference object information as the end position information, and acquires the target path navigation video based on the start point reference object information and the end point reference object information.
- when the server acquires the target path navigation video based on the start point reference object information and the end point reference object information, it may either intercept the target path navigation video from a stored single candidate path navigation video, or acquire the target path navigation video from a plurality of stored candidate path navigation videos.
- when the server intercepts the target path navigation video from a stored candidate path navigation video based on the start point reference object information and the end point reference object information, it may parse multiple frames of video images from the candidate path navigation video and extract one piece of candidate reference object information from each frame, obtaining a plurality of pieces of candidate reference object information. From these, it selects the candidate reference object information identical to the start point reference object information and determines the video frame in which it is located as the start video frame, and selects the candidate reference object information identical to the end point reference object information and determines the video frame in which it is located as the end video frame.
- the server then intercepts the video between the start video frame and the end video frame from the candidate path navigation video to obtain the target path navigation video.
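- as a sketch, the interception step can be expressed as follows (Python is an illustrative choice; the per-frame reference list, function name, and data shapes are assumptions for illustration, not part of the patent):

```python
def intercept_target_video(frames, frame_refs, start_ref, end_ref):
    """Intercept the target path navigation video: locate the start video
    frame (the first frame whose candidate reference/text information
    matches the start point) and the end video frame, then return the
    frames between them, inclusive.

    frames     -- the video frames of one candidate path navigation video
    frame_refs -- candidate reference/text information extracted per frame
    """
    start_idx = frame_refs.index(start_ref)   # start video frame
    end_idx = frame_refs.index(end_ref)       # end video frame
    return frames[start_idx:end_idx + 1]      # the intercepted video
```

The same slicing applies whether the per-frame information is reference object information or recognized text information.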
- when the server acquires the target path navigation video from a plurality of stored candidate path navigation videos based on the start point reference object information and the end point reference object information, each candidate path navigation video includes start point candidate reference object information and end point candidate reference object information. The server may obtain the start point candidate reference object information and the end point candidate reference object information of the plurality of candidate path navigation videos; select, from the plurality of candidate path navigation videos, the candidate path navigation video whose start point candidate reference object information is the same as the start point reference object information; and determine whether the end point candidate reference object information of the selected candidate path navigation video is the same as the end point reference object information. When it is not, the server takes the end point candidate reference object information of the selected candidate path navigation video as the new start point reference object information and returns to the selection step, repeating until a selected video's end point candidate reference object information matches the end point reference object information.
- the server may select at least one candidate path navigation video on the target path from the plurality of candidate path navigation videos, and compose the at least one candidate path navigation video into the target path navigation video.
- in the second way, the server extracts the start point text information from the start point environment image and the end point text information from the end point environment image, determines the start point text information as the start position information and the end point text information as the end position information, and obtains the target path navigation video based on the start point text information and the end point text information.
- when doing so, the server may perform character recognition on the start point environment image to obtain the start point text information and on the end point environment image to obtain the end point text information, and then, based on the start point text information and the end point text information, either intercept the target path navigation video from a stored single candidate path navigation video or acquire it from a plurality of stored candidate path navigation videos.
- when the server intercepts the target path navigation video from a stored candidate path navigation video based on the start point text information and the end point text information, it may parse multiple frames of video images from the candidate path navigation video and extract one piece of candidate text information from each frame, obtaining a plurality of pieces of candidate text information. From these, it selects the candidate text information identical to the start point text information and determines the video frame in which it is located as the start video frame, and selects the candidate text information identical to the end point text information and determines the video frame in which it is located as the end video frame; the server then intercepts the video between the start video frame and the end video frame from the candidate path navigation video to obtain the target path navigation video.
- each of the plurality of candidate path navigation videos may include start point candidate text information and end point candidate text information. Therefore, when the server acquires the target path navigation video from the plurality of candidate path navigation videos based on the start point text information and the end point text information, it may obtain the start point candidate text information and the end point candidate text information of the plurality of candidate path navigation videos; select, from the plurality of candidate path navigation videos, the candidate path navigation video whose start point candidate text information is the same as the start point text information; and determine whether the end point candidate text information of the selected candidate path navigation video is the same as the end point text information. When it is not, the server takes the end point candidate text information of the selected candidate path navigation video as the new start point text information and returns to the selection step, repeating until a selected video's end point candidate text information matches the end point text information.
- the server may select at least one candidate path navigation video on the target path from the plurality of candidate path navigation videos, and compose the at least one candidate path navigation video into the target path navigation video.
- for example, the server performs character recognition on the start point environment image and obtains start point text information A, and performs character recognition on the end point environment image and obtains end point text information F.
- suppose the plurality of candidate path navigation videos acquired by the server are navigation video 21, navigation video 22, navigation video 23, navigation video 24, and navigation video 25, where: navigation video 21 has start point candidate text information A and end point candidate text information B; navigation video 22 has start point candidate text information D and end point candidate text information F; navigation video 23 has start point candidate text information B and end point candidate text information D; navigation video 24 has start point candidate text information G and end point candidate text information H; and navigation video 25 has start point candidate text information M and end point candidate text information N.
- based on start point text information A, the server selects from the five candidate path navigation videos the one whose start point candidate text information matches, namely navigation video 21. Since the end point candidate text information B of navigation video 21 differs from end point text information F, the server takes B as the new start point text information and selects navigation video 23, whose start point candidate text information is B. Since the end point candidate text information D of navigation video 23 also differs from F, the server takes D as the new start point text information and selects navigation video 22, whose start point candidate text information is D. The end point candidate text information F of navigation video 22 is the same as end point text information F, so the search ends.
- the server may therefore determine navigation video 21, navigation video 23, and navigation video 22 as the at least one candidate path navigation video on the target path, and compose the target path navigation video from them.
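- the iterative selection illustrated above can be sketched in Python (an illustrative language choice; the dict layout and names are assumptions, not fixed by the patent):

```python
def chain_candidate_videos(videos, start_info, end_info):
    """Greedily chain candidate path navigation videos: pick the video
    whose start candidate text information equals the current position,
    then treat its end candidate text information as the new start,
    until the end point text information is reached."""
    path, current = [], start_info
    while current != end_info:
        # the candidate whose start candidate info equals `current`
        name = next(n for n, (s, _) in videos.items() if s == current)
        path.append(name)
        current = videos[name][1]  # its end label becomes the new start
    return path

# The example from the text: start point text A, end point text F
videos = {
    "navigation video 21": ("A", "B"),
    "navigation video 22": ("D", "F"),
    "navigation video 23": ("B", "D"),
    "navigation video 24": ("G", "H"),
    "navigation video 25": ("M", "N"),
}
```

With this data, `chain_candidate_videos(videos, "A", "F")` yields navigation videos 21, 23, and 22 in order, matching the walkthrough above.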
- in practical application, the server can obtain the target path navigation video by either of the above two ways alone, or combine the two ways to improve the accuracy of acquiring the target path navigation video.
- it should be noted that the start point information and the end point information may be not only text information or image information but also GPS information and the like. Therefore, besides obtaining the target path navigation video through the start position information and the end position information as above, the server may also, directly based on the start point information and the end point information, intercept the target path navigation video from a stored single candidate path navigation video, or acquire the target path navigation video from a plurality of stored candidate path navigation videos.
- the method of intercepting the target path navigation video from a stored single candidate path navigation video based on the start point information and the end point information may be the same as the foregoing method, and the method of acquiring the target path navigation video from a plurality of stored candidate path navigation videos may likewise be the same as above, which is not described in detail again in the embodiments of the present disclosure.
- the server may acquire the candidate path navigation video before acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information.
- specifically, the server may obtain a mobile video and position information, where the position information corresponds to a target image collected while the video capture device is stationary during the collection of the mobile video; the server then associates the position information with the target image to obtain a candidate path navigation video.
- the location information may include reference information or text information.
- the location information may also include other information, such as geographic location information, which is not specifically limited in the embodiment of the present disclosure.
- further, the server may also, based on the starting geographical location information and the ending geographical location information, intercept from a stored single candidate path navigation video the navigation video between the starting geographical location and the ending geographical location, and then intercept the target path navigation video from the intercepted navigation video according to the above method; in this case the candidate path navigation video is associated with the geographical location information of each video frame.
- of course, the server may also, based on the starting geographical location information and the ending geographical location information, select from a plurality of candidate path navigation videos those between the starting geographical location and the ending geographical location, and obtain the target path navigation video from the selected candidate path navigation videos according to the above method.
- the plurality of candidate path navigation videos may be candidate path navigation videos of a plurality of locations; that is, they may be the candidate path navigation videos corresponding to a plurality of pieces of geographical location information.
- when the candidate path navigation videos corresponding to a plurality of pieces of geographical location information are stored, a correspondence between the geographical location information and the candidate path navigation videos is generally stored. Since the venue identified by each piece of geographical location information may include multiple indoor locations, in order to navigate indoors within that venue and improve the accuracy of indoor navigation, when the server receives the geographical location information of the current location of the target device, it may obtain, from the correspondence between geographical location information and candidate path navigation videos, the plurality of candidate path navigation videos corresponding to that geographical location information, and then acquire the target path navigation video from them.
- it should be noted that the mobile video and position information acquired by the server may be sent by a plurality of video capture devices or by a single video capture device, and when the position information is geographical location information, the position information corresponding to each target image in a mobile video is the same; therefore, the server can receive the mobile video and the geographical location information sent by at least one video capture device.
- for each video capture device, the server may identify a plurality of target images from the mobile video sent by that device; decompose the mobile video based on the plurality of target images to obtain a plurality of candidate path navigation videos; and store the plurality of candidate path navigation videos based on the geographical location information of the video capture device.
- since the mobile video can include a plurality of target images, and each target image is obtained by shooting an indoor location where reference object information or text information exists, the plurality of target images identified from the mobile video can distinguish a plurality of indoor locations; that is, the mobile video can identify a plurality of indoor locations.
- the server may decompose the mobile video based on the plurality of target images in the mobile video to obtain a plurality of candidate path navigation videos.
- since a video may include multiple frames of video images, and at least two consecutive frames with identical images among them may be determined as a target image, when the server identifies the plurality of target images from the mobile video sent by the video capture device, it may acquire the multiple frames of video images included in the mobile video and compare adjacent frames; when at least two consecutive frames with identical images exist, the server may determine those frames as a target image, thereby identifying the plurality of target images from the mobile video.
- in addition, the server may determine the similarity of at least two adjacent frames among the multiple frames of video images and, when that similarity is greater than a specified similarity, determine the at least two adjacent frames as a target image of the mobile video.
- the specified similarity may be set in advance; for example, it may be 80%, 90%, and the like, which is not specifically limited in the embodiments of the present disclosure.
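- the stationary-frame detection described above can be sketched as follows; the similarity function is caller-supplied since the patent does not fix a similarity measure, and the default threshold of 0.9 reflects one of the example values:

```python
def detect_target_images(frames, similarity, specified_similarity=0.9):
    """Identify target images: runs of at least two adjacent frames whose
    pairwise similarity exceeds the specified similarity.  Each run marks
    a spot where the capture device stayed still, i.e. one target image."""
    runs, run = [], [0]
    for i in range(1, len(frames)):
        if similarity(frames[i - 1], frames[i]) > specified_similarity:
            run.append(i)          # frame i continues the stationary run
        else:
            if len(run) >= 2:      # at least two similar consecutive frames
                runs.append(run)
            run = [i]              # start a new run at frame i
    if len(run) >= 2:
        runs.append(run)
    return runs                    # list of frame-index runs, one per target image
```

With an exact-match similarity function this reduces to the "identical and consecutive frames" criterion; a real deployment would use an image-similarity measure instead.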
- the mobile video sent by the video capture device is mobile video 1
- the server obtains the multiple frames of video images included in mobile video 1 as image 1, image 2, image 3, ..., image 50, and compares adjacent frames. It determines that image 1, image 2, and image 3 are consecutive and identical; that image 8 and image 9 are consecutive and identical; that image 15, image 16, and image 17 are consecutive and identical; that image 22, image 23, and image 24 are consecutive and identical; that image 30 and image 31 are consecutive and identical; that image 43, image 44, and image 45 are consecutive and identical; and that image 49 and image 50 are consecutive and identical.
- accordingly, the server determines image 1 to image 3 as the first target image of the mobile video, image 8 and image 9 as the second target image, image 15 to image 17 as the third target image, image 22 to image 24 as the fourth target image, image 30 and image 31 as the fifth target image, image 43 to image 45 as the sixth target image, and image 49 and image 50 as the seventh target image.
- the operation in which the server decomposes the mobile video sent by the video capture device based on the plurality of target images to obtain the plurality of candidate path navigation videos may be: performing character recognition on the plurality of target images respectively to obtain a plurality of pieces of key text information, and decomposing the mobile video based on the plurality of pieces of key text information; or performing recognition on the plurality of target images respectively to obtain a plurality of pieces of key reference object information, and decomposing the mobile video sent by the video capture device based on the plurality of pieces of key reference object information to obtain the plurality of candidate path navigation videos.
- each of the plurality of candidate path navigation videos includes start point candidate reference object information and end point candidate reference object information, and the end point candidate reference object information of a first candidate path navigation video among them is the same as the start point candidate reference object information of a second candidate path navigation video, where the first candidate path navigation video and the second candidate path navigation video are any candidate path navigation videos among the plurality, and the second candidate path navigation video is the next candidate path navigation video adjacent to the first candidate path navigation video.
- when the server stores the plurality of candidate path navigation videos based on the geographical location information of the video capture device, it may store the geographical location information of the video capture device and the plurality of candidate path navigation videos in a correspondence between geographical location information and candidate path navigation videos.
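- the decomposition by key text information can be sketched as follows (an illustrative Python sketch; the function name and list-of-pairs representation are assumptions): each pair of consecutive key text labels becomes one candidate segment, which guarantees that the end candidate information of each segment equals the start candidate information of the next.

```python
def decompose_by_key_text(key_texts):
    """Split a mobile video into candidate path navigation segments:
    consecutive key text labels (recognized from consecutive target
    images) become (start, end) pairs, so the end candidate text
    information of one segment equals the start information of the next."""
    return [(key_texts[i], key_texts[i + 1])
            for i in range(len(key_texts) - 1)]
```

Applied to key texts A through G, this produces the six segments of the example that follows.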
- for example, the plurality of target images identified by the server from mobile video 1 are image 1, image 8, image 16, image 23, image 30, image 44, and image 49. The server performs character recognition on these images respectively and obtains: key text information A for image 1, B for image 8, C for image 16, D for image 23, E for image 30, F for image 44, and G for image 49.
- the plurality of candidate path navigation videos obtained by decomposing mobile video 1 are navigation video 1, navigation video 2, navigation video 3, navigation video 4, navigation video 5, and navigation video 6, where: navigation video 1 has start point text information A and end point text information B; navigation video 2 has start point text information B and end point text information C; navigation video 3 has start point text information C and end point text information D; navigation video 4 has start point text information D and end point text information E; navigation video 5 has start point text information E and end point text information F; and navigation video 6 has start point text information F and end point text information G.
- suppose the geographical location information of the video capture device is geographical location information 1; the server can then store geographical location information 1 and the plurality of candidate path navigation videos in the correspondence between geographical location information and candidate path navigation videos shown in Table 1 below.
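- a minimal sketch of that correspondence (cf. Table 1); the dict layout and function names are assumptions for illustration:

```python
# correspondence between geographical location information and
# candidate path navigation videos (Table 1, sketched as a dict)
correspondence = {}

def store_candidate_videos(geo_info, videos):
    """Store candidate path navigation videos under their location."""
    correspondence.setdefault(geo_info, []).extend(videos)

def candidate_videos_for(geo_info):
    """Indoor navigation at a location searches only these candidates."""
    return correspondence.get(geo_info, [])
```

When the server later receives the geographical location information of the target device's current location, it looks up only the candidate videos stored under that location.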
- it should be noted that the target device may be any one of the plurality of video capture devices, or a device other than them; that is, the target device may be a video capture device or a device other than a video capture device, which is not specifically limited in the embodiments of the present disclosure.
- when the target device is a video capture device, the target device may also acquire a mobile video and position information and send them to the server, so that the server associates the mobile video with the position information.
- the server may further decompose the mobile video into a plurality of candidate path navigation videos, and store the plurality of candidate path navigation videos.
- of course, the target device may also acquire the mobile video and the position information, associate them itself to obtain a candidate path navigation video, and send the candidate path navigation video to the server.
- since the server needs to identify a plurality of target images from the mobile video sent by the video capture device and decompose the mobile video based on them, and the text marks in the plurality of target images are used to identify indoor locations, when the target device records the mobile video along the walking route it needs to pause at each position where reference object information or text information exists, thereby forming the plurality of target images in the mobile video.
- the dwell time of the target device at a position where reference object information or text information exists may be determined by the user, and may be 1 second, 2 seconds, and the like, which is not specifically limited in the embodiments of the present disclosure.
- the video images included in the mobile video sent by the video capture device or the target device are images of indoor locations, and thus the video images included in the stored candidate path navigation videos are also images of indoor locations.
- the method for the server to perform character recognition on the plurality of target images may refer to related technologies, which is not described in detail in the embodiments of the present disclosure.
- since the server stores the candidate path navigation videos based on the correspondence between geographical location information and candidate path navigation videos, it can accurately match candidate path navigation videos with their corresponding geographical location information, thereby improving indoor navigation efficiency and accuracy; and since the plurality of video capture devices send the captured mobile videos and their geographical location information to the server, the server can update the stored candidate path navigation videos in time, further improving navigation accuracy.
- step 303: the server sends the target path navigation video to the target device.
- since the target path navigation video may be intercepted from a single candidate path navigation video or composed of at least one candidate path navigation video, when it is composed of at least one candidate path navigation video the server may determine the path sequence of the at least one candidate path navigation video based on their start point candidate text information and end point candidate text information, and send the path sequence together with the at least one candidate path navigation video to the target device.
- of course, the server may instead send the at least one candidate path navigation video to the target device in path-sequence order, so that the target device determines the path sequence of the at least one candidate path navigation video based on their receiving times.
- the target device may also perform character recognition on the at least one candidate path navigation video itself, so as to determine the path sequence based on the start point candidate text information and the end point candidate text information of the at least one candidate path navigation video.
- alternatively, the server may extract start point candidate reference object information and end point candidate reference object information from the at least one candidate path navigation video, and determine the path sequence of the at least one candidate path navigation video based on that reference object information, which is not specifically limited in this embodiment of the present disclosure.
- when the server determines the path sequence of the at least one candidate path navigation video based on the start point candidate text information and the end point candidate text information, the server selects, from a video set, the candidate path navigation video whose start point candidate text information is the same as the end point candidate text information of a third candidate path navigation video,
- where the third candidate path navigation video is any one of the at least one candidate path navigation video,
- and the video set includes the candidate path navigation videos of the at least one candidate path navigation video other than the third candidate path navigation video. The server sets the path order of the selected candidate path navigation video after the third candidate path navigation video, and determines whether the video set contains a candidate path navigation video other than the selected one; if so, the server takes the selected candidate path navigation video as the new third candidate path navigation video, removes it from the video set to update the video set, and returns to perform the selecting step on the updated video set.
- the method for determining the path order of the at least one candidate path navigation video based on the start point candidate reference object information and the end point candidate reference object information is the same as the method for determining the path order based on the text information, and is not described in detail in the embodiments of the present disclosure.
- for example, the at least one candidate path navigation video includes the navigation video 21, the navigation video 22, and the navigation video 23.
- the start point candidate text information of the navigation video 21 is A, and its end point candidate text information is B;
- the start point candidate text information of the navigation video 22 is D, and its end point candidate text information is F;
- the start point candidate text information of the navigation video 23 is B, and its end point candidate text information is D.
- suppose the navigation video 21 is the third candidate path navigation video, and the navigation video 22 and the navigation video 23 constitute the video set. From the video set, the candidate path navigation video whose start point candidate text information is the same as the end point candidate text information B of the navigation video 21 is the navigation video 23, and the server determines that the video set contains a candidate path navigation video other than the navigation video 23.
- the navigation video 23 is therefore taken as the third candidate path navigation video and removed from the video set to obtain an updated video set. From the updated video set, the candidate path navigation video whose start point candidate text information is the same as the end point candidate text information D of the navigation video 23 is the navigation video 22,
- and the server determines that no candidate path navigation video other than the navigation video 22 remains in the updated video set. Therefore, the path sequence of the at least one candidate path navigation video is determined to be the navigation video 21, the navigation video 23, and the navigation video 22.
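The path-ordering procedure illustrated above can be sketched as follows (a minimal illustration; the function name `order_videos` and the tuple layout are assumptions, not from the patent):

```python
def order_videos(videos):
    """Greedily chain candidate path navigation videos: the next video is the
    one whose start-point candidate text matches the current video's end-point
    candidate text. `videos` is a list of (name, start_text, end_text) tuples;
    the first element is taken as the initial "third candidate" video."""
    ordered = [videos[0]]
    remaining = list(videos[1:])          # the video set
    current = videos[0]
    while remaining:
        # select the video whose start text equals the current video's end text
        nxt = next((v for v in remaining if v[1] == current[2]), None)
        if nxt is None:
            break                         # no continuation found
        ordered.append(nxt)
        remaining.remove(nxt)             # update the video set
        current = nxt
    return [v[0] for v in ordered]

# The worked example: video 21 goes A->B, video 23 goes B->D, video 22 goes D->F.
videos = [("21", "A", "B"), ("22", "D", "F"), ("23", "B", "D")]
print(order_videos(videos))  # ['21', '23', '22']
```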
- in addition, the server may determine multiple navigation path videos, select a default navigation path video from them, and determine the default navigation path video as the target path navigation video,
- which is sent to the target device, so that the target device navigates based on the target path navigation video.
- alternatively, the server may send the multiple navigation path videos to the target device separately, so that the user selects one of them for navigation, which is not specifically limited in the embodiment of the present disclosure.
- in step 304, when the target device receives the target path navigation video from the start point to the end point sent by the server, the target device plays the target path navigation video in response to the received navigation trigger operation.
- that is, when the target device receives the target path navigation video from the start point to the end point sent by the server, and the target device receives a navigation trigger operation, the target device may play the target path navigation video in response to the navigation trigger operation.
- the navigation triggering operation may be triggered by the user, which is not specifically limited in this embodiment of the present disclosure.
- when the target device plays the target path navigation video, the target device can also detect its current motion speed and play the target path navigation video based on the current motion speed, so that the playback speed of the target path navigation video matches the current motion speed, thereby improving the navigation experience.
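The speed-matched playback described above can be sketched as a playback-rate calculation (an assumed formulation for illustration; the patent does not give a formula, and the function name and units are assumptions):

```python
def playback_rate(walking_speed_mps, recorded_speed_mps):
    """Scale the video's playback rate so the on-screen motion matches the
    user's current walking speed. Both speeds are in metres per second;
    the recorded speed is the speed at which the capture device moved."""
    if recorded_speed_mps <= 0:
        return 1.0  # no usable recording speed: fall back to normal playback
    return walking_speed_mps / recorded_speed_mps

# e.g. the video was recorded at 1.0 m/s and the user now walks at 1.5 m/s:
print(playback_rate(1.5, 1.0))  # 1.5
```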
- the target device may play the at least one candidate path navigation video according to the path sequence of the at least one candidate path navigation video; for each candidate path navigation video, when playing to the position associated with the location information in that candidate path navigation video,
- the target device displays route confirmation prompt information, which is used to prompt the user to confirm whether the navigation path has deviated,
- where the location information is the location information corresponding to the target image collected when the video capture device was in a static state during the process of collecting the mobile video;
- and when the target device receives a route re-planning instruction based on the route confirmation prompt information, it sends a route re-planning request to the server.
- when the server receives the path re-planning request sent by the target device, the server acquires a new target path navigation video based on the path re-planning request, and sends the new target path navigation video to the target device, so that the target device navigates based on the new target path navigation video.
- each candidate path navigation video is associated with location information; therefore, for each of the at least one candidate path navigation video,
- when the target device plays to the position associated with the location information in the candidate path navigation video, the target device may pause the playback of the candidate path navigation video and display the route confirmation prompt information.
- when the target device receives a route re-planning instruction based on the route confirmation prompt information, the target device may determine that the navigation path currently being navigated does not match the path expected by the user, and may have the server re-plan the navigation path. Further, when the target device receives a confirmation instruction based on the route confirmation prompt information, the target device determines that the navigation path currently being navigated matches the path expected by the user and may continue to play the candidate path navigation video; if the candidate path navigation video currently being played is the last one of the at least one candidate path navigation video, playback stops when it ends.
- the route confirmation prompt information may be text information, voice information, or a combination of the two, which is not specifically limited in the embodiment of the present disclosure.
- the route re-planning instruction and the confirmation instruction may be triggered by the user through a specified operation, which is not specifically limited in the embodiment of the present disclosure.
- the server may also reselect the target path navigation video based on new start point information and end point information;
- the reselected target path navigation video is then sent to the target device, causing the target device to navigate based on it.
- the location from which the target device sends the route re-planning request may be the same as or different from the location of the original start point information. Therefore, the route re-planning request sent by the target device to the server may or may not carry new geographic location information, where the new geographic location information is the geographic location information of the target device at the time it sends the route re-planning request. If the route re-planning request does not carry new geographic location information, the server may reselect the target path navigation video from the plurality of candidate path navigation videos acquired in step 302.
- if the route re-planning request carries new geographic location information, the server may re-acquire, from the stored candidate path navigation videos, the candidate path navigation videos corresponding to the new geographic location information, and then select the target path navigation video from the re-acquired candidate path navigation videos.
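The two re-planning branches above can be sketched as follows (a minimal illustration; the function name and the request/record layout are assumptions, not from the patent):

```python
def replan(request, stored_videos, previous_candidates):
    """Handle a route re-planning request: if it carries new geographic
    location information, re-query the stored candidate videos by that
    location; otherwise reselect from the candidates already retrieved
    for the original request (step 302)."""
    location = request.get("new_location")
    if location is not None:
        candidates = [v for v in stored_videos if v["location"] == location]
    else:
        candidates = previous_candidates
    # Selection policy is simplified here to "first match".
    return candidates[0]["name"] if candidates else None

stored = [{"name": "v1", "location": "hall"}, {"name": "v2", "location": "lobby"}]
print(replan({"new_location": "lobby"}, stored, []))             # v2
print(replan({}, stored, [{"name": "v1", "location": "hall"}]))  # v1
```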
- in the embodiments of the present disclosure, when outdoor navigation is performed, navigating through the target path navigation video makes navigation more intuitive and lowers the navigation threshold; and when indoor navigation is performed, the reference object information or text information naturally present in the indoor location is used
- to determine the target path navigation video on the target path for navigation, eliminating the need to manually install unified infrared sensing devices, which is versatile and adaptable and saves a large amount of physical equipment and labor.
- in addition, the user can confirm in real time whether the currently navigated target path has deviated by comparing the target path navigation video with the actual path, and when a deviation occurs, the target path navigation video can be re-determined,
- improving the accuracy of navigation.
- FIG. 4 is a block diagram of a navigation device, according to an exemplary embodiment. As shown in FIG. 4, the device includes a first receiving module 401, a first obtaining module 402, and a first sending module 403.
- the first receiving module 401 is configured to receive start point information and end point information sent by the target device;
- the first obtaining module 402 is configured to acquire, according to the starting point information received by the first receiving module 401 and the destination information, a target path navigation video from the starting point to the ending point;
- the first sending module 403 is configured to send, to the target device, the target path navigation video acquired by the first acquiring module 402.
- the first obtaining module 402 includes a first acquiring unit 4021.
- the first obtaining unit 4021 is configured to acquire the target path navigation video based on the start position information and the end position information, where the start point information includes the start position information, and the end point information includes the end position information.
- the start point information includes a start point environment image
- the end point information includes an end point environment image
- the first obtaining unit 4021 includes a first extracting subunit 40211, a first determining subunit 40212, and a first obtaining subunit 40213;
- a first extraction subunit 40211 configured to extract starting point reference information from the starting environment image, and extract end point reference information from the end environment image;
- a first determining sub-unit 40212 configured to determine the starting point reference object information extracted by the first extracting sub-unit 40211 as starting point position information, and determine the end point reference object information extracted by the first extracting sub-unit 40211 as end point position information;
- the first obtaining sub-unit 40213 is configured to obtain the target path navigation video based on the starting reference object information determined by the first determining sub-unit 40212 and the end point reference information.
- the start point information includes a start point environment image
- the end point information includes an end point environment image
- the first obtaining unit 4021 includes a second extracting subunit 40214, a second determining subunit 40215, and a second obtaining subunit 40216;
- a second extracting sub-unit 40214 configured to extract starting point text information from the starting point environment image, and extract end point text information from the end point environment image;
- a second determining sub-unit 40215 configured to determine the starting point text information extracted by the second extracting sub-unit 40214 as starting point position information, and determine the ending point text information extracted by the second extracting sub-unit as end point position information;
- the second obtaining sub-unit 40216 is configured to obtain the target path navigation video based on the starting text information and the ending text information determined by the second determining sub-unit 40215.
- the first acquisition module 402 includes an intercept unit 4022.
- the intercepting unit 4022 is configured to intercept the target path navigation video from a stored candidate path navigation video based on the start point information and the end point information.
- the first obtaining module 402 includes a second acquiring unit 4023 .
- the second obtaining unit 4023 is configured to acquire the target path navigation video from the stored plurality of candidate path navigation videos based on the start point information and the end point information.
- the apparatus further includes a second acquisition module 404.
- the second obtaining module 404 is configured to acquire a candidate path navigation video.
- the second obtaining module 404 further includes a third obtaining unit 4041 and an associating unit 4042.
- the third obtaining unit 4041 is configured to acquire a mobile video and location information, where the location information is location information corresponding to the target image that is collected when the video capture device is in a static state during the process of acquiring the mobile video;
- the association unit 4042 is configured to associate the location information acquired by the third acquiring unit with the target image to obtain a candidate path navigation video.
- the location information includes reference information or text information.
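The association these units describe can be sketched as follows (a minimal illustration; the function name and frame representation are assumptions, not from the patent):

```python
def build_candidate_video(frames, stationary_info):
    """Associate location information (reference-object or text information)
    with the target images captured while the capture device was stationary;
    the annotated frame sequence forms a candidate path navigation video.
    `stationary_info` maps a frame index to its location information."""
    return [
        {"frame": frame, "location": stationary_info.get(index)}
        for index, frame in enumerate(frames)
    ]

# frames 0 and 2 were captured while stationary, at signs reading "A" and "B":
video = build_candidate_video(["img0", "img1", "img2"], {0: "A", 2: "B"})
print(video)
```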
- the apparatus further includes a second receiving module 405, a third obtaining module 406, and a second sending module 407.
- a second receiving module 405, configured to receive a path re-planning request sent by the target device
- a third obtaining module 406, configured to acquire a new target path navigation video based on the path re-planning request received by the second receiving module 405;
- the second sending module 407 is configured to send the new target path navigation video acquired by the third obtaining module 406 to the target device, so that the target device navigates based on the new target path navigation video.
- in the embodiments of the present disclosure, the start point information and the end point information sent by the target device are received; based on the start point information and the end point information, the target path navigation video from the start point to the end point is acquired and sent to the target device, so that the target device navigates based on the target path navigation video, eliminating the need to manually install unified infrared sensing devices, which is versatile and adaptable and saves a large amount of physical equipment and labor.
- FIG. 13 is a block diagram of a navigation device, according to an exemplary embodiment. As shown in FIG. 13, the apparatus includes a first obtaining module 1301, a first sending module 1302, a receiving module 1303, and a playing module 1304.
- the first obtaining module 1301 is configured to acquire start point information and end point information
- the first sending module 1302 is configured to send the start point information and the end point information acquired by the first obtaining module 1301 to the server;
- the receiving module 1303 is configured to receive the target path navigation video from the start point to the end point sent by the server, where the target path navigation video is obtained by the server based on the start point information and the end point information sent by the first sending module 1302;
- the playing module 1304 is configured to play the target path navigation video received by the receiving module 1303 in response to the received navigation triggering operation.
- the first obtaining module 1301 includes an obtaining unit 13011.
- the obtaining unit 13011 is configured to acquire a start environment image and an end environment image when receiving the navigation instruction.
- the play module 1304 includes a detecting unit 13041 and a playing unit 13042.
- a detecting unit 13041 configured to detect a current motion speed
- the playing unit 13042 is configured to play the target path navigation video based on the motion speed detected by the detecting unit, so that the playing speed of the target path navigation video is equal to the moving speed.
- the play module 1304 includes a display unit 13043 and a transmitting unit 13044.
- a display unit 13043 configured to display route confirmation prompt information when playing to the target image position in the target path navigation video, where the route confirmation prompt information is used to prompt the user to confirm whether the target path has deviated;
- the sending unit 13044 is configured to, when receiving the route re-planning instruction based on the route confirmation prompt information displayed by the display unit, send a route re-planning request to the server, so that the server acquires a new target path navigation video based on the path re-planning request.
- the apparatus further includes a second obtaining module 1305 and a second sending module 1306.
- a second obtaining module 1305, configured to acquire mobile video and location information
- the second sending module 1306 is configured to send the mobile video and the location information acquired by the second acquiring module to the server, so that the server associates the mobile video with the target image, where the location information is the location information corresponding to the target image collected when the device was in a static state during the process of collecting the mobile video.
- the apparatus further includes a third obtaining module 1307, an associating module 1308, and a third sending module 1309.
- a third obtaining module 1307 configured to acquire mobile video and location information
- the association module 1308 is configured to associate the mobile video acquired by the third obtaining module with the target image to obtain a candidate path navigation video, where the location information is the location information corresponding to the target image collected when the device was in a static state during collection of the mobile video;
- the third sending module 1309 is configured to send the candidate path navigation video associated with the associated module to the server.
- in the embodiments of the present disclosure, the start point information and the end point information sent by the target device are received; based on the start point information and the end point information, the target path navigation video from the start point to the end point is acquired and sent to the target device, so that the target device navigates based on the target path navigation video, eliminating the need to manually install unified infrared sensing devices, which is versatile and adaptable and saves a large amount of physical equipment and labor.
- FIG. 19 is a block diagram of an apparatus 1900 for navigation, according to an exemplary embodiment.
- device 1900 can be provided as a server.
- apparatus 1900 includes a processing component 1922 that further includes one or more processors, and memory resources represented by memory 1932 for storing instructions executable by processing component 1922, such as an application.
- An application stored in memory 1932 can include one or more modules each corresponding to a set of instructions.
- Apparatus 1900 can also include a power supply component 1926 configured to perform power management of apparatus 1900, a wired or wireless network interface 1950 configured to connect apparatus 1900 to the network, and an input/output (I/O) interface 1958.
- Apparatus 1900 may operate based on an operating system stored in the memory 1932, for example, Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
- processing component 1922 is configured to execute instructions to perform the method of navigation described below, the method comprising:
- the target path navigation video from the start point to the end point is obtained based on the start point information and the end point information, including:
- the target path navigation video is acquired based on the start position information and the end position information, and the start point information includes the start position information, and the end point information includes the end position information.
- the start point information includes a start point environment image
- the end point information includes an end point environment image
- Obtaining the target path navigation video based on the start position information and the end position information includes:
- the target path navigation video is acquired based on the starting point reference information and the end point reference information.
- the start point information includes a start point environment image
- the end point information includes an end point environment image
- Obtaining the target path navigation video based on the start position information and the end position information includes:
- the target path navigation video is acquired based on the start point text information and the end point text information.
- the target path navigation video from the start point to the end point is obtained based on the start point information and the end point information, including:
- the target path navigation video from the start point to the end point is obtained based on the start point information and the end point information, including:
- the method before acquiring the target path navigation video from the start point to the end point based on the start point information and the end point information, the method further includes:
- acquiring a candidate path navigation video includes:
- the location information is location information corresponding to the target image collected by the video capture device during the process of acquiring the mobile video
- the location information is associated with the target image to obtain a candidate path navigation video.
- the location information includes reference information or text information.
- the method further includes:
- in the embodiments of the present disclosure, the start point information and the end point information sent by the target device are received; based on the start point information and the end point information, the target path navigation video from the start point to the end point is acquired and sent to the target device, so that the target device
- navigates based on the target path navigation video, making navigation more intuitive, reducing the navigation threshold, eliminating the need to manually install unified infrared sensing devices, which is versatile and adaptable and saves a large amount of physical equipment and labor.
- FIG. 20 is a block diagram of an apparatus 2000 for navigation, according to an exemplary embodiment.
- device 2000 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, smart glasses, a smart watch, and the like.
- apparatus 2000 may include one or more of the following components: processing component 2002, memory 2004, power component 2006, multimedia component 2008, audio component 2010, input/output (I/O) interface 2012, sensor component 2014, And communication component 2016.
- Processing component 2002 typically controls the overall operation of device 2000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- Processing component 2002 may include one or more processors 2020 to execute instructions to perform all or part of the steps of the above described methods.
- processing component 2002 can include one or more modules to facilitate interaction between component 2002 and other components.
- processing component 2002 can include a multimedia module to facilitate interaction between multimedia component 2008 and processing component 2002.
- the memory 2004 is configured to store various types of data to support operation at the device 2000. Examples of such data include instructions for any application or method operating on device 2000, contact data, phone book data, messages, pictures, videos, and the like.
- Memory 2004 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read only memory (EEPROM), erasable programmable read only memory (EPROM), programmable read only memory (PROM), read only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
- Power component 2006 provides power to various components of device 2000.
- Power component 2006 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 2000.
- the multimedia component 2008 includes a screen that provides an output interface between the device 2000 and the user.
- the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may sense not only the boundary of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
- the multimedia component 2008 includes a front camera and/or a rear camera. When the device 2000 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 2010 is configured to output and/or input audio signals.
- audio component 2010 includes a microphone (MIC) that is configured to receive an external audio signal when device 2000 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
- the received audio signal may be further stored in memory 2004 or transmitted via communication component 2016.
- the audio component 2010 also includes a speaker for outputting an audio signal.
- the I/O interface 2012 provides an interface between the processing component 2002 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
- the sensor assembly 2014 includes one or more sensors for providing a status assessment of various aspects to the device 2000.
- sensor assembly 2014 can detect the open/closed state of device 2000 and the relative positioning of components, such as the display and keypad of device 2000; sensor component 2014 can also detect a change in position of device 2000 or a component of device 2000, the presence or absence of user contact with the device 2000, the orientation or acceleration/deceleration of the device 2000, and temperature changes of the device 2000.
- the sensor assembly 2014 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
- Sensor assembly 2014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor assembly 2014 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- Communication component 2016 is configured to facilitate wired or wireless communication between device 2000 and other devices.
- the device 2000 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
- the communication component 2016 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel.
- the communication component 2016 also includes a near field communication (NFC) module to facilitate short range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
- device 2000 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
- Also provided is a non-transitory computer readable storage medium comprising instructions, such as the memory 2004 comprising instructions executable by the processor 2020 of the device 2000, to perform the above method.
- For example, the non-transitory computer readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, or an optical data storage device.
- A non-transitory computer readable storage medium is provided, wherein instructions in the storage medium, when executed by a processor of a target device, enable the target device to perform a navigation method, the method comprising:
- The target path navigation video is obtained by the server based on the start point information and the end point information.
- The start point information includes a start point environment image, and the end point information includes an end point environment image.
- Acquiring the start point information and the end point information includes:
- acquiring the start point environment image and the end point environment image.
- Playing the target path navigation video includes:
- playing the target path navigation video such that the playback speed of the target path navigation video is equal to the current motion speed.
- Playing the target path navigation video may also include:
- displaying route confirmation prompt information when playback reaches a target image position in the target path navigation video, the prompt information being used to ask the user to confirm whether the target path has been deviated from; the location information is the location information corresponding to a target image collected while the video collection device was in a stationary state during capture of the mobile video.
- When a route re-planning instruction is received, a route re-planning request is sent to the server, so that the server acquires a new target path navigation video based on the path re-planning request.
- The method further includes:
- sending the mobile video and the location information to the server, causing the server to associate the mobile video with the location information.
- Alternatively, the method further includes:
- sending the candidate path navigation video to the server.
- The start point information and the end point information sent by the target device are received; based on the start point information and the end point information, the target path navigation video from the start point to the end point is acquired and sent to the target device, so that the target device navigates based on the target path navigation video.
- This makes navigation more intuitive and lowers the navigation threshold, eliminating the need to manually deploy dedicated infrared sensing devices; the approach is versatile and adaptable, and saves substantial physical equipment and labor.
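The speed-matched playback described above (the navigation video plays at a rate equal to the user's current motion speed) can be sketched as follows. This is a minimal illustration under stated assumptions, not the disclosed implementation: the `capture_speed_mps` metadata field and the clamping range are hypothetical.

```python
def playback_rate(user_speed_mps: float,
                  capture_speed_mps: float,
                  min_rate: float = 0.25,
                  max_rate: float = 4.0) -> float:
    """Rate at which to play a navigation video so that on-screen motion
    matches the user's current walking speed.

    Assumes the video's capture speed (meters/second) is known metadata;
    min_rate/max_rate are an assumed player-supported range.
    """
    if capture_speed_mps <= 0:
        raise ValueError("capture speed must be positive")
    if user_speed_mps <= 0:
        return 0.0  # a stopped user pauses playback
    # Playing at user_speed / capture_speed makes apparent motion match reality.
    rate = user_speed_mps / capture_speed_mps
    return min(max(rate, min_rate), max_rate)
```

In practice the user speed would come from a step counter or GPS, and the resulting rate would be handed to the video player's speed control.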
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Automation & Control Theory (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Library & Information Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Networks & Wireless Communication (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Computation (AREA)
- Navigation (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Biology (AREA)
- Databases & Information Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Traffic Control Systems (AREA)
- Atmospheric Sciences (AREA)
- Biodiversity & Conservation Biology (AREA)
- Ecology (AREA)
- Environmental & Geological Engineering (AREA)
- Environmental Sciences (AREA)
- Instructional Devices (AREA)
Abstract
Description
Claims (34)
- A navigation method, characterized in that the method comprises: receiving start point information and end point information sent by a target device; acquiring, based on the start point information and the end point information, a target path navigation video from the start point to the end point; and sending the target path navigation video to the target device.
- The method according to claim 1, characterized in that acquiring, based on the start point information and the end point information, the target path navigation video from the start point to the end point comprises: acquiring the target path navigation video based on start point location information and end point location information, the start point information comprising the start point location information, and the end point information comprising the end point location information.
- The method according to claim 2, characterized in that the start point information comprises a start point environment image, and the end point information comprises an end point environment image; acquiring the target path navigation video based on start point location information and end point location information comprises: extracting start point reference object information from the start point environment image, and extracting end point reference object information from the end point environment image; determining the start point reference object information as the start point location information, and determining the end point reference object information as the end point location information; and acquiring the target path navigation video based on the start point reference object information and the end point reference object information.
- The method according to claim 2, characterized in that the start point information comprises a start point environment image, and the end point information comprises an end point environment image; acquiring the target path navigation video based on start point location information and end point location information comprises: extracting start point text information from the start point environment image, and extracting end point text information from the end point environment image; determining the start point text information as the start point location information, and determining the end point text information as the end point location information; and acquiring the target path navigation video based on the start point text information and the end point text information.
- The method according to claim 1, characterized in that acquiring, based on the start point information and the end point information, the target path navigation video from the start point to the end point comprises: intercepting the target path navigation video from one stored candidate path navigation video based on the start point information and the end point information.
- The method according to claim 1, characterized in that acquiring, based on the start point information and the end point information, the target path navigation video from the start point to the end point comprises: acquiring the target path navigation video from a plurality of stored candidate path navigation videos based on the start point information and the end point information.
- The method according to any one of claims 1 to 6, characterized in that before acquiring, based on the start point information and the end point information, the target path navigation video from the start point to the end point, the method further comprises: acquiring a candidate path navigation video.
- The method according to claim 7, characterized in that acquiring the candidate path navigation video comprises: acquiring a mobile video and location information, the location information being the location information corresponding to a target image collected while the video collection device was in a stationary state during capture of the mobile video; and associating the location information with the target image to obtain the candidate path navigation video.
- The method according to claim 8, characterized in that the location information comprises reference object information or text information.
- The method according to claim 1, characterized in that after sending the target path navigation video to the target device, the method further comprises: receiving a path re-planning request sent by the target device; acquiring a new target path navigation video based on the path re-planning request; and sending the new target path navigation video to the target device, so that the target device navigates based on the new target path navigation video.
- A navigation method, characterized in that the method comprises: acquiring start point information and end point information; sending the start point information and the end point information to a server; receiving, from the server, a target path navigation video from the start point to the end point, the target path navigation video being obtained by the server based on the start point information and the end point information; and playing the target path navigation video in response to a received navigation trigger operation.
- The method according to claim 11, characterized in that the start point information comprises a start point environment image, and the end point information comprises an end point environment image; acquiring the start point information and the end point information comprises: acquiring the start point environment image and the end point environment image when a navigation instruction is received.
- The method according to claim 11, characterized in that playing the target path navigation video comprises: detecting a current motion speed; and playing the target path navigation video based on the motion speed, such that the playback speed of the target path navigation video is equal to the motion speed.
- The method according to claim 11, characterized in that playing the target path navigation video comprises: displaying route confirmation prompt information when playback reaches a target image position in the target path navigation video, the route confirmation prompt information being used to prompt the user to confirm whether the target path has been deviated from; and sending a route re-planning request to the server when a route re-planning instruction is received based on the route confirmation prompt information, so that the server acquires a new target path navigation video based on the path re-planning request.
- The method according to any one of claims 11 to 14, characterized in that the method further comprises: acquiring a mobile video and location information; and sending the mobile video and the location information to the server, so that the server associates the mobile video with a target image, the location information being the location information corresponding to the target image collected while in a stationary state during capture of the mobile video.
- The method according to any one of claims 11 to 14, characterized in that the method further comprises: acquiring a mobile video and location information; associating the mobile video with a target image to obtain a candidate path navigation video, the location information being the location information corresponding to the target image collected while in a stationary state during capture of the mobile video; and sending the candidate path navigation video to the server.
- A navigation device, characterized in that the device comprises: a first receiving module configured to receive start point information and end point information sent by a target device; a first acquisition module configured to acquire, based on the start point information and the end point information received by the first receiving module, a target path navigation video from the start point to the end point; and a first sending module configured to send the target path navigation video acquired by the acquisition module to the target device.
- The device according to claim 17, characterized in that the first acquisition module comprises: a first acquisition unit configured to acquire the target path navigation video based on start point location information and end point location information, the start point information comprising the start point location information, and the end point information comprising the end point location information.
- The device according to claim 18, characterized in that the start point information comprises a start point environment image, and the end point information comprises an end point environment image; the first acquisition unit comprises: a first extraction subunit configured to extract start point reference object information from the start point environment image, and to extract end point reference object information from the end point environment image; a first determination subunit configured to determine the start point reference object information extracted by the first extraction subunit as the start point location information, and to determine the end point reference object information extracted by the extraction subunit as the end point location information; and a first acquisition subunit configured to acquire the target path navigation video based on the start point reference object information and the end point reference object information determined by the first determination subunit.
- The device according to claim 18, characterized in that the start point information comprises a start point environment image, and the end point information comprises an end point environment image; the first acquisition unit comprises: a second extraction subunit configured to extract start point text information from the start point environment image, and to extract end point text information from the end point environment image; a second determination subunit configured to determine the start point text information extracted by the second extraction subunit as the start point location information, and to determine the end point text information extracted by the second extraction subunit as the end point location information; and a second acquisition subunit configured to acquire the target path navigation video based on the start point text information and the end point text information determined by the second determination subunit.
- The device according to claim 17, characterized in that the first acquisition module comprises: an interception unit configured to intercept the target path navigation video from one stored candidate path navigation video based on the start point information and the end point information.
- The device according to claim 17, characterized in that the first acquisition module comprises: a second acquisition unit configured to acquire the target path navigation video from a plurality of stored candidate path navigation videos based on the start point information and the end point information.
- The device according to any one of claims 17 to 22, characterized in that the device further comprises: a second acquisition module configured to acquire a candidate path navigation video.
- The device according to claim 23, characterized in that the second acquisition module comprises: a third acquisition unit configured to acquire a mobile video and location information, the location information being the location information corresponding to a target image collected while the video collection device was in a stationary state during capture of the mobile video; and an association unit configured to associate the location information acquired by the third acquisition unit with the target image to obtain a candidate path navigation video.
- The device according to claim 24, characterized in that the location information comprises reference object information or text information.
- The device according to claim 17, characterized in that the device further comprises: a second receiving module configured to receive a path re-planning request sent by the target device; a third acquisition module configured to acquire a new target path navigation video based on the path re-planning request received by the second receiving module; and a second sending module configured to send the new target path navigation video acquired by the third acquisition module to the target device, so that the target device navigates based on the new target path navigation video.
- A navigation device, characterized in that the device comprises: a first acquisition module configured to acquire start point information and end point information; a first sending module configured to send the start point information and the end point information acquired by the first acquisition module to a server; a receiving module configured to receive, from the server, a target path navigation video from the start point to the end point, the target path navigation video being obtained by the server based on the start point information and the end point information sent by the first sending module; and a playing module configured to play, in response to a received navigation trigger operation, the target path navigation video received by the receiving module.
- The device according to claim 27, characterized in that the start point information comprises a start point environment image, and the end point information comprises an end point environment image; the first acquisition module comprises: an acquisition unit configured to acquire the start point environment image and the end point environment image when a navigation instruction is received.
- The device according to claim 27, characterized in that the playing module comprises: a detection unit configured to detect a current motion speed; and a playing unit configured to play the target path navigation video based on the motion speed detected by the detection unit, such that the playback speed of the target path navigation video is equal to the motion speed.
- The device according to claim 27, characterized in that the playing module comprises: a display unit configured to display route confirmation prompt information when playback reaches a target image position in the target path navigation video, the route confirmation prompt information being used to prompt the user to confirm whether the target path has been deviated from; and a sending unit configured to send a route re-planning request to the server when a route re-planning instruction is received based on the route confirmation prompt information displayed by the display unit, so that the server acquires a new target path navigation video based on the path re-planning request.
- The device according to any one of claims 27 to 30, characterized in that the device further comprises: a second acquisition module configured to acquire a mobile video and location information; and a second sending module configured to send the mobile video and the location information acquired by the second acquisition module to the server, so that the server associates the mobile video with a target image, the location information being the location information corresponding to the target image collected while in a stationary state during capture of the mobile video.
- The device according to any one of claims 27 to 30, characterized in that the device further comprises: a third acquisition module configured to acquire a mobile video and location information; an association module configured to associate the mobile video acquired by the third acquisition module with a target image to obtain a candidate path navigation video, the location information being the location information corresponding to the target image collected while in a stationary state during capture of the mobile video; and a third sending module configured to send the candidate path navigation video associated by the association module to the server.
- A navigation device, characterized in that the device comprises: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to: receive start point information and end point information sent by a target device; acquire, based on the start point information and the end point information, a target path navigation video from the start point to the end point; and send the target path navigation video to the target device.
- A navigation device, characterized in that the device comprises: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to: acquire start point information and end point information; send the start point information and the end point information to a server; receive, from the server, a target path navigation video from the start point to the end point, the target path navigation video being obtained by the server based on the start point information and the end point information; and play the target path navigation video in response to a received navigation trigger operation.
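The candidate-video construction in the claims (tagging target images captured while the recording device was stationary with location information, then looking those tags up to cut a start-to-end segment) could be modeled minimally as below. The class, field, and method names are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class CandidatePathVideo:
    """A mobile video whose stationary 'target images' are tagged with
    location information (e.g. reference-object or text labels)."""
    video_uri: str
    # frame timestamp (seconds) -> location information for that target image
    annotations: Dict[float, str] = field(default_factory=dict)

    def associate(self, timestamp_s: float, location_info: str) -> None:
        # Tag the target image captured while the device was stationary.
        self.annotations[timestamp_s] = location_info

    def locate(self, location_info: str) -> Optional[float]:
        """Timestamp of the target image matching the given location
        information; used to cut the start-to-end segment of the video."""
        for ts, info in self.annotations.items():
            if info == location_info:
                return ts
        return None
```

A server holding such objects could match the start/end location information extracted from a user's environment images against the annotations, then return the video segment between the two matched timestamps.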
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020167007053A KR101870052B1 (ko) | 2015-09-29 | 2015-12-30 | 네비게이션 방법, 장치, 프로그램 및 기록 매체 |
MX2016004100A MX368765B (es) | 2015-09-29 | 2015-12-30 | Método y dispositivo de navegación. |
JP2017542265A JP6387468B2 (ja) | 2015-09-29 | 2015-12-30 | ナビゲーション方法、装置、プログラム及び記録媒体 |
RU2016112941A RU2636270C2 (ru) | 2015-09-29 | 2015-12-30 | Способ и устройство навигации |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510634512.8A CN105222773B (zh) | 2015-09-29 | 2015-09-29 | 导航方法及装置 |
CN201510634512.8 | 2015-09-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017054358A1 true WO2017054358A1 (zh) | 2017-04-06 |
Family
ID=54991863
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/099732 WO2017054358A1 (zh) | 2015-09-29 | 2015-12-30 | 导航方法及装置 |
Country Status (8)
Country | Link |
---|---|
US (1) | US10267641B2 (zh) |
EP (1) | EP3150964B1 (zh) |
JP (1) | JP6387468B2 (zh) |
KR (1) | KR101870052B1 (zh) |
CN (1) | CN105222773B (zh) |
MX (1) | MX368765B (zh) |
RU (1) | RU2636270C2 (zh) |
WO (1) | WO2017054358A1 (zh) |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11709070B2 (en) * | 2015-08-21 | 2023-07-25 | Nokia Technologies Oy | Location based service tools for video illustration, selection, and synchronization |
CN107449439A (zh) * | 2016-05-31 | 2017-12-08 | 沈阳美行科技有限公司 | 关联存储、同步展示行车路径和行车照片的方法及系统 |
CN105973227A (zh) * | 2016-06-21 | 2016-09-28 | 上海磐导智能科技有限公司 | 可视化实景导航方法 |
CN107576332B (zh) * | 2016-07-04 | 2020-08-04 | 百度在线网络技术(北京)有限公司 | 一种换乘导航的方法和装置 |
CN106323289A (zh) | 2016-08-23 | 2017-01-11 | 北京小米移动软件有限公司 | 平衡车的控制方法及装置 |
CN106403978A (zh) | 2016-09-30 | 2017-02-15 | 北京百度网讯科技有限公司 | 导航路线生成方法和装置 |
CN108020231A (zh) * | 2016-10-28 | 2018-05-11 | 大辅科技(北京)有限公司 | 一种基于视频的地图系统及导航方法 |
US10172760B2 (en) * | 2017-01-19 | 2019-01-08 | Jennifer Hendrix | Responsive route guidance and identification system |
US10824870B2 (en) * | 2017-06-29 | 2020-11-03 | Accenture Global Solutions Limited | Natural language eminence based robotic agent control |
DE102019206250A1 (de) * | 2019-05-01 | 2020-11-05 | Siemens Schweiz Ag | Regelung und Steuerung der Ablaufgeschwindigkeit eines Videos |
CN110470293B (zh) * | 2019-07-31 | 2021-04-02 | 维沃移动通信有限公司 | 一种导航方法及移动终端 |
CN110601925B (zh) * | 2019-10-21 | 2021-07-27 | 秒针信息技术有限公司 | 一种信息筛选方法、装置、电子设备及存储介质 |
CN114096803A (zh) * | 2019-11-06 | 2022-02-25 | 倬咏技术拓展有限公司 | 用于显示前往目的地的最短路径的3d视频生成 |
CN111009148A (zh) * | 2019-12-18 | 2020-04-14 | 斑马网络技术有限公司 | 车辆导航方法、终端设备及服务器 |
WO2021226779A1 (zh) * | 2020-05-11 | 2021-11-18 | 蜂图志科技控股有限公司 | 一种图像导航方法、装置、设备及可读存储介质 |
CN111896003A (zh) * | 2020-07-28 | 2020-11-06 | 广州中科智巡科技有限公司 | 一种用于实景路径导航的方法及系统 |
CN112201072A (zh) * | 2020-09-30 | 2021-01-08 | 姜锡忠 | 城市交通路径规划方法及系统 |
CN113012355A (zh) * | 2021-03-10 | 2021-06-22 | 北京三快在线科技有限公司 | 一种充电宝租借方法及装置 |
CN113395462B (zh) * | 2021-08-17 | 2021-12-14 | 腾讯科技(深圳)有限公司 | 导航视频生成、采集方法、装置、服务器、设备及介质 |
CN114370884A (zh) * | 2021-12-16 | 2022-04-19 | 北京三快在线科技有限公司 | 导航方法及装置、电子设备及可读存储介质 |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2337653A (en) * | 1998-05-19 | 1999-11-24 | Pleydell Bouverie David Archie | Route calculation and display apparatus |
CN101131327A (zh) * | 2006-08-25 | 2008-02-27 | 联发科技股份有限公司 | 路线规划触发方法及路线规划装置 |
US20090254265A1 (en) * | 2008-04-08 | 2009-10-08 | Thimmannagari Chandra Reddy | Video map technology for navigation |
CN101701827A (zh) * | 2009-10-20 | 2010-05-05 | 深圳市凯立德计算机系统技术有限公司 | 一种路径指引方法和路径指引设备 |
CN101719130A (zh) * | 2009-11-25 | 2010-06-02 | 中兴通讯股份有限公司 | 街景地图的实现方法和实现系统 |
CN102012233A (zh) * | 2009-09-08 | 2011-04-13 | 中华电信股份有限公司 | 街景视图动态导航系统及其方法 |
TW201317547A (zh) * | 2011-10-18 | 2013-05-01 | Nat Univ Chung Hsing | 產生全景實境路徑預覽影片檔之方法及預覽系統 |
US20140181259A1 (en) * | 2012-12-21 | 2014-06-26 | Nokia Corporation | Method, Apparatus, and Computer Program Product for Generating a Video Stream of A Mapped Route |
CN104819723A (zh) * | 2015-04-29 | 2015-08-05 | 京东方科技集团股份有限公司 | 一种定位方法和定位服务器 |
CN105222802A (zh) * | 2015-09-22 | 2016-01-06 | 小米科技有限责任公司 | 导航、导航视频生成方法及装置 |
Family Cites Families (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6133853A (en) * | 1998-07-30 | 2000-10-17 | American Calcar, Inc. | Personal communication and positioning system |
JP3735301B2 (ja) * | 2002-02-06 | 2006-01-18 | 財団法人鉄道総合技術研究所 | 案内システム |
JP2004005493A (ja) * | 2002-04-24 | 2004-01-08 | Vehicle Information & Communication System Center | 運転者支援情報送信装置及び運転者支援情報受信装置ならびに運転者支援情報提供システム |
US20030210806A1 (en) * | 2002-05-07 | 2003-11-13 | Hitachi, Ltd. | Navigational information service with image capturing and sharing |
JP2003329462A (ja) * | 2002-05-08 | 2003-11-19 | Hitachi Ltd | 映像配信装置および映像情報配信システム |
GB0215217D0 (en) * | 2002-06-29 | 2002-08-14 | Spenwill Ltd | Position referenced multimedia authoring and playback |
JP2006521033A (ja) * | 2003-03-19 | 2006-09-14 | シンクウェア システムズ コーポレーション | 移動通信端末機を用いたナビゲーションシステムおよび方法 |
JP3725134B2 (ja) | 2003-04-14 | 2005-12-07 | 株式会社エヌ・ティ・ティ・ドコモ | 移動通信システム、移動通信端末、及びプログラム。 |
US8220020B2 (en) * | 2003-09-30 | 2012-07-10 | Sharp Laboratories Of America, Inc. | Systems and methods for enhanced display and navigation of streaming video |
JP4725375B2 (ja) * | 2006-03-14 | 2011-07-13 | 株式会社ケンウッド | ナビゲーション装置、プログラム及び方法 |
KR101407210B1 (ko) * | 2007-06-28 | 2014-06-12 | 엘지전자 주식회사 | 네비게이션을 이용한 특정 지점의 설정 방법 및 시스템 |
KR100952248B1 (ko) * | 2007-12-26 | 2010-04-09 | 엘지전자 주식회사 | 이동 단말기 및 이를 이용한 내비게이션 방법 |
KR20090074378A (ko) * | 2008-01-02 | 2009-07-07 | 삼성전자주식회사 | 휴대 단말기 및 그 네비게이션 기능 수행 방법 |
KR20090080589A (ko) * | 2008-01-22 | 2009-07-27 | (주)엠앤소프트 | 무선 통신을 이용한 네비게이션 장치의 위치 안내 방법 및장치 |
US8032296B2 (en) * | 2008-04-30 | 2011-10-04 | Verizon Patent And Licensing Inc. | Method and system for providing video mapping and travel planning services |
CN101655369A (zh) * | 2008-08-22 | 2010-02-24 | 环达电脑(上海)有限公司 | 利用图像识别技术实现定位导航的系统及方法 |
KR101555552B1 (ko) * | 2008-12-29 | 2015-09-24 | 엘지전자 주식회사 | 네비게이션 장치 및 그의 네비게이팅 방법 |
KR20110002517A (ko) * | 2009-07-02 | 2011-01-10 | 주식회사 디지헤드 | 이동통신단말기를 이용한 내비게이션 방법, 이 기능을 수행하는 프로그램을 기록한 컴퓨터로 읽을 수 있는 기록매체, 및 이 기록매체를 탑재한 이동통신단말기 |
US20110102637A1 (en) * | 2009-11-03 | 2011-05-05 | Sony Ericsson Mobile Communications Ab | Travel videos |
KR101662595B1 (ko) * | 2009-11-03 | 2016-10-06 | 삼성전자주식회사 | 사용자 단말 장치, 경로 안내 시스템 및 그 경로 안내 방법 |
US8838381B1 (en) | 2009-11-10 | 2014-09-16 | Hrl Laboratories, Llc | Automatic video generation for navigation and object finding |
US8762041B2 (en) * | 2010-06-21 | 2014-06-24 | Blackberry Limited | Method, device and system for presenting navigational information |
KR101337446B1 (ko) * | 2012-05-08 | 2013-12-10 | 박세진 | 약도를 이용한 내비게이션 시스템의 경로구축 방법 |
JP2014006190A (ja) * | 2012-06-26 | 2014-01-16 | Navitime Japan Co Ltd | 情報処理システム、情報処理装置、サーバ、端末装置、情報処理方法および情報処理プログラム |
JP6083019B2 (ja) * | 2012-09-28 | 2017-02-22 | 株式会社ユピテル | システム、プログラム、撮像装置、及び、ソフトウェア |
JP5949435B2 (ja) * | 2012-10-23 | 2016-07-06 | 株式会社Jvcケンウッド | ナビゲーションシステム、映像サーバ、映像管理方法、映像管理プログラム、及び映像提示端末 |
US20140372841A1 (en) | 2013-06-14 | 2014-12-18 | Henner Mohr | System and method for presenting a series of videos in response to a selection of a picture |
-
2015
- 2015-09-29 CN CN201510634512.8A patent/CN105222773B/zh active Active
- 2015-12-30 KR KR1020167007053A patent/KR101870052B1/ko active IP Right Grant
- 2015-12-30 MX MX2016004100A patent/MX368765B/es active IP Right Grant
- 2015-12-30 JP JP2017542265A patent/JP6387468B2/ja active Active
- 2015-12-30 RU RU2016112941A patent/RU2636270C2/ru active
- 2015-12-30 WO PCT/CN2015/099732 patent/WO2017054358A1/zh active Application Filing
-
2016
- 2016-04-28 US US15/141,815 patent/US10267641B2/en active Active
- 2016-06-10 EP EP16174024.6A patent/EP3150964B1/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2337653A (en) * | 1998-05-19 | 1999-11-24 | Pleydell Bouverie David Archie | Route calculation and display apparatus |
CN101131327A (zh) * | 2006-08-25 | 2008-02-27 | 联发科技股份有限公司 | 路线规划触发方法及路线规划装置 |
US20090254265A1 (en) * | 2008-04-08 | 2009-10-08 | Thimmannagari Chandra Reddy | Video map technology for navigation |
CN102012233A (zh) * | 2009-09-08 | 2011-04-13 | 中华电信股份有限公司 | 街景视图动态导航系统及其方法 |
CN101701827A (zh) * | 2009-10-20 | 2010-05-05 | 深圳市凯立德计算机系统技术有限公司 | 一种路径指引方法和路径指引设备 |
CN101719130A (zh) * | 2009-11-25 | 2010-06-02 | 中兴通讯股份有限公司 | 街景地图的实现方法和实现系统 |
TW201317547A (zh) * | 2011-10-18 | 2013-05-01 | Nat Univ Chung Hsing | 產生全景實境路徑預覽影片檔之方法及預覽系統 |
US20140181259A1 (en) * | 2012-12-21 | 2014-06-26 | Nokia Corporation | Method, Apparatus, and Computer Program Product for Generating a Video Stream of A Mapped Route |
CN104819723A (zh) * | 2015-04-29 | 2015-08-05 | 京东方科技集团股份有限公司 | 一种定位方法和定位服务器 |
CN105222802A (zh) * | 2015-09-22 | 2016-01-06 | 小米科技有限责任公司 | 导航、导航视频生成方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
EP3150964B1 (en) | 2019-09-11 |
CN105222773A (zh) | 2016-01-06 |
JP6387468B2 (ja) | 2018-09-05 |
MX2016004100A (es) | 2018-06-22 |
CN105222773B (zh) | 2018-09-21 |
US10267641B2 (en) | 2019-04-23 |
KR101870052B1 (ko) | 2018-07-20 |
JP2017534888A (ja) | 2017-11-24 |
US20170089714A1 (en) | 2017-03-30 |
KR20170048240A (ko) | 2017-05-08 |
EP3150964A1 (en) | 2017-04-05 |
MX368765B (es) | 2019-10-15 |
RU2016112941A (ru) | 2017-10-11 |
RU2636270C2 (ru) | 2017-11-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017054358A1 (zh) | 导航方法及装置 | |
KR101680714B1 (ko) | 실시간 동영상 제공 방법, 장치, 서버, 단말기기, 프로그램 및 기록매체 | |
WO2017032126A1 (zh) | 无人机的拍摄控制方法及装置、电子设备 | |
WO2017049796A1 (zh) | 导航、导航视频生成方法及装置 | |
EP3163569A1 (en) | Method and device for controlling a smart device by voice, control device and smart device | |
CN106165430A (zh) | 视频直播方法及装置 | |
US10451434B2 (en) | Information interaction method and device | |
WO2017177607A1 (zh) | 障碍物定位方法、装置及系统 | |
CN103906235A (zh) | 终端定位的方法及终端 | |
CN108108461B (zh) | 确定封面图像的方法及装置 | |
EP3026876A1 (en) | Method for acquiring recommending information, terminal and server | |
US10111026B2 (en) | Detecting method and apparatus, and storage medium | |
CN114009003A (zh) | 图像采集方法、装置、设备及存储介质 | |
CN112146676B (zh) | 信息导航方法、装置、设备及存储介质 | |
US20170034347A1 (en) | Method and device for state notification and computer-readable storage medium | |
CN115552879A (zh) | 锚点信息处理方法、装置、设备及存储介质 | |
CN110673732A (zh) | 场景共享方法及装置、系统、电子设备和存储介质 | |
WO2022110801A1 (zh) | 数据处理方法及装置、电子设备和存储介质 | |
CN106354808A (zh) | 图像存储方法及装置 | |
KR20170037862A (ko) | 문자열 저장방법 및 장치 | |
CN107315590B (zh) | 通知消息处理方法及装置 | |
CN114726999B (zh) | 图像采集方法、图像采集装置及存储介质 | |
KR20090074378A (ko) | 휴대 단말기 및 그 네비게이션 기능 수행 방법 | |
CN113132531B (zh) | 照片展示方法、装置及存储介质 | |
EP4030294A1 (en) | Function control method, function control apparatus, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 20167007053 Country of ref document: KR Kind code of ref document: A Ref document number: 2017542265 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2016/004100 Country of ref document: MX |
|
ENP | Entry into the national phase |
Ref document number: 2016112941 Country of ref document: RU Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15905268 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15905268 Country of ref document: EP Kind code of ref document: A1 |