CN112781601A - Navigation method and device


Info

Publication number
CN112781601A
CN112781601A
Authority
CN
China
Prior art keywords
navigation
location
destination location
information
terminal device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110062452.2A
Other languages
Chinese (zh)
Inventor
李荣浩
许鹏飞
王亮
徐斌
马朝伟
蔡超
张松
章磊
刘涛
杨涛
胡萌
周康
马利
胡润波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN202110062452.2A priority Critical patent/CN112781601A/en
Publication of CN112781601A publication Critical patent/CN112781601A/en
Priority to PCT/CN2022/071005 priority patent/WO2022152081A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C21/3423 Multimodal routing, i.e. combining two or more modes of transportation, where the modes can be any of, e.g. driving, walking, cycling, public transport
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C21/343 Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C21/3438 Rendez-vous, i.e. searching a destination where several users can meet, and the routes to this destination for these users; Ride sharing, i.e. searching a route such that at least two users can share a vehicle for at least part of the route

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

According to embodiments of the present disclosure, a navigation method, apparatus, device, storage medium, and program product are provided. The method comprises: obtaining a destination location associated with a first object, wherein the destination location is further associated with a second object and indicates a predetermined location where the first object can meet the second object; determining a current location of the first object; and providing a navigation interface including a navigation element superimposed on a live-action image associated with the current location, the navigation element being determined based on the destination location and the current location. According to embodiments of the disclosure, more intuitive navigation can be provided for objects that are to meet, thereby improving the efficiency with which the objects meet.

Description

Navigation method and device
Technical Field
Implementations of the present disclosure relate to the field of intelligent transportation, and more particularly, to navigation methods, apparatuses, devices, storage media, and program products.
Background
In daily life, people often need to travel to another location to meet a particular object. For example, in a transit trip scenario, a passenger may need to walk to a pickup point designated in the trip in order to board the service vehicle. However, such a meeting location may be automatically specified by the system and is not necessarily familiar to the user. Accordingly, the user may have trouble traveling to such a meeting location.
Disclosure of Invention
Embodiments of the present disclosure provide a solution for navigation.
In a first aspect of the disclosure, a navigation method is provided. The method comprises: obtaining a destination location associated with a first object, wherein the destination location is further associated with a second object, the destination location indicating a predetermined location where the first object can meet the second object; determining a current location of the first object; and providing a navigation interface including a navigation element superimposed on a live-action image associated with the current location, the navigation element being determined based on the destination location and the current location.
In a second aspect of the disclosure, a navigation device is provided. The device includes: a destination location acquisition module configured to acquire a destination location associated with the first object, wherein the destination location is further associated with the second object, the destination location indicating a predetermined location where the first object can meet the second object; a current location determination module configured to determine a current location of the first object; and a navigation module configured to provide a navigation interface including a navigation element superimposed on the live-action image associated with the current location, the navigation element determined based on the destination location and the current location.
In a third aspect of the present disclosure, there is provided an electronic device comprising one or more processors and memory for storing computer-executable instructions for execution by the one or more processors to implement a method according to the first aspect of the present disclosure.
In a fourth aspect of the present disclosure, a computer-readable storage medium is provided having computer-executable instructions stored thereon, wherein the computer-executable instructions, when executed by a processor, implement a method according to the first aspect of the present disclosure.
In a fifth aspect of the present disclosure, a computer program product is provided comprising computer executable instructions, wherein the computer executable instructions, when executed by a processor, implement the method according to the first aspect of the present disclosure.
According to embodiments of the present disclosure, more intuitive navigation can be provided for objects that are to meet, thereby improving the efficiency with which the objects meet.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the disclosure, nor is it intended to be used to limit the scope of the disclosure.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIGS. 1A-1C illustrate schematic diagrams of example environments in which some embodiments of the present disclosure can be implemented;
FIG. 2 illustrates a flow diagram of an example navigation process, according to some embodiments of the present disclosure;
FIG. 3 illustrates a flow diagram of an example process of determining a current location, in accordance with some embodiments of the present disclosure;
FIG. 4 illustrates a schematic diagram of an example navigation interface, in accordance with some embodiments of the present disclosure;
FIG. 5 shows a schematic view of an example navigation interface, according to further embodiments of the present disclosure;
FIG. 6 illustrates a schematic diagram of an example navigation interface, in accordance with further embodiments of the present disclosure;
FIG. 7 shows a schematic block diagram of a navigation device according to some embodiments of the present disclosure; and
FIG. 8 illustrates a block diagram of a computing device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
In describing embodiments of the present disclosure, the term "include" and its variants should be interpreted as open-ended, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first," "second," and the like may refer to different or the same objects. Other explicit and implicit definitions may also be included below.
As discussed above, in a scenario where different objects need to meet at a particular location, the meeting location may be a location with which the objects are unfamiliar, making it difficult for an object to travel efficiently to the designated meeting location. Some conventional approaches guide the object's travel by means of ordinary map navigation. However, when the user is unfamiliar with the meeting location, it is still difficult for the user to travel accurately to it.
In view of this, embodiments of the present disclosure propose a navigation scheme. In this scheme, a destination location associated with a first object is first obtained, wherein the destination location is also associated with a second object and indicates a predetermined location where the first object can meet the second object. Subsequently, a current location of the first object is determined, and a navigation interface is provided, wherein the navigation interface includes a navigation element superimposed on a live-action image associated with the current location, the navigation element being determined based on the destination location and the current location.
According to this scheme, the live-action images around an object can be utilized to provide more intuitive navigation for objects that are to meet, thereby improving the efficiency with which the objects meet.
Some example embodiments of the disclosure will now be described with continued reference to the accompanying drawings.
Example Environment
Reference is first made to fig. 1A-1C, which illustrate schematic diagrams of environments in which embodiments of the present disclosure may be implemented.
FIG. 1A illustrates a first example environment 100A according to an embodiment of the disclosure. In the example environment 100A of fig. 1A, the first object 110 may comprise a user and the second object 120 may comprise a vehicle for serving a travel trip associated with the first object 110.
As shown in fig. 1A, first object 110 may be associated with terminal device 140 (also referred to as a first terminal device). The terminal device 140 may be, for example, a mobile device with positioning capabilities, such as a smartphone, a tablet, a personal digital assistant PDA, or a smart wearable device (e.g., a smart watch, smart glasses, a smart bracelet, etc.).
In the example of fig. 1A, the terminal device 140 may determine that the first object 110 needs to travel to the destination location 130 to join with the second object 120. In some implementations, the destination location 130 may be determined, for example, based on a travel order for the first object 110.
Illustratively, the terminal device 140 may obtain a travel order for the first object 110 and take a stop location, e.g., a pickup point, specified by the travel platform as the destination location 130.
In some implementations, the stop location may be automatically determined, for example, by the travel platform based on the starting location in the travel order. For example, a stop location near the start of the travel order and not falling within a no-stopping zone may be automatically provided. In such a case, the destination location 130 is typically represented as absolute coordinate values, which makes it difficult for the first object to accurately understand where the destination location 130 actually is.
In still other examples, the first object 110 may not board the vehicle after the first object 110 and the second object 120 have met. Illustratively, the second object 120 may be a goods-delivery vehicle that meets the first object 110 in order to load the goods that the first object 110 needs to transport and then carries the goods to the destination of the designated trip.
As shown in fig. 1A, the terminal device 140 may provide a navigation interface 150 when the first object 110 begins traveling to the destination location 130. In some implementations, the terminal device 140 may obtain at least one navigation element for guiding the first object to travel to the destination location 130. Such a navigation element may be determined, for example, by a suitable computing device (e.g., terminal device 140 or a server providing navigation services) based on the current location of first object 110 and destination location 130. It should be appreciated that in case the navigation element is directly determined by the terminal device 140, this may enable the terminal device 140 to guarantee a proper provision of navigation also without a communication network.
Additionally, in navigation interface 150, terminal device 140 may overlay such navigation elements on the live-action image associated with the current location. In some implementations, the live-action image may be acquired by an image capture device of the terminal device 140. For example, the first object 110 may, for example, hold a terminal device 140, such as a smartphone, while the first object 110 travels to the destination location 130. The terminal device 140 may keep the image capturing apparatus turned on during the travel of the first object 110 to capture a live-action image of the current location and present the navigation interface 150 through the display device. Such a navigation interface 150 may include a captured live-action image and navigation elements displayed superimposed on the live-action image.
In some implementations, the terminal device 140 may include, for example, smart glasses or other smart head mounted devices. Accordingly, as the first object 110 travels toward the destination location 130, the terminal device 140 may acquire the at least one navigation element and superimpose the at least one navigation element on the corresponding live-action image in real-time using smart glasses. It should be understood that in this case, the live view image is an image of the environment that is actually seen by the first object 110, not a digital image captured by the terminal device 140.
In some implementations, the first object 110 may travel to the destination location 130 in an appropriate manner. For example, the first object 110 may travel to the destination location 130 by walking. Alternatively, the first object 110 may also travel to the destination location 130 by way of a ride.
In this way, embodiments of the present disclosure can efficiently guide the user to the designated service location in the travel order, such as a boarding location or a pickup location, thereby improving the efficiency with which the user meets the vehicle serving the order.
FIG. 1B illustrates a second example environment 100B according to another embodiment of this disclosure. In the example environment 100B of FIG. 1B, the first object 110 and the second object 120 may comprise a group of users who are to meet.
As shown in fig. 1B, the first object 110 may be associated with a terminal device 140. The terminal device 140 may be, for example, a mobile device with positioning capabilities, such as a smartphone, a tablet, a personal digital assistant PDA, or a smart wearable device (e.g., a smart watch, smart glasses, a smart bracelet, etc.).
In the example of fig. 1B, the terminal device 140 may determine that the first object 110 needs to travel to the destination location 130 to meet the second object 120. In some implementations, the destination location 130 may be a meeting location automatically determined by the system for the group of users. For example, the system may automatically determine a meeting location as the destination location 130 based on the current location of each user in the group. Alternatively, the destination location 130 may be specified by one of the users in the group. In either case, some users may be unfamiliar with the meeting location specified by the system or by another user, which may make it difficult for the first object 110 to accurately understand the exact location of the destination location 130.
As shown in fig. 1B, the terminal device 140 may provide a navigation interface 150 when the first object 110 begins traveling to the destination location 130. In some implementations, the terminal device 140 may obtain at least one navigation element for guiding the first object to travel to the destination location 130. Such a navigation element may be determined, for example, by a suitable computing device (e.g., terminal device 140 or a server providing navigation services) based on the current location of first object 110 and destination location 130. It should be appreciated that in case the navigation element is directly determined by the terminal device 140, this may enable the terminal device 140 to guarantee a proper provision of navigation also without a communication network.
Additionally, in navigation interface 150, terminal device 140 may overlay such navigation elements on the live-action image associated with the current location. In some implementations, the live-action image may be acquired by an image capture device of the terminal device 140. For example, the first object 110 may, for example, hold a terminal device 140, such as a smartphone, while the first object 110 travels to the destination location 130. The terminal device 140 may keep the image capturing apparatus turned on during the travel of the first object 110 to capture a live-action image of the current location and present the navigation interface 150 through the display device. Such a navigation interface 150 may include a captured live-action image and navigation elements displayed superimposed on the live-action image.
In some implementations, the terminal device 140 may include, for example, smart glasses or other smart head mounted devices. Accordingly, as the first object 110 travels toward the destination location 130, the terminal device 140 may acquire the at least one navigation element and superimpose the at least one navigation element on the corresponding live-action image in real-time using smart glasses. It should be understood that in this case, the live view image is an image of the environment that is actually seen by the first object 110, not a digital image captured by the terminal device 140.
In some implementations, the first object 110 may travel to the destination location 130 in an appropriate manner. For example, the first object 110 may travel to the destination location 130 by walking. Alternatively, the first object 110 may also travel to the destination location 130 by way of a ride.
In this way, embodiments of the present disclosure can efficiently guide users to the meeting place designated for a group of users, thereby improving the efficiency with which the users meet.
FIG. 1C illustrates a third example environment 100C according to an embodiment of the disclosure. In the example environment 100C of fig. 1C, the first object 110 may comprise a user and the second object 120 may comprise a vehicle, such as an automobile, an electric vehicle, or a bicycle, etc., parked at a particular location.
As shown in fig. 1C, the first object 110 may be associated with a terminal device 140. The terminal device 140 may be, for example, a mobile device with positioning capabilities, such as a smartphone, a tablet, a personal digital assistant PDA, or a smart wearable device (e.g., a smart watch, smart glasses, a smart bracelet, etc.).
In the example of fig. 1C, the terminal device 140 may determine that the first object 110 needs to travel to the destination location 130 to meet the second object 120. In some implementations, the terminal device 140 may obtain the parking location of the second object 120 from another computing device as the destination location. For example, after a particular user (which may be the first object 110, or a user other than the first object 110) parks the vehicle at a particular parking location, a terminal device associated with that user (e.g., the user's mobile device or the vehicle's on-board terminal) may automatically determine the current location as the parking location and upload it, for example, to a server, or send it directly to the terminal device 140.
Accordingly, the terminal device 140 may, for example, retrieve the parking location from the server or receive it from the terminal device associated with that user. Alternatively, the terminal device 140 may automatically record the parking location while the first object 110 participates in the parking process (e.g., as a driver or passenger).
When the first object 110 needs to find the vehicle, the terminal device 140 may automatically acquire the parking location of the second object 120 as the destination location 130. When the vehicle is parked in a location unfamiliar to the user, it is often difficult for the user to travel to the vehicle's parking location from memory alone. As a result, the user often spends a significant amount of time finding the vehicle.
As shown in fig. 1C, the terminal device 140 may provide a navigation interface 150 to guide the first object 110 to travel to the destination location 130, i.e., the parking location of the second object 120.
In some implementations, the terminal device 140 may obtain at least one navigation element for guiding the first object to travel to the destination location 130. Such a navigation element may be determined, for example, by a suitable computing device (e.g., terminal device 140 or a server providing navigation services) based on the current location of first object 110 and destination location 130. It should be appreciated that in case the navigation element is directly determined by the terminal device 140, this may enable the terminal device 140 to guarantee a proper provision of navigation also without a communication network.
Additionally, in navigation interface 150, terminal device 140 may overlay such navigation elements on the live-action image associated with the current location. In some implementations, the live-action image may be acquired by an image capture device of the terminal device 140. For example, the first object 110 may, for example, hold a terminal device 140, such as a smartphone, while the first object 110 travels to the destination location 130. The terminal device 140 may keep the image capturing apparatus turned on during the travel of the first object 110 to capture a live-action image of the current location and present the navigation interface 150 through the display device. Such a navigation interface 150 may include a captured live-action image and navigation elements displayed superimposed on the live-action image.
In some implementations, the terminal device 140 may include, for example, smart glasses or other smart head mounted devices. Accordingly, as the first object 110 travels toward the destination location 130, the terminal device 140 may acquire the at least one navigation element and superimpose the at least one navigation element on the corresponding live-action image in real-time using smart glasses. It should be understood that in this case, the live view image is an image of the environment that is actually seen by the first object 110, not a digital image captured by the terminal device 140.
In some implementations, the first object 110 may travel to the destination location 130 in an appropriate manner. For example, the first object 110 may travel to the destination location 130 by walking. Alternatively, the first object 110 may also travel to the destination location 130 by way of a ride.
In this way, embodiments of the present disclosure can help a user efficiently find a parked vehicle, thereby reducing the time cost that the user needs to spend in the process.
Example scenarios in which embodiments of the present disclosure can be implemented have been described above in connection with fig. 1A-1C. It is to be understood that the navigation scheme according to the present disclosure may also be used in other suitable meeting scenarios without departing from the spirit of the present disclosure.
Example procedure
A navigation process according to an embodiment of the present disclosure will be described in detail below with reference to fig. 2 to 6. Fig. 2 shows a schematic diagram of a navigation process 200 according to some embodiments of the present disclosure. For ease of discussion, the navigation process is discussed with reference to fig. 1A-1C. Process 200 may be performed, for example, at the terminal device 140 shown in fig. 1A-1C. It should be understood that process 200 may also include blocks not shown and/or may omit blocks shown. The scope of the present disclosure is not limited in this respect.
As shown in fig. 2, at block 202, the terminal device 140 obtains a destination location 130 associated with the first object 110, wherein the destination location 130 is also associated with the second object 120, the destination location 130 indicating a predetermined location where the first object 110 can meet the second object 120.
As discussed with reference to fig. 1A-1C, the destination location 130 may be a stop location associated with a travel order, a meeting location for a group of users, or a parking location of a vehicle, among others. It should be understood that in the present disclosure, the destination location 130 is used for a meeting between objects and is typically specified automatically by the system, which makes it difficult for the user to travel to the destination location 130 accurately on their own.
As discussed above, the terminal device 140 may determine the destination location 130 in an appropriate manner and will not be described again. In some implementations, the destination location may be represented, for example, using coordinates consisting of latitude and longitude.
At block 204, the terminal device 140 determines the current location of the first object 110. In some implementations, the terminal device 140 may periodically determine the current location of the first object, for example, at a frequency of 1Hz, thereby enabling more real-time and accurate navigation of the first object.
Depending on the particular scenario, the terminal device 140 may utilize appropriate positioning techniques to determine the current location of the first object 110. For example, in an outdoor walking or cycling navigation scenario, the terminal device 140 may determine the current location using, for example, GPS positioning technology and inertial navigation positioning technology, among others. In an indoor navigation scenario, the terminal device 140 may also acquire the current location based on, for example, visual positioning or UWB positioning technology or the like.
However, in an outdoor positioning scenario, instability of the GPS signal may cause large jitter or bias in the positioning result. To improve positioning accuracy, the terminal device 140 may also obtain a more accurate current location by means of fusion filtering. The specific process of block 204 will be described below with reference to fig. 3.
As shown in FIG. 3, at block 302, the terminal device 140 may obtain positioning information of the terminal device 140 associated with the first object 110, wherein the positioning information includes inertial navigation dead reckoning information and assisted positioning information. In some implementations, the auxiliary positioning information may include at least one of: GPS positioning information, visual positioning information, or prediction information of a positioning model. Additionally, the positioning model may be a machine learning model and configured to determine an expected position or an expected velocity of the first object 110 based on inertial navigation sensor data of the terminal device 140.
In some implementations, in the context of walking navigation, the terminal device 140 may determine inertial dead reckoning information based on PDR (pedestrian dead reckoning) techniques. Additionally, the terminal device 140 may also acquire GPS positioning information or visual positioning information and correct the position determined via the PDR technique using the GPS positioning information or visual positioning information.
In some implementations, terminal device 140 may also utilize a location model to obtain the prediction information. Illustratively, the positioning model may acquire inertial navigation sensor data, e.g., gyroscope data and/or accelerometer data, of the terminal device 140 over a predetermined period of time in the past and may be able to predict a position (with a known initial position) or velocity of the first object at that time based on the inertial navigation sensor data.
In some implementations, the positioning model is trained based on training inertial navigation sensor data acquired by a first device and corresponding true position information determined by a second device having a positioning accuracy above a predetermined threshold, the first and second devices being physically coupled for synchronous motion to establish a correlation between the acquired training inertial navigation sensor data and the true position information.
For example, in training a positioning model, data for training may be acquired with a first device having inertial navigation sensors and a second device having more accurate positioning capabilities. In particular, the first device and the second device may be physically coupled such that both are associated with the same physical location. For example, the first device and the second device may be physically bound.
Additionally, the first device and the second device perform a predetermined motion and acquire inertial navigation sensor data of the first device and positioning data of the second device. In some implementations, the training data may be formed by appropriate clock alignment techniques such that inertial navigation sensor data is correlated to corresponding positioning data.
Such training data may be input to an appropriate machine learning model (including, but not limited to, a deep neural network, a convolutional neural network, a support vector machine model, a decision tree model, or the like). Specifically, the training inertial navigation sensor data for a predetermined period of time may be used as the input features of the model, and the true position information acquired by the second device and/or velocity information derived from the true position information may be used as the reference truth (ground truth) of the model, so that the model is trained until the difference between the position and/or velocity it predicts from the input features and the ground truth is less than a predetermined threshold. In this way, a predictive model may be obtained that is capable of predicting a current velocity and/or a current position from inertial sensor data over a period of time.
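Illustratively, a minimal training sketch is shown below. It is only an illustration rather than the disclosed implementation: the window length, sensor layout, network structure, and synthetic data are assumptions, and any regression model that maps an inertial sensor window to a velocity or position increment could be used in the same way.

```python
# Minimal sketch (assumptions: 1 s windows at 100 Hz, 6-axis IMU, GRU regressor,
# synthetic data standing in for time-aligned first-device IMU windows and
# second-device ground-truth velocities).
import torch
import torch.nn as nn

WINDOW = 100   # assumed: 1 s of IMU data at 100 Hz
IMU_DIM = 6    # 3-axis gyroscope + 3-axis accelerometer

class VelocityRegressor(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(IMU_DIM, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # predicted (vx, vy)

    def forward(self, imu_window):
        _, h = self.rnn(imu_window)        # h: (num_layers, batch, hidden)
        return self.head(h[-1])

model = VelocityRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder training data; in practice these would come from the physically
# coupled first and second devices after clock alignment.
imu_windows = torch.randn(256, WINDOW, IMU_DIM)
true_velocity = torch.randn(256, 2)

for epoch in range(10):
    optimizer.zero_grad()
    pred = model(imu_windows)
    loss = loss_fn(pred, true_velocity)    # difference between prediction and ground truth
    loss.backward()
    optimizer.step()
```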
At block 304, the terminal device 140 may correct the inertial navigation dead reckoning information based on the auxiliary positioning information to determine the current position of the first object 110. In some implementations, the terminal device 140 may adjust the state equations used to determine the inertial navigation position estimate information with the auxiliary positioning information as a constraint. Illustratively, the inertial navigation position estimate information is determined based on a Kalman filter.
In some implementations, the terminal device 140 may determine a correction amount for the inertial navigation position estimate information based on a difference between the auxiliary positioning information and the inertial navigation position estimate information, for example.
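Illustratively, a minimal sketch of such fusion is given below. The noise parameters, step model, and two-dimensional state are assumptions; the sketch only shows how an auxiliary position fix corrects a dead-reckoned position through a Kalman-style gain.

```python
# Minimal sketch (assumed parameters): correcting a PDR dead-reckoned position with a
# GPS (or visual) fix via a simple Kalman-style update on a 2D position state.
import numpy as np

x = np.array([0.0, 0.0])          # dead-reckoned position (east, north) in metres
P = np.eye(2) * 1.0               # estimate covariance
Q = np.eye(2) * 0.5               # assumed process noise per PDR step
R = np.eye(2) * 9.0               # assumed GPS noise (~3 m standard deviation)

def pdr_predict(x, P, step_length, heading_rad):
    """Propagate the state with one pedestrian dead-reckoning step."""
    x = x + step_length * np.array([np.sin(heading_rad), np.cos(heading_rad)])
    P = P + Q
    return x, P

def fuse_fix(x, P, z):
    """Correct the dead-reckoned state with an auxiliary position fix z."""
    innovation = z - x                       # difference between fix and prediction
    K = P @ np.linalg.inv(P + R)             # Kalman gain
    x = x + K @ innovation                   # correction amount applied to the estimate
    P = (np.eye(2) - K) @ P
    return x, P

# Example: two PDR steps heading roughly north-east, then a GPS fix pulls the estimate back.
x, P = pdr_predict(x, P, step_length=0.7, heading_rad=np.deg2rad(45))
x, P = pdr_predict(x, P, step_length=0.7, heading_rad=np.deg2rad(45))
x, P = fuse_fix(x, P, z=np.array([1.2, 0.8]))
```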
Based on the above-discussed fusion positioning technology, embodiments of the present disclosure can eliminate interference due to GPS signal loss or drift, thereby improving positioning accuracy.
With continued reference to fig. 2, at block 206, the terminal device 140 provides the navigation interface 150, wherein the navigation interface 150 includes a navigation element superimposed on the live-action image associated with the current location, the navigation element determined based on the destination location 130 and the current location.
FIG. 4 illustrates an example navigation interface 150 according to an embodiment of this disclosure. The navigation interface 150 may be presented, for example, by a display device of the terminal device 140. It should be understood that in the case of smart glasses or another smart head mounted device, the presentation of the navigation interface 150 may differ.
As shown in fig. 4, the navigation interface 150 may include a live view image 410. As discussed above, the live-action image 410 may be captured by the terminal device 140, for example. In some implementations, navigation elements 415-1 and 415-2 are also included in navigation interface 150 that are superimposed on live-action image 410.
As shown in FIG. 4, the navigation element 415-1 may be, for example, an orientation element that indicates the direction of the destination location 130 relative to the current location. Presenting a directional element rather than generating a complete navigation path may reduce the computational load of the terminal device 140. Furthermore, given that in certain scenarios, such as a travel scenario, the distance between the first object 110 and the destination location 130 is typically relatively small, the directional element alone can effectively guide the first object 110 to the destination location 130.
In some implementations, the navigation element 415-2 may include, for example, a text element that indicates the distance of the destination location 130 from the current location. This enables the first object 110 to better estimate the time required to travel to the destination location 130.
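Illustratively, the quantities behind the orientation element 415-1 and the distance text of element 415-2 can be computed from the two latitude/longitude coordinates as sketched below (the coordinate values in the example are arbitrary):

```python
# Minimal sketch: bearing of the destination relative to the current location and the
# great-circle (haversine) distance between them, from latitude/longitude coordinates.
import math

def bearing_and_distance(cur_lat, cur_lon, dst_lat, dst_lon):
    """Return (bearing in degrees clockwise from north, distance in metres)."""
    phi1, phi2 = math.radians(cur_lat), math.radians(dst_lat)
    dlon = math.radians(dst_lon - cur_lon)

    # Initial bearing toward the destination.
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

    # Haversine distance on a spherical Earth (radius 6371 km).
    dphi = phi2 - phi1
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    distance = 2 * 6371000.0 * math.asin(math.sqrt(a))
    return bearing, distance

bearing, distance = bearing_and_distance(39.9042, 116.4074, 39.9060, 116.4100)
print(f"destination is {distance:.0f} m away, bearing {bearing:.0f} degrees")
```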
In still other implementations, the navigation interface 150 may also include other suitable navigation elements; such navigation elements may, for example, be determined based on a real-time path from the current location to the destination location 130. Illustratively, the navigation interface 150 may include a path navigation element to guide the first object 110 along the path to the destination location 130.
In some implementations, as shown in FIG. 4, the navigation interface 150 may also include a two-dimensional map portion 420, where the two-dimensional map portion 420 includes visual elements 425-1 and 425-2 corresponding to the current location of the first object 110 and the destination location 130.
In some implementations, the two-dimensional map portion 420 also includes, for example, a second visual element 430 for indicating a direction of the destination location 130 relative to the current location.
In general, by combining the two-dimensional map with the live-action image guidance, embodiments of the present disclosure enable the first object to intuitively understand the route it needs to take, thereby improving the accuracy of navigation and the friendliness of user interaction.
In some implementations, the two-dimensional map portion 420 can be presented or collapsed in the navigation interface 150, for example, in response to a predetermined operation on the navigation interface 150. Illustratively, the two-dimensional map portion 420 may default to a presented state, and the first object 110 may, for example, click the collapse control 440 to collapse the two-dimensional map portion 420, thereby leaving a larger area for presenting the live-action image 410. Subsequently, the two-dimensional map portion 420 can be re-presented in response to a particular operation by the first object 110 (e.g., a swipe up from the bottom).
Alternatively, the two-dimensional map portion 420 may default to a collapsed state, and the first object 110 may perform a predetermined operation (e.g., swipe up from the bottom) to cause the two-dimensional map portion 420 to be presented. It should be appreciated that these specific interactions are merely illustrative, and any suitable interaction design may be employed to collapse or present the two-dimensional map portion 420.
In some implementations, the terminal device 140 can also determine an orientation angle of the terminal device 140, and in response to the orientation angle differing from the direction angle of the destination location 130 relative to the current location by more than a predetermined threshold, present a first reminder in the navigation interface 150 that the current direction of travel of the first object 110 may be wrong.
Illustratively, taking fig. 4 as an example, the orientation element 415-1 indicates that the current orientation of the terminal device 140 is appropriate, i.e., that the direction of travel of the first object 110 is correct. Conversely, if the orientation of the terminal device 140 were opposite to that shown in fig. 4, the terminal device 140 could present a reminder in the navigation interface 150 that the destination location 130 is directly behind the current orientation of the first object 110, for example, and remind the user to adjust the direction of travel.
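Illustratively, the decision to show the first reminder can be sketched as follows, where the 90-degree threshold is an assumption and the angle comparison is made wrap-around safe:

```python
# Minimal sketch (assumed threshold): decide whether to show the wrong-direction
# reminder by comparing the device's orientation angle with the bearing to the destination.
def needs_direction_reminder(device_heading_deg, destination_bearing_deg, threshold_deg=90.0):
    """True if the two headings differ by more than the threshold (wrap-around safe)."""
    diff = abs(device_heading_deg - destination_bearing_deg) % 360.0
    diff = min(diff, 360.0 - diff)          # fold into [0, 180]
    return diff > threshold_deg

# Facing south (180 deg) while the destination lies to the north (0 deg): remind the user.
assert needs_direction_reminder(180.0, 0.0)
assert not needs_direction_reminder(10.0, 350.0)   # only 20 degrees off, no reminder
```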
In some implementations, the navigation interface 150 may also present point of interest information associated with the live-action image 410, for example. Illustratively, the terminal device 140 may determine at least one point of interest associated with the live-action image 410. FIG. 5 shows a schematic diagram illustrating an example navigation interface, according to further embodiments of the present disclosure. As shown in fig. 5, the terminal device 140 may determine that the point of interest "XX coffee shop" is included in the live-action image 410 based on, for example, map information or visual recognition technology.
Additionally, the terminal device 140 may present information 520 associated with the at least one point of interest at a location in the live-action image 410 corresponding to the at least one point of interest. As shown in fig. 5, the terminal device 140 may present information 520, for example, in an image area corresponding to the point of interest "XX coffee shop". It should be understood that the specific content of information 520 in fig. 5 is merely illustrative, and any suitable information related to the point of interest may be presented as desired.
It should be appreciated that in the case of smart glasses, the information 520 may be presented as an overlay within the real field of view of the first object 110, without the need to capture or display a digital image.
In this manner, embodiments of the present disclosure are also able to provide the first object 110 with relevant point-of-interest information while it travels to the destination location 130. For example, when the time until the meeting may still be relatively long, this can help the first object 110 more conveniently find certain desired points of interest; for example, the first object 110 may wait at a coffee shop for the second object 120 to reach the meeting location.
In some implementations, the terminal device 140 may also present a conversation area 510 in association with the live-action image 410 in the navigation interface 150. In some implementations, the conversation area 510 may present, for example, a message from a second terminal device associated with the second object 120. As shown in fig. 5, the conversation area 510 can present, for example, a message sent by the driver of the vehicle through the driver's terminal device.
Alternatively or additionally, the conversation area 510 may also be used to compose a message to be sent to the second terminal device. For example, the first object 110 may reply to a message from the second terminal device through the conversation area 510 or actively send a message to the second terminal device.
In this way, the meeting of different objects at the destination location can be facilitated more effectively, thereby improving the efficiency with which the objects meet.
In some implementations, to avoid the conversation area 510 affecting the display of the live-action image 410, the conversation area 510 may, for example, be presented only under certain conditions. In some implementations, when the terminal device 140 receives an operation to bring up the conversation, the terminal device 140 may present the conversation area 510 in the navigation interface 150. Alternatively or additionally, when the terminal device 140 receives a message from the second terminal device, the terminal device 140 may also automatically present the conversation area 510 in the navigation interface 150 to display the received message.
In some implementations, the conversation area 510 can also be collapsed, for example, automatically or manually. Illustratively, when the terminal device 140 does not receive a user operation with respect to the conversation area 510 within a predetermined period of time and does not receive a message from the second terminal device, the conversation area 510 may be automatically collapsed to avoid interfering with the first object 110's view of the navigation interface 150.
FIG. 6 illustrates a schematic diagram of an example navigation interface according to further embodiments of the present disclosure. As shown in fig. 6, when the distance between the current location of the first object 110 and the destination location 130 is less than a predetermined threshold, the terminal device 140 may, for example, present a reminder that the first object has reached the vicinity of the destination location, such as the text "You have reached the vicinity of the destination, please wait patiently."
In some implementations, the terminal device 140 may also present object information 610 associated with the second object 120 in the navigation interface 150 when it is determined that the distance between the current location and the destination location 130 is less than the predetermined threshold. As shown in fig. 6, the object information 610 may include, for example, appearance information for describing appearance features of the second object.
Taking the case where the second object 120 is a vehicle as an example, the appearance information may include, for example, the license plate number, color, model, or the like of the vehicle. It should be understood that the terminal device 140 may obtain such appearance information based on, for example, the travel order for the first object.
Taking the case where the second object 120 is a user as an example, the appearance information may include, for example, that user's gender, height, clothing, and the like. Such appearance information may, for example, be actively uploaded to a server by the other user for acquisition by the terminal device 140, or sent directly to the terminal device 140.
In some implementations, the object information may also include, for example, state information that describes a state related to the second object 120. Examples of status information include, but are not limited to: whether the second object 120 has arrived, the current location of the second object 120, the distance of the second object 120 from the destination location 130, the time at which the second object 120 is expected to arrive at the destination location 130, or any combination thereof.
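Illustratively, the status information can be assembled as sketched below; the arrival radius, the assumed average speed, and the field names are illustrative only:

```python
# Minimal sketch (names and speed are assumptions): assembling status information about
# the second object, including whether it has arrived at the destination and a rough
# expected time of arrival derived from its remaining distance.
from dataclasses import dataclass

@dataclass
class ObjectStatus:
    has_arrived: bool
    distance_to_destination_m: float
    eta_seconds: float

def build_status(distance_to_destination_m, average_speed_mps, arrival_radius_m=20.0):
    """Derive status info from the second object's remaining distance and an assumed speed."""
    has_arrived = distance_to_destination_m <= arrival_radius_m
    eta = 0.0 if has_arrived else distance_to_destination_m / max(average_speed_mps, 0.1)
    return ObjectStatus(has_arrived, distance_to_destination_m, eta)

status = build_status(distance_to_destination_m=450.0, average_speed_mps=8.0)
print(f"arrived: {status.has_arrived}, ETA ~{status.eta_seconds:.0f} s")
```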
By providing the object information of the second object 120, the efficiency with which the first object 110 and the second object 120 meet can be further improved.
In some implementations, the terminal device 140 may also utilize the appearance information to determine whether the second object 120 is present in the live-action image 410. In some implementations, upon determining that the distance of the second object 120 from the destination location 130 is less than a predetermined threshold, the terminal device 140 may perform object detection on the live-action image 410 based on the appearance information to determine whether the second object 120 is present in the live-action image 410. In this way, the terminal device 140 can be prevented from performing unnecessary computation.
In some implementations, upon determining that the second object 120 is present in the live-action image 410, the terminal device 140 may present a third visual element in a corresponding area of the live-action image 410, wherein the third visual element indicates a position of the second object 120 in the live-action image 410.
Illustratively, when the terminal device 140 identifies, based on the appearance information (e.g., one or more of the license plate number, vehicle model, and color), that the live-action image 410 includes the corresponding second object 120 (e.g., a vehicle), the terminal device 140 may, for example, draw an outline around the boundary of the second object 120 in the live-action image 410 to indicate the location of the second object 120.
Alternatively, the terminal device 140 may also indicate the current position of the second object 120 using a visual element 620, such as a pin, to guide the first object 110 to merge with the second object 120.
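Illustratively, the appearance-based matching that decides where to place the third visual element (or the pin 620) can be sketched as follows; the detection format and the license plate values are assumptions, and the detections themselves would in practice come from an object detection and plate recognition model:

```python
# Minimal sketch (detector output format is assumed): matching appearance information
# (license plate, color) against candidate vehicle detections in the live-action image.
def find_second_object(detections, appearance):
    """Return the bounding box of the detection matching the appearance info, or None.

    detections: list of dicts like {"bbox": (x, y, w, h), "plate": "...", "color": "..."}
    appearance: dict like {"plate": "...", "color": "..."} taken from the travel order.
    """
    for det in detections:
        plate_ok = appearance.get("plate") is None or det.get("plate") == appearance["plate"]
        color_ok = appearance.get("color") is None or det.get("color") == appearance["color"]
        if plate_ok and color_ok:
            return det["bbox"]
    return None

detections = [
    {"bbox": (120, 340, 200, 90), "plate": "京A12345", "color": "white"},
    {"bbox": (560, 360, 180, 80), "plate": "京B67890", "color": "black"},
]
bbox = find_second_object(detections, {"plate": "京B67890", "color": "black"})
# bbox is where the third visual element (e.g., an outline around the vehicle) would be drawn.
```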
In some implementations, terminal device 140 may also present a street view picture associated with destination location 130 in navigation interface 150. In particular, as the first object 110 approaches the destination location 130, the first object 110 may also desire to be able to accurately locate the destination location 130 in the live-action image 410.
In some implementations, when the distance between the current location of the first object and the destination location 130 is less than a predetermined threshold, the terminal device 140 can automatically present a street view picture associated with the destination location 130 in the navigation interface 150.
In still other implementations, the terminal device 140 may also present a street view picture associated with the destination location 130 in the navigation interface 150 when a predetermined operation for viewing the street view picture is received on the navigation interface 150.
In some implementations, the street view picture may be presented as an overlay over a predetermined area of the live-action image 410, where the predetermined area is determined based on the destination location. For example, the terminal device 140 may determine the predetermined region of the image coordinate system in which the street view picture is displayed based on the direction and distance of the destination location from the current location. In some implementations, the size of the predetermined area may also change dynamically, for example, according to the distance of the current location from the destination location.
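Illustratively, the predetermined area can be derived from the relative bearing and the distance as sketched below; the field of view, the clamping bounds, and the vertical placement are assumptions:

```python
# Minimal sketch (all parameters are assumptions): placing the street-view overlay over a
# region of the live-action image based on the destination's bearing relative to the
# device heading, with the overlay shrinking as the destination gets farther away.
def streetview_overlay_rect(image_w, image_h, device_heading_deg, dest_bearing_deg,
                            distance_m, horizontal_fov_deg=60.0):
    """Return (x, y, w, h) of the overlay region in image (pixel) coordinates."""
    # Horizontal offset: project the relative bearing onto the camera's field of view.
    rel = (dest_bearing_deg - device_heading_deg + 180.0) % 360.0 - 180.0   # [-180, 180)
    rel = max(-horizontal_fov_deg / 2, min(horizontal_fov_deg / 2, rel))    # clamp to FOV
    center_x = image_w * (0.5 + rel / horizontal_fov_deg)

    # Size: scale inversely with distance, clamped to reasonable bounds.
    scale = max(0.15, min(0.5, 50.0 / max(distance_m, 1.0)))
    w, h = image_w * scale, image_h * scale
    x = center_x - w / 2
    y = image_h * 0.25          # assumed: keep the overlay in the upper part of the view
    return int(x), int(y), int(w), int(h)

rect = streetview_overlay_rect(1080, 1920, device_heading_deg=30.0,
                               dest_bearing_deg=45.0, distance_m=80.0)
```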
In this way, the first object 110 can be more effectively assisted in locating the destination location, or locating environmental features surrounding the destination location.
In some implementations, in response to determining that the current location of the second object 120 is less than the predetermined threshold from the destination location 130, the terminal device 140 can also present an alert in the navigation interface that the second object 120 has reached the destination location 130.
Illustratively, when the pickup vehicle has arrived at the pickup location but the user is still some distance from the destination location, the terminal device 140 may generate a reminder in the navigation interface 150 informing the user that the vehicle has arrived. This can prompt the user to speed up appropriately, thereby avoiding an excessively long wait for the vehicle.
Based on the navigation process discussed above, embodiments of the present disclosure can provide more intuitive navigation for objects that are to meet by using the live-action images around the objects, thereby improving the efficiency with which the objects meet.
Example apparatus and devices
Embodiments of the present disclosure also provide corresponding apparatuses for implementing the above methods or processes. Fig. 7 shows a schematic block diagram of a navigation device 700 according to some embodiments of the present disclosure.
As shown in fig. 7, the apparatus 700 includes a destination location acquisition module 710 configured to acquire a destination location associated with the first object, wherein the destination location is further associated with the second object, the destination location indicating a predetermined location where the first object can meet the second object. The apparatus 700 further comprises a current location determining module 720 configured to determine a current location of the first object. Furthermore, the apparatus 700 further comprises a navigation module 730 configured to provide a navigation interface comprising a navigation element superimposed on the live-action image associated with the current location, the navigation element being determined based on the destination location and the current location.
In some implementations, the live-action image is acquired by a first terminal device associated with the first object.
In some implementations, the second object includes a vehicle for a travel service, and the destination location acquisition module 710 includes: an order resolution module configured to determine, based on a travel order associated with the first object, a stop location associated with the travel order as the destination location.
In some implementations, the stop location is automatically determined based on the travel order.
In some implementations, the second object is a parked vehicle, and the destination location acquisition module 710 includes: a parking position determination module configured to determine a parking position of the vehicle as the destination position.
In some implementations, the parking position is automatically recorded after the vehicle is completely parked.
In some implementations, the first object and the second object include a group of users who are to meet, and the destination location is a meeting location specified for the group of users.
In some implementations, the navigation interface provides walking navigation or cycling navigation to the destination for the first object.
In some implementations, the current location determination module 720 includes: a positioning information acquisition module configured to acquire positioning information of a first terminal device associated with the first object, the positioning information including inertial navigation dead reckoning information and auxiliary positioning information, the auxiliary positioning information including at least one of: GPS positioning information, visual positioning information, or prediction information of a positioning model, wherein the positioning model is a machine learning model configured to determine an expected position or an expected velocity of the first object based on inertial navigation sensor data of the first terminal device; and a correction module configured to correct the inertial navigation dead reckoning information based on the auxiliary positioning information to determine the current location of the first object.
In some implementations, the positioning model is trained based on training inertial navigation sensor data acquired by a first device and corresponding true position information determined by a second device having a positioning accuracy above a predetermined threshold, the first and second devices being physically coupled for synchronous motion.
In some implementations, the correction module includes: an adjustment module configured to adjust a state equation used to determine inertial navigation dead reckoning information with the auxiliary positioning information as a constraint.
In some implementations, the inertial navigation position estimate information is determined based on a kalman filter.
In some implementations, the navigation element includes an orientation element to indicate a direction of the destination location relative to the current location.
In some implementations, the navigation interface further includes a two-dimensional map portion, the two-dimensional map portion including first visual elements corresponding to the current location of the first object and the destination location.
In some implementations, the two-dimensional map portion also includes a second visual element for indicating a direction of the destination location relative to the current location.
In some implementations, the two-dimensional map portion can be presented or collapsed in the navigation interface in response to a predetermined operation on the navigation interface.
In some implementations, the apparatus 700 further includes: an angle determination module configured to determine a heading angle of the first terminal device; and a first alert module configured to present, in the navigation interface, a first alert that the current direction of travel of the first object may be wrong, in response to the difference between the heading angle and the direction angle of the destination location relative to the current location being greater than a predetermined threshold.
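One possible form of this check is sketched below, with wrap-around handled at 360 degrees; the threshold value is an assumption chosen for illustration.

    def heading_deviation(heading_deg, bearing_deg):
        """Smallest absolute angle between the device heading and the direction
        angle of the destination location relative to the current location."""
        diff = abs(heading_deg - bearing_deg) % 360.0
        return min(diff, 360.0 - diff)

    WRONG_WAY_THRESHOLD_DEG = 90.0  # illustrative threshold

    if heading_deviation(350.0, 150.0) > WRONG_WAY_THRESHOLD_DEG:
        print("You may be walking in the wrong direction")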
In some implementations, the apparatus 700 further includes: a point of interest determination module configured to determine at least one point of interest associated with the live-action image; and a point of interest information providing module configured to present information associated with the at least one point of interest at a location in the live-action image corresponding to the at least one point of interest.
In some implementations, the apparatus 700 further includes: a street view picture presentation module configured to present a street view picture associated with the destination location in response to at least one of: the distance between the current location of the first object and the destination location being less than a predetermined threshold; or receiving a predetermined operation for viewing the street view picture on the navigation interface.
In some implementations, the street view picture is presented as an overlay over a predetermined area of the live-action image, the predetermined area being determined based on the destination location.
In some implementations, the apparatus 700 further includes: an object information providing module configured to present object information associated with the second object in the navigation interface in response to determining that the distance between the current location and the destination location is less than a predetermined threshold.
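The proximity test can be sketched with a great-circle distance as below; the threshold and coordinates are illustrative assumptions.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two WGS-84 points."""
        r = 6371000.0
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    NEARBY_THRESHOLD_M = 50.0  # illustrative value

    current = (39.9075, 116.3972)
    destination = (39.9077, 116.3976)
    if haversine_m(*current, *destination) < NEARBY_THRESHOLD_M:
        print("Show appearance and status information for the second object")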
In some implementations, the object information includes at least one of: appearance information for describing appearance characteristics of the second object; and status information describing at least one of: whether the second object has arrived, a current location of the second object, a distance of the second object from the destination location, or a time at which the second object is expected to arrive at the destination location.
In some implementations, the apparatus 700 further includes: an appearance information acquisition module configured to acquire appearance information associated with the second object; and an identification module configured to determine whether the second object exists in the live-action image based on the appearance information.
In some implementations, the identification module includes: an object detection module configured to perform object detection for the live-action image based on the appearance information to determine whether the second object is present in the live-action image in response to determining that the distance of the second object from the destination location is less than a predetermined threshold.
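The present disclosure does not prescribe a particular detector. As a sketch only, the snippet below assumes that some detector has already produced candidate vehicles with coarse appearance attributes for the live-action frame, and shows how the expected appearance information could select the target among them; the data structures and sample values are hypothetical.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Detection:
        box: tuple          # (x, y, w, h) in live-action image coordinates
        color: str          # dominant colour predicted for the vehicle
        plate_suffix: str   # recognised trailing plate characters, if any

    def find_target_vehicle(detections: List[Detection],
                            expected_color: str,
                            expected_plate_suffix: str) -> Optional[Detection]:
        """Return the detection whose appearance matches the order information."""
        for det in detections:
            if (det.color == expected_color
                    and det.plate_suffix == expected_plate_suffix):
                return det
        return None

    candidates = [Detection((120, 300, 180, 90), "white", "A123"),
                  Detection((420, 310, 200, 100), "blue", "B889")]
    match = find_target_vehicle(candidates, "blue", "B889")
    if match:
        print("Highlight region", match.box, "in the live-action image")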
In some implementations, the apparatus 700 further includes: an object hinting module configured to, in response to determining that the second object is present in the live-action image, present a third visual element in a corresponding region of the live-action image, the third visual element indicating a position of the second object in the live-action image.
In some implementations, the apparatus 700 further includes: a second reminder module configured to present a second reminder that the first object has reached the vicinity of the destination location, in response to determining that the distance between the current location and the destination location is less than a predetermined threshold.
In some implementations, the apparatus 700 further includes: a conversation presentation module configured to present a conversation region in association with the live-action image, the conversation region configured to: present a message from a second terminal device associated with the second object; or generate a message to be sent to the second terminal device.
In some implementations, the conversation region is presented in response to at least one of: receiving a conversation wake-up operation; or receiving a message from the second terminal device.
In some implementations, the conversation region is automatically collapsed in response to at least one of: no user operation directed to the conversation region being received within a predetermined period of time; or no message being received from the second terminal device.
In some implementations, the apparatus 700 further includes: a third reminder module configured to present, in the navigation interface, a third reminder that the second object has reached the destination location, in response to determining that the distance between the current location of the second object and the destination location is less than a predetermined threshold.
The elements included in apparatus 700 may be implemented in a variety of ways, including software, hardware, firmware, or any combination thereof. In some embodiments, one or more of the elements may be implemented using software and/or firmware, such as machine-executable instructions stored on a storage medium. In addition to, or as an alternative to, machine-executable instructions, some or all of the elements in apparatus 700 may be implemented at least in part by one or more hardware logic components. By way of example, and not limitation, exemplary types of hardware logic components that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
Fig. 8 illustrates a block diagram of a computing device/server 800 in which one or more embodiments of the disclosure may be implemented. It should be understood that the computing device/server 800 illustrated in fig. 8 is merely exemplary and should not be construed as limiting in any way the functionality and scope of the embodiments described herein.
As shown in fig. 8, computing device/server 800 is in the form of a general purpose computing device. Components of computing device/server 800 may include, but are not limited to, one or more processors or processing units 810, memory 820, storage 830, one or more communication units 840, one or more input devices 850, and one or more output devices 860. The processing unit 810 may be a real or virtual processor and can perform various processes according to programs stored in the memory 820. In a multiprocessor system, multiple processing units execute computer-executable instructions in parallel to improve the parallel processing capability of computing device/server 800.
Computing device/server 800 typically includes a number of computer storage media. Such media may be any available media that are accessible by computing device/server 800, including, but not limited to, volatile and non-volatile media and removable and non-removable media. The memory 820 may be volatile memory (e.g., registers, cache, Random Access Memory (RAM)), non-volatile memory (e.g., Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory), or some combination thereof. Storage device 830 may be a removable or non-removable medium and may include a machine-readable medium, such as a flash drive, a magnetic disk, or any other medium that can be used to store information and/or data (e.g., training data) and that can be accessed within computing device/server 800.
Computing device/server 800 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in FIG. 8, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data media interfaces. Memory 820 may include a computer program product 825 having one or more program modules configured to perform the various methods or acts of the various embodiments of the disclosure.
Communication unit 840 enables communication with other computing devices over a communication medium. Additionally, the functionality of the components of computing device/server 800 may be implemented in a single computing cluster or multiple computing machines capable of communicating over a communications connection. Thus, computing device/server 800 may operate in a networked environment using logical connections to one or more other servers, network Personal Computers (PCs), or another network node.
The input device 850 may be one or more input devices such as a mouse, keyboard, or trackball. The output device 860 may be one or more output devices such as a display, speakers, or printer. Through communication unit 840, computing device/server 800 may also communicate, as desired, with one or more external devices (not shown) such as storage devices or display devices, with one or more devices that enable a user to interact with computing device/server 800, or with any device (e.g., a network card or modem) that enables computing device/server 800 to communicate with one or more other computing devices. Such communication may be performed via input/output (I/O) interfaces (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium is provided, on which one or more computer instructions are stored, wherein the one or more computer instructions are executed by a processor to implement the above-described method.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products implemented in accordance with the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing has described implementations of the present disclosure. The above description is illustrative rather than exhaustive and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen to best explain the principles of the implementations, their practical application, or improvements over technologies available in the marketplace, or to enable others of ordinary skill in the art to understand the implementations disclosed herein.

Claims (14)

1. A navigation method, comprising:
obtaining a destination location associated with a first object, wherein the destination location is further associated with a second object, the destination location indicating a predetermined location where the first object can meet the second object;
determining a current location of the first object; and
providing a navigation interface comprising a navigation element superimposed on a live-action image associated with the current location, the navigation element determined based on the destination location and the current location.
2. The method of claim 1, wherein the second object comprises a vehicle for travel services, and obtaining the destination location comprises:
determining, based on a travel order associated with the first object, a stop location associated with the travel order as the destination location.
3. The method of claim 1, wherein determining the current location of the first object comprises:
obtaining positioning information of a first terminal device associated with the first object, the positioning information including inertial navigation dead reckoning information and auxiliary positioning information, the auxiliary positioning information including at least one of: GPS positioning information, visual positioning information, or prediction information of a positioning model,
wherein the positioning model is a machine learning model configured to determine an expected position or an expected velocity of the first object based on inertial navigation sensor data of the first terminal device; and
based on the auxiliary positioning information, correcting the inertial navigation dead reckoning information to determine the current location of the first object.
4. The method of claim 3, wherein the positioning model is trained based on training inertial navigation sensor data acquired by a first device and corresponding true position information determined by a second device having a positioning accuracy above a predetermined threshold, the first and second devices being physically coupled for synchronous motion.
5. The method of claim 3, wherein correcting the inertial navigation dead reckoning information based on the auxiliary positioning information comprises:
adjusting a state equation used to determine the inertial navigation dead reckoning information, with the auxiliary positioning information as a constraint.
6. The method of claim 1, wherein the navigation element comprises an orientation element for indicating a direction of the destination location relative to the current location.
7. The method of claim 1, further comprising:
in response to determining that the distance between the current location and the destination location is less than a predetermined threshold, presenting object information associated with the second object in the navigation interface.
8. The method of claim 7, wherein the object information comprises at least one of:
appearance information for describing appearance features of the second object; and
status information describing at least one of: whether the second object has arrived, a current location of the second object, a distance of the second object from the destination location, or a time at which the second object is expected to arrive at the destination location.
9. The method of claim 1, further comprising:
obtaining appearance information associated with the second object; and
determining whether the second object exists in the live-action image based on the appearance information.
10. The method of claim 9, wherein determining whether the second object is present in the live-action image comprises:
in response to determining that the distance of the second object from the destination location is less than a predetermined threshold, performing object detection for the live-action image based on the appearance information to determine whether the second object is present in the live-action image.
11. A navigation device, comprising:
a destination location acquisition module configured to acquire a destination location associated with a first object, wherein the destination location is further associated with a second object, the destination location indicating a predetermined location where the first object can meet the second object;
a current location determination module configured to determine a current location of the first object; and
a navigation module configured to provide a navigation interface including a navigation element superimposed on a live-action image associated with the current location, the navigation element determined based on the destination location and the current location.
12. An electronic device, comprising:
a memory and a processor;
wherein the memory is configured to store one or more computer instructions, and wherein the one or more computer instructions, when executed by the processor, implement the method of any one of claims 1 to 10.
13. A computer readable storage medium having one or more computer instructions stored thereon, wherein the one or more computer instructions are executed by a processor to implement the method of any one of claims 1 to 10.
14. A computer program product comprising computer executable instructions, wherein the computer executable instructions, when executed by a processor, implement the method of any one of claims 1 to 10.
CN202110062452.2A 2021-01-18 2021-01-18 Navigation method and device Pending CN112781601A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110062452.2A CN112781601A (en) 2021-01-18 2021-01-18 Navigation method and device
PCT/CN2022/071005 WO2022152081A1 (en) 2021-01-18 2022-01-10 Navigation method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110062452.2A CN112781601A (en) 2021-01-18 2021-01-18 Navigation method and device

Publications (1)

Publication Number Publication Date
CN112781601A (en) 2021-05-11

Family

ID=75756387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110062452.2A Pending CN112781601A (en) 2021-01-18 2021-01-18 Navigation method and device

Country Status (2)

Country Link
CN (1) CN112781601A (en)
WO (1) WO2022152081A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022152081A1 (en) * 2021-01-18 2022-07-21 北京嘀嘀无限科技发展有限公司 Navigation method and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105371860A (en) * 2015-11-17 2016-03-02 广东欧珀移动通信有限公司 Method and terminal for generating navigation route
US20170139578A1 (en) * 2015-11-18 2017-05-18 Samsung Electronics Co., Ltd System and method for 360-degree video navigation
CN106920079A (en) * 2016-12-13 2017-07-04 阿里巴巴集团控股有限公司 Virtual objects distribution method and device based on augmented reality
CN108332765A (en) * 2018-01-18 2018-07-27 维沃移动通信有限公司 Share-car traffic path generation method and device
CN109655060A (en) * 2019-02-19 2019-04-19 济南大学 Based on the KF/FIR and LS-SVM INS/UWB Integrated Navigation Algorithm merged and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108088450A (en) * 2016-11-21 2018-05-29 北京嘀嘀无限科技发展有限公司 Air navigation aid and device
CN112781601A (en) * 2021-01-18 2021-05-11 北京嘀嘀无限科技发展有限公司 Navigation method and device

Also Published As

Publication number Publication date
WO2022152081A1 (en) 2022-07-21

Similar Documents

Publication Publication Date Title
CN108225348B (en) Map creation and moving entity positioning method and device
US11275447B2 (en) System and method for gesture-based point of interest search
US10145697B2 (en) Dynamic destination navigation system
US10134196B2 (en) Mobile augmented reality system
CN107563267B (en) System and method for providing content in unmanned vehicle
CN109461208B (en) Three-dimensional map processing method, device, medium and computing equipment
US9508199B2 (en) Mobile device communicating with motor vehicle system
EP3224574B1 (en) Street-level guidance via route path
KR102564430B1 (en) Method and device for controlling vehicle, and vehicle
CN109426800B (en) Lane line detection method and device
EP2915139B1 (en) Adaptive scale and gravity estimation
CN110779538B (en) Allocating processing resources across local and cloud-based systems relative to autonomous navigation
KR102390935B1 (en) Pick-up and drop-off location identification for ridesharing and delivery via augmented reality
US8467612B2 (en) System and methods for navigation using corresponding line features
JP6950832B2 (en) Position coordinate estimation device, position coordinate estimation method and program
US11587442B2 (en) System, program, and method for detecting information on a person from a video of an on-vehicle camera
KR102219843B1 (en) Estimating location method and apparatus for autonomous driving
US9128170B2 (en) Locating mobile devices
CN112945227A (en) Positioning method and device
CN114636414A (en) High definition city map drawing
US9791287B2 (en) Drive assist system, method, and program
WO2022152081A1 (en) Navigation method and apparatus
TW202229818A (en) Lane mapping and localization using periodically-updated anchor frames
US20160298972A1 (en) Travel direction information output apparatus, map matching apparatus, travel direction information output method, and computer readable medium
US11481920B2 (en) Information processing apparatus, server, movable object device, and information processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination