WO2022152081A1 - Navigation method and apparatus - Google Patents

Navigation method and apparatus

Info

Publication number
WO2022152081A1
Authority
WO
WIPO (PCT)
Prior art keywords
navigation
location
information
destination location
terminal device
Prior art date
Application number
PCT/CN2022/071005
Other languages
English (en)
French (fr)
Inventor
李荣浩
许鹏飞
王亮
徐斌
马朝伟
蔡超
张松
章磊
刘涛
杨涛
胡萌
周康
马利
胡润波
Original Assignee
北京嘀嘀无限科技发展有限公司
Priority date
Filing date
Publication date
Application filed by 北京嘀嘀无限科技发展有限公司
Publication of WO2022152081A1 publication Critical patent/WO2022152081A1/zh

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 Navigation specially adapted for navigation in a road network
    • G01C 21/28 Navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30 Map- or contour-matching
    • G01C 21/34 Route searching; Route guidance
    • G01C 21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C 21/3423 Multimodal routing, i.e. combining two or more modes of transportation, where the modes can be any of, e.g. driving, walking, cycling, public transport
    • G01C 21/343 Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G01C 21/3438 Rendez-vous, i.e. searching a destination where several users can meet, and the routes to this destination for these users; Ride sharing, i.e. searching a route such that at least two users can share a vehicle for at least part of the route

Definitions

  • Various implementations of the present disclosure relate to the field of intelligent transportation, and more particularly, to navigation methods, apparatuses, devices, storage media, and program products.
  • Embodiments of the present disclosure provide a solution for navigation.
  • a navigation method includes: obtaining a destination location associated with the first object, wherein the destination location is also associated with the second object, the destination location indicating a predetermined location where the first object can meet the second object; determining the current location of the first object; and providing a navigation interface that includes navigation elements superimposed on a live image associated with the current location, the navigation elements being determined based on the destination location and the current location.
  • a navigation device includes: a destination location acquisition module configured to acquire a destination location associated with a first object, wherein the destination location is also associated with a second object, the destination location indicating a predetermined location where the first object can meet the second object; a current location determination module configured to determine the current location of the first object; and a navigation module configured to provide a navigation interface including navigation elements superimposed on a live image associated with the current location, the navigation elements being determined based on the destination location and the current location.
  • an electronic device comprising one or more processors and a memory, wherein the memory is used to store computer-executable instructions that are executed by the one or more processors to implement the method of the first aspect of the present disclosure.
  • a computer-readable storage medium having computer-executable instructions stored thereon, wherein the computer-executable instructions, when executed by a processor, implement the method according to the first aspect of the present disclosure.
  • a computer program product comprising computer-executable instructions, wherein the computer-executable instructions, when executed by a processor, implement the method according to the first aspect of the present disclosure.
  • FIGS. 1A-1C show schematic diagrams of example environments in which some embodiments of the present disclosure can be implemented
  • FIG. 2 illustrates a flow diagram of an example navigation process in accordance with some embodiments of the present disclosure
  • FIG. 3 illustrates a flow diagram of an example process for determining a current location according to some embodiments of the present disclosure
  • FIG. 4 shows a schematic diagram of an example navigation interface in accordance with some embodiments of the present disclosure
  • FIG. 5 shows a schematic diagram of an example navigation interface according to further embodiments of the present disclosure.
  • FIG. 6 shows a schematic diagram of an example navigation interface in accordance with further embodiments of the present disclosure.
  • FIG. 7 shows a schematic structural block diagram of a navigation device according to some embodiments of the present disclosure.
  • FIG. 8 illustrates a block diagram of a computing device capable of implementing various embodiments of the present disclosure.
  • the term “comprising” and the like should be understood as open-ended inclusion, i.e., “including but not limited to”.
  • the term “based on” should be understood as “based at least in part on”.
  • the terms “one embodiment” or “the embodiment” should be understood to mean “at least one embodiment”.
  • the terms “first”, “second”, etc. may refer to different or the same objects. Other explicit and implicit definitions may also be included below.
  • embodiments of the present disclosure propose a navigation scheme.
  • a destination location associated with the first object is obtained, wherein the destination location is also associated with the second object, and the destination location indicates a predetermined location where the first object can meet the second object.
  • a current location of the first object is determined, and a navigation interface is provided, wherein the navigation interface includes navigation elements superimposed on a live-action image associated with the current location, and the navigation elements are determined based on the destination location and the current location.
  • the real-life images around the objects can be used to provide more intuitive navigation for the objects that are to meet, thereby improving the efficiency with which the objects meet each other.
  • FIGS. 1A-1C schematically show diagrams of environments in which embodiments of the present disclosure may be implemented.
  • FIG. 1A shows a first example environment 100A according to an embodiment of the present disclosure.
  • the first object 110 may include a user and the second object 120 may include a vehicle for servicing a travel trip associated with the first object 110 .
  • the first object 110 may be associated with a terminal device 140 (also referred to as a first terminal device).
  • the terminal device 140 may be, for example, a mobile device with positioning capability, such as a smart phone, a tablet computer, a personal digital assistant PDA, or a smart wearable device (eg, a smart watch, smart glasses, smart bracelet, etc.).
  • the terminal device 140 may determine that the first object 110 needs to travel to the destination location 130 to meet the second object 120.
  • the destination location 130 may be determined based on, for example, a travel order for the first object 110 .
  • the terminal device 140 may acquire the travel order of the first object 110 , and acquire the parking location specified by the travel platform, such as a pick-up point, and use it as the destination location 130 .
  • the stop location may be automatically determined by the travel platform, for example, based on the origin location in the travel order. For example, a parking location near the origin of the travel order that does not fall within a restricted parking range may be automatically provided. In such a case, the destination location 130 is usually represented by absolute coordinate values, which makes it difficult for the first object to accurately understand exactly where the destination location 130 is.
  • alternatively, the second object 120 may not be a vehicle for picking up passengers.
  • for example, the second object 120 may be a vehicle for dispatching goods, which meets the first object 110 to load the goods that the first object 110 needs to transport, and then transports the goods to the end point of the designated itinerary.
  • the terminal device 140 may provide a navigation interface 150 .
  • the terminal device 140 may acquire at least one navigation element for guiding the first object to travel to the destination location 130 .
  • Such navigation elements may be determined, for example, by a suitable computing device (eg, terminal device 140 or a server providing navigation services) based on the current location of the first object 110 and the destination location 130 . It should be understood that in the case where the navigation element is directly determined by the terminal device 140, this can enable the terminal device 140 to ensure the normal provision of navigation even in the absence of a communication network.
  • the terminal device 140 may superimpose such navigation elements on the live image associated with the current location.
  • the live image may be acquired by an image capture device of the terminal device 140 .
  • the first object 110 can hold the terminal device 140, such as a smart phone, for example.
  • the terminal device 140 can keep the image capturing device turned on during the traveling process of the first object 110 to capture a real-life image of the current position, and present the navigation interface 150 through the display device.
  • Such a navigation interface 150 may include a captured live image and navigation elements displayed superimposed on the live image.
  • the terminal device 140 may include, for example, smart glasses or other smart head mounted devices.
  • the terminal device 140 may acquire at least one navigation element, and use the smart glasses to superimpose the at least one navigation element on the corresponding real image in real time.
  • the real-life image is an environment image that is actually seen by the first object 110 , rather than a digital image captured by the terminal device 140 .
  • the first object 110 may travel to the destination location 130 by a suitable means.
  • the first object 110 may travel to the destination location 130 by walking.
  • the first object 110 may also travel to the destination location 130 by riding.
  • the embodiments of the present disclosure can efficiently guide the user to a designated service location in the travel order, such as a pick-up location or boarding location, thereby improving the efficiency with which the user and the vehicle serving the order meet.
  • FIG. 1B illustrates a second example environment 100B according to another embodiment of the present disclosure.
  • the first object 110 and the second object 120 may comprise a group of users to be merged.
  • the first object 110 may be associated with the terminal device 140 .
  • the terminal device 140 may be, for example, a mobile device with positioning capability, such as a smart phone, a tablet computer, a personal digital assistant PDA, or a smart wearable device (eg, a smart watch, smart glasses, smart bracelet, etc.).
  • the terminal device 140 may determine that the first object 110 needs to travel to the destination location 130 to meet the second object 120 .
  • the destination location 130 may be a meeting point automatically determined by the system for the group of users. For example, the system may automatically determine a meeting point as the destination location 130 based on the current location of each user in the group of users.
  • the destination location 130 may be specified by a user of the group of users, for example. In such a case, some users may be unfamiliar with the meeting place specified by the system or other meeting places specified by the user, which makes it difficult for the first object 110 to accurately understand the exact location of the destination location 130.
  • the terminal device 140 may provide a navigation interface 150 .
  • the terminal device 140 may acquire at least one navigation element for guiding the first object to travel to the destination location 130 .
  • Such navigation elements may be determined, for example, by a suitable computing device (eg, terminal device 140 or a server providing navigation services) based on the current location of the first object 110 and the destination location 130 . It should be understood that in the case where the navigation element is directly determined by the terminal device 140, this can enable the terminal device 140 to ensure the normal provision of navigation even in the absence of a communication network.
  • the terminal device 140 may superimpose such navigation elements on the live image associated with the current location.
  • the live image may be acquired by an image capture device of the terminal device 140 .
  • the first object 110 can hold the terminal device 140, such as a smart phone, for example.
  • the terminal device 140 can keep the image capturing device turned on during the traveling process of the first object 110 to capture a real-life image of the current position, and present the navigation interface 150 through the display device.
  • Such a navigation interface 150 may include a captured live image and navigation elements displayed superimposed on the live image.
  • the terminal device 140 may include, for example, smart glasses or other smart head mounted devices.
  • the terminal device 140 may acquire at least one navigation element, and use the smart glasses to superimpose the at least one navigation element on the corresponding real image in real time.
  • the real-life image is an environment image that is actually seen by the first object 110 , rather than a digital image captured by the terminal device 140 .
  • the first object 110 may travel to the destination location 130 by a suitable means.
  • the first object 110 may travel to the destination location 130 by walking.
  • the first object 110 may also travel to the destination location 130 by riding.
  • the embodiments of the present disclosure can efficiently guide users to a designated meeting place of a group of users, thereby improving the efficiency of meeting between users.
  • FIG. 1C illustrates a third example environment 100C according to an embodiment of the present disclosure.
  • the first object 110 may include a user
  • the second object 120 is a vehicle parked at a specific location, eg, a car, an electric vehicle, or a bicycle.
  • the first object 110 may be associated with the terminal device 140 .
  • the terminal device 140 may be, for example, a mobile device with positioning capability, such as a smart phone, a tablet computer, a personal digital assistant PDA, or a smart wearable device (eg, a smart watch, smart glasses, smart bracelet, etc.).
  • the terminal device 140 may determine that the first object 110 needs to travel to the destination location 130 to meet the second object 120 .
  • the terminal device 140 may obtain the parking location of the second object 120 from other computing devices as the destination location. For example, after a specific user (which may be the first object 110, or a user different from the first object 110) parks the vehicle at a specific parking location, the terminal device associated with the specific user (for example, the mobile device of the specific user or the vehicle terminal) can automatically determine the current position as the parking position and, for example, upload that position to a server or send it directly to the terminal device 140.
  • the terminal device 140 may obtain the parking position from a server or receive the parking position from a terminal device associated with a specific user, for example.
  • the terminal device 140 may also automatically record the parking position while the first object 110 participates in the parking process (e.g., as a driver or passenger).
  • the terminal device 140 can automatically acquire the parking location of the second object 120 as the destination location 130 .
  • when the vehicle is parked at a location unfamiliar to the user, it is often difficult for the user to efficiently travel to the parking location of the vehicle from memory alone. As a result, the user often spends a lot of ineffective time searching for the vehicle.
  • the terminal device 140 may provide a navigation interface 150 to guide the first object 110 to travel to the destination location 130 , that is, the parking location of the second object 120 .
  • the terminal device 140 may acquire at least one navigation element for guiding the first object to travel to the destination location 130 .
  • Such navigation elements may be determined, for example, by a suitable computing device (eg, terminal device 140 or a server providing navigation services) based on the current location of the first object 110 and the destination location 130 . It should be understood that in the case where the navigation element is directly determined by the terminal device 140, this can enable the terminal device 140 to ensure the normal provision of navigation even in the absence of a communication network.
  • the terminal device 140 may superimpose such navigation elements on the live image associated with the current location.
  • the live image may be acquired by an image capture device of the terminal device 140 .
  • the first object 110 can hold the terminal device 140, such as a smart phone, for example.
  • the terminal device 140 can keep the image capturing device turned on during the traveling process of the first object 110 to capture a real-life image of the current position, and present the navigation interface 150 through the display device.
  • Such a navigation interface 150 may include a captured live image and navigation elements displayed superimposed on the live image.
  • the terminal device 140 may include, for example, smart glasses or other smart head mounted devices.
  • the terminal device 140 may acquire at least one navigation element, and use the smart glasses to superimpose the at least one navigation element on the corresponding real image in real time.
  • the real-life image is an environment image that is actually seen by the first object 110 , rather than a digital image captured by the terminal device 140 .
  • the first object 110 may travel to the destination location 130 by a suitable means.
  • the first object 110 may travel to the destination location 130 by walking.
  • the first object 110 may also travel to the destination location 130 by riding.
  • the embodiments of the present disclosure can help the user to efficiently find the parked vehicle, thereby reducing the time and cost that the user needs to spend in this process.
  • Example scenarios in which embodiments of the present disclosure can be implemented are described above in conjunction with FIGS. 1A-1C . It should be understood that the navigation scheme according to the present disclosure may also be used in other suitable meeting scenarios without departing from the spirit of the present disclosure.
  • FIG. 2 shows a schematic diagram of a navigation process 200 in accordance with some embodiments of the present disclosure.
  • Process 200 may be performed, for example, at terminal device 140 shown in FIG. 1 . It should be understood that process 200 may also include blocks not shown and/or blocks shown may be omitted. The scope of the present disclosure is not limited in this regard.
  • the terminal device 140 obtains the destination location 130 associated with the first object 110, wherein the destination location 130 is also associated with the second object 120, the destination location 130 indicating a predetermined location where the first object 110 can meet the second object 120.
  • the destination location 130 may be, for example, a stop location for a vehicle serving a travel order, a meeting location for a group of users, a parking location of a parked vehicle, or the like. It should be understood that in the present disclosure, the destination location 130 is used for meeting between objects and is usually specified automatically by the system, which makes it difficult for the user to travel to the destination location 130 autonomously and accurately.
  • the terminal device 140 may determine the destination location 130 in an appropriate manner, and the description will not be repeated here.
  • the destination location may be represented, for example, using coordinates consisting of latitude and longitude.
  • the terminal device 140 determines the current location of the first object 110 .
  • the terminal device 140 may periodically determine the current location of the first object, for example, at a frequency of 1 Hz, thereby enabling more real-time and accurate navigation for the first object.
  • the terminal device 140 may determine the current location of the first object 110 using an appropriate positioning technique. For example, in an outdoor walking or cycling navigation scenario, the terminal device 140 may determine the current position, for example, with GPS positioning technology and inertial navigation positioning technology. In an indoor navigation scenario, the terminal device 140 may also acquire the current position based on visual positioning or UWB positioning technology, for example.
  • the terminal device 140 may also obtain a more accurate current position by means of fusion filtering. The specific process of block 204 will be described below with reference to FIG. 3 .
  • the terminal device 140 may obtain positioning information of the terminal device 140 associated with the first object 110 , where the positioning information includes inertial navigation position estimation information and auxiliary positioning information.
  • the assisted positioning information may include at least one of the following: GPS positioning information, visual positioning information, or prediction information of a positioning model.
  • the positioning model may be a machine learning model and configured to determine the expected position or expected velocity of the first object 110 based on inertial navigation sensor data of the terminal device 140 .
  • the terminal device 140 may determine inertial navigation reckoning information based on PDR (pedestrian dead reckoning) technology. Additionally, the terminal device 140 may also acquire GPS positioning information or visual positioning information, and use the GPS positioning information or the visual positioning information to correct the position determined by the PDR technology.
  • the terminal device 140 may also utilize the positioning model to obtain predictive information.
  • the positioning model can acquire inertial navigation sensor data of the terminal device 140, such as gyroscope data and/or accelerometer data, over a past predetermined time period, and can predict, based on the inertial navigation sensor data, the position of the first object at the current moment (if the initial position is known) or its velocity.
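  • As an illustration only, the following Python sketch shows the usual form of a single pedestrian dead reckoning (PDR) step, in which a detected step, an estimated step length, and a heading advance the reckoned position; the function name, frame, and values are hypothetical and are not taken from the disclosure.

```python
import math

def pdr_step(position, heading_rad, step_length_m):
    """Advance a dead-reckoned position estimate by one detected step.

    position      : (x, y) in metres in a local East-North frame
    heading_rad   : heading in radians from North (gyroscope/compass fusion)
    step_length_m : step length estimated from accelerometer amplitude/frequency
    """
    x, y = position
    # East component grows with sin(heading), North component with cos(heading)
    return (x + step_length_m * math.sin(heading_rad),
            y + step_length_m * math.cos(heading_rad))

# Example: three 0.7 m steps heading roughly north-east; the accumulated
# reckoning would later be corrected by GPS, visual, or model-based fixes.
pos = (0.0, 0.0)
for heading in (0.70, 0.72, 0.69):
    pos = pdr_step(pos, heading, 0.7)
print(pos)
```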
  • the positioning model is trained based on training inertial navigation sensor data and corresponding ground-truth position information, wherein the training inertial navigation sensor data is obtained by a first device, and the ground-truth position information is determined by a second device whose positioning accuracy is higher than a predetermined threshold; the first device and the second device are physically coupled so as to move synchronously, thereby establishing an association between the acquired training inertial navigation sensor data and the ground-truth position information.
  • a first device having an inertial navigation sensor and a second device having a more accurate positioning capability may be used to acquire data for training.
  • the first device and the second device may be physically coupled such that both are associated with the same physical location.
  • the first device and the second device may be physically bound.
  • the first device and the second device perform predetermined movements and acquire inertial navigation sensor data for the first device and positioning data for the second device.
  • the training data may be formed by appropriate clock alignment techniques such that the inertial navigation sensor data is correlated to the corresponding positioning data.
  • Such training data may be input into an appropriate machine learning model (including, but not limited to, deep neural networks, convolutional neural networks, support vector machine models or decision tree models, etc.).
  • the training inertial navigation sensor data for a predetermined period of time can be used as the input features of the model, and the ground-truth position information obtained by the second device and/or the speed information determined based on that position information can be used as the reference ground truth of the model, so that the distance between the position and/or velocity predicted by the machine learning model from the input features and the ground truth is less than a predetermined threshold.
  • a prediction model that can predict the current speed and/or the current position based on inertial sensor data within a certain period of time can be obtained.
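  • A minimal sketch of how such training pairs could be assembled is shown below, assuming two already clock-aligned recordings: IMU samples from the first device and high-accuracy positions from the second device. The window length, feature layout, and the least-squares regressor standing in for the deep network, SVM, or decision tree are illustrative assumptions rather than details of the disclosure.

```python
import numpy as np

WINDOW = 100  # samples per training example (e.g. 1 s at 100 Hz); illustrative value

def make_training_pairs(imu_t, imu, gt_t, gt_pos):
    """Pair each IMU window (first device) with the ground-truth mean velocity
    of the physically coupled second device over the same time interval."""
    X, y = [], []
    for start in range(0, len(imu) - WINDOW, WINDOW):
        t0, t1 = imu_t[start], imu_t[start + WINDOW - 1]
        # Interpolate the second device's positions at the window boundaries
        p0 = np.array([np.interp(t0, gt_t, gt_pos[:, k]) for k in range(2)])
        p1 = np.array([np.interp(t1, gt_t, gt_pos[:, k]) for k in range(2)])
        X.append(imu[start:start + WINDOW].reshape(-1))  # input features: raw IMU window
        y.append((p1 - p0) / (t1 - t0))                  # ground-truth velocity (m/s)
    return np.array(X), np.array(y)

def fit(X, y):
    """Linear least squares as a stand-in for the machine learning model."""
    A = np.hstack([X, np.ones((len(X), 1))])
    W, *_ = np.linalg.lstsq(A, y, rcond=None)
    return W
```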
  • the terminal device 140 may correct the inertial navigation position reckoning information based on the auxiliary positioning information to determine the current position of the first object 110 .
  • the terminal device 140 may adjust the state equation used to determine the inertial navigation position estimation information with the auxiliary positioning information as a constraint.
  • the inertial navigation position reckoning information is determined based on a Kalman filter.
  • the terminal device 140 may, for example, determine the correction amount for the inertial navigation position estimation information based on the difference between the auxiliary positioning information and the inertial navigation position estimation information.
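  • For illustration, one measurement update of this kind, in which the dead-reckoned position is corrected by the difference between an auxiliary fix (GPS, visual positioning, or model prediction) and the reckoned position, could look like the following sketch; the two-dimensional state and the covariance values are assumptions made for the example only.

```python
import numpy as np

def correct_with_fix(x_pred, P_pred, z_fix, R_fix):
    """One Kalman measurement update of a dead-reckoned 2-D position.

    x_pred : predicted position [x, y] from inertial reckoning (metres, local frame)
    P_pred : 2x2 covariance of that prediction
    z_fix  : auxiliary position fix [x, y] (GPS, visual, or model prediction)
    R_fix  : 2x2 measurement noise covariance of the fix
    """
    H = np.eye(2)                           # the fix observes the position directly
    innovation = z_fix - H @ x_pred         # difference between fix and reckoning
    S = H @ P_pred @ H.T + R_fix
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ innovation         # corrected position
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Example: reckoning says (10.0, 5.0) m, a GPS fix says (11.5, 4.0) m
x, P = correct_with_fix(np.array([10.0, 5.0]), np.eye(2) * 4.0,
                        np.array([11.5, 4.0]), np.eye(2) * 9.0)
```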
  • the embodiments of the present disclosure can eliminate interference caused by GPS signal loss or drift, thereby improving positioning accuracy.
  • the terminal device 140 provides the navigation interface 150, wherein the navigation interface 150 includes navigation elements superimposed on the live-action image associated with the current location, the navigation elements being determined based on the destination location 130 and the current location.
  • FIG. 4 illustrates an example navigation interface 150 according to an embodiment of the present disclosure.
  • the navigation interface 150 may, for example, be presented by a display device of the terminal device 140. It should be understood that in the context of smart glasses or smart head mounted devices, the presentation of the navigation interface 150 may be different.
  • the navigation interface 150 may include a live image 410 .
  • the live image 410 may be captured by the terminal device 140, for example.
  • navigation interface 150 also includes navigation elements 415 - 1 and 415 - 2 overlaid on live action image 410 .
  • the navigation element 415-1 may be, for example, a directional element for indicating the direction of the destination location 130 relative to the current location. Presenting a directional element rather than generating a complete navigation path may reduce the computational effort of the terminal device 140. Furthermore, considering that in some scenarios, such as travel scenarios, the distance between the first object 110 and the destination location 130 is usually relatively small, the directional element alone can already effectively guide the first object 110 to the destination location 130.
  • the navigation element 415-2 may include, for example, a text element to indicate, for example, the distance of the destination location 130 from the current location. This can enable the first object 110 to better estimate the time required to travel to the destination location 130.
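  • The quantities behind such elements, namely the direction of the destination relative to the current location and the remaining distance, can be computed from latitude/longitude pairs with the standard forward-azimuth and haversine formulas, as in the sketch below; the example coordinates are illustrative.

```python
import math

EARTH_RADIUS_M = 6371000.0

def bearing_and_distance(cur_lat, cur_lon, dst_lat, dst_lon):
    """Return (bearing in degrees from North, distance in metres) to the destination."""
    phi1, phi2 = math.radians(cur_lat), math.radians(dst_lat)
    dlon = math.radians(dst_lon - cur_lon)
    # Forward azimuth: direction of the destination relative to the current location
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    # Haversine great-circle distance
    dphi = phi2 - phi1
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    distance = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    return bearing, distance

# e.g. used to rotate a directional arrow and to render a "xx m to the pick-up point" text
print(bearing_and_distance(39.9075, 116.3972, 39.9080, 116.3990))
```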
  • the navigation interface 150 may also include other suitable navigation elements, for example, such navigation elements may also be determined based on a real-time path from the current location to the destination location 130, for example.
  • the navigation interface 150 may include a path navigation element to guide the first object 110 along the path to the destination location 130 .
  • the navigation interface 150 may also include a two-dimensional map portion 420, wherein the two-dimensional map portion 420 includes visual elements 425-1 and 425-2 corresponding to the current location of the first object 110 and the destination location 130.
  • the two-dimensional map portion 420 further includes, for example, a second visual element 430 for indicating the direction of the destination location 130 relative to the current location.
  • the embodiments of the present disclosure can enable the first object to intuitively understand the route it needs to take, thereby improving the accuracy of navigation and the friendliness of user interaction.
  • the two-dimensional map portion 420 can be presented or collapsed in the navigation interface 150 , for example, in response to a predetermined operation on the navigation interface 150 .
  • the two-dimensional map portion 420 may be in the presented state by default, and the first object 110 may, for example, click the collapse control 440 to collapse the two-dimensional map portion 420, thereby leaving a larger area for presenting the real-life image 410. Then, in response to a specific operation by the first object 110 (e.g., swiping up from the bottom), the two-dimensional map portion 420 can be re-presented.
  • the two-dimensional map part 420 may be in a collapsed state by default, and the first object 110 may perform a predetermined operation (eg, slide up from the bottom) to cause the two-dimensional map part 420 to be presented. It should be understood that such a specific interaction manner is merely illustrative, and any appropriate interaction design may be adopted to cause the two-dimensional map portion 420 to be collapsed or presented.
  • the terminal device 140 can also determine the heading angle of the terminal device 140, and in response to the difference between that heading angle and the heading angle of the destination location 130 relative to the current position being greater than a predetermined threshold, present in the navigation interface 150 a first reminder that the current travel direction of the first object 110 may be wrong.
  • in the example shown, the directional element 415-1 indicates that the current orientation of the terminal device 140 is appropriate, that is, the traveling direction of the first object 110 is correct.
  • the terminal device 140 may, for example, present a reminder in the navigation interface 150 that the destination location 130 is directly behind the current orientation of the first object 110, and remind the user to adjust the direction of travel.
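  • The wrong-direction check described above amounts to comparing the device heading with the bearing to the destination while handling angle wrap-around; a sketch follows, in which the 90-degree threshold is an illustrative value rather than one specified by the disclosure.

```python
def heading_error_deg(device_heading_deg, bearing_to_destination_deg):
    """Smallest signed angle (degrees) between the device heading and the destination bearing."""
    return (bearing_to_destination_deg - device_heading_deg + 180.0) % 360.0 - 180.0

WRONG_DIRECTION_THRESHOLD_DEG = 90.0  # illustrative threshold, not from the disclosure

def wrong_direction_reminder(device_heading_deg, bearing_to_destination_deg):
    err = heading_error_deg(device_heading_deg, bearing_to_destination_deg)
    if abs(err) > WRONG_DIRECTION_THRESHOLD_DEG:
        # Destination roughly behind the user: trigger the first reminder
        return "The destination appears to be behind you; please adjust your direction of travel."
    return None
```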
  • the navigation interface 150 may also present point-of-interest information associated with the live-action image 410, for example.
  • the terminal device 140 may determine at least one point of interest associated with the live image 410 .
  • FIG. 5 shows a schematic diagram illustrating an example navigation interface according to further embodiments of the present disclosure. As shown in FIG. 5 , the terminal device 140 may, for example, determine that the point of interest "XX coffee shop" is included in the real image 410 based on map information or visual recognition technology.
  • the terminal device 140 may present information 520 associated with the at least one point of interest at a location in the live image 410 corresponding to the at least one point of interest. As shown in FIG. 5 , the terminal device 140 may, for example, present information 520 in an image area corresponding to the point of interest "XX coffee shop". It should be understood that the specific content of the information 520 in FIG. 5 is only illustrative, and any appropriate information related to the point of interest may be presented as required.
  • the information 520 may be presented superimposed in the real field of view of the first object 110 without the need to capture or present a digital image.
  • the embodiments of the present disclosure can also provide the first object 110 with relevant point-of-interest information during the journey to the destination location 130 .
  • this can help the first object 110 find desired points of interest more conveniently, for example, a coffee shop in which to wait for the second object 120 to reach the meeting place.
  • the terminal device 140 may also present a conversation area 510 in association with the live image 410 in the navigation interface 150, for example.
  • the conversation area 510 may present messages from a second terminal device associated with the second object 120, for example. As shown in FIG. 5, the conversation area 510 may, for example, present a message sent by the driver of the vehicle through that terminal device.
  • the conversation area 510 can also be used to generate a message to be sent to the second terminal device, for example.
  • the first object 110 may reply to a message from the second terminal device through the conversation area 510, or actively send a message to the second terminal device.
  • the conversation area 510 may be presented only under certain conditions, for example.
  • for example, in response to receiving a conversation invocation operation, the terminal device 140 may present the conversation area 510 in the navigation interface 150.
  • the terminal device 140 may also automatically present the conversation area 510 in the navigation interface 150 to present the received message.
  • the conversation area 510 may also be automatically or manually retracted, for example.
  • the conversation area 510 may be automatically collapsed to avoid interfering with the first object 110's view of the navigation interface 150.
  • FIG. 6 shows a schematic diagram illustrating an example navigation interface according to further embodiments of the present disclosure.
  • the terminal device 140 may, for example, present a reminder that the first object has arrived near the destination location, such as the text “You have arrived near your destination, please wait patiently.”
  • the terminal device 140 may also present object information 610 associated with the second object 120 in the navigation interface 150 when it is determined that the distance between the current location and the destination location 130 is less than a predetermined threshold.
  • the object information 610 may include, for example, appearance information for describing appearance features of the second object.
  • the appearance information may include, for example, the license plate number, color, or model of the vehicle. It should be understood that the terminal device 140 may acquire such appearance information based on the travel order of the first object, for example.
  • the appearance information may include, for example, gender, height or clothing of other users.
  • Such appearance information may, for example, be actively uploaded to the server by other users to be acquired by the terminal device 140 , or directly sent to the terminal device 140 .
  • the object information may also include, for example, state information, which is used to describe the state related to the second object 120 .
  • examples of the status information include, but are not limited to: whether the second object 120 has arrived, the current location of the second object 120, the distance of the second object 120 from the destination location 130, the time when the second object 120 is expected to arrive at the destination location 130, or any combination thereof.
  • the efficiency with which the first object 110 and the second object 120 meet can thus be further improved.
  • the terminal device 140 may also use the appearance information to determine whether the second object 120 exists in the live image 410. In some implementations, the terminal device 140 may perform object detection on the live image 410 based on the appearance information only when it is determined that the distance of the second object 120 from the destination location 130 is less than a predetermined threshold, in order to determine whether the second object 120 exists in the live image 410. In this way, the terminal device 140 can be prevented from performing inefficient calculations.
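  • A sketch of this distance-gated detection is shown below; the detector itself (detect_vehicle) and the 50 m radius are hypothetical placeholders, since the disclosure does not fix a particular detection algorithm or threshold.

```python
DETECTION_RADIUS_M = 50.0  # illustrative gating threshold, not from the disclosure

def detect_vehicle(live_image, appearance):
    """Hypothetical placeholder for an appearance-based detector (e.g. matching plate
    number, colour, or model); would return a bounding box or None."""
    raise NotImplementedError

def maybe_detect_second_object(live_image, appearance, second_obj_distance_to_dest_m):
    """Run the (relatively expensive) detection only when the second object is near
    the destination, so the terminal device avoids inefficient computation."""
    if second_obj_distance_to_dest_m >= DETECTION_RADIUS_M:
        return None
    box = detect_vehicle(live_image, appearance)
    # If a box is found, a third visual element (e.g. an outline) would be drawn around it
    return box
```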
  • the terminal device 140 may present a third visual element in a corresponding area of the live image 410, wherein the third visual element indicates the position of the second object 120 in the live image 410.
  • if the terminal device 140 recognizes, based on appearance information (e.g., one or more of license plate number, model, and color), that the real image 410 includes the corresponding second object 120 (e.g., a vehicle), the terminal device 140 may, for example, generate an outline around the boundary of the second object 120 in the live image 410 to indicate the position of the second object 120.
  • the terminal device 140 may also use a visual element 620, such as a pushpin, to indicate the current position of the second object 120, so as to guide the first object 110 to meet the second object 120.
  • the terminal device 140 may also present a street view picture associated with the destination location 130 in the navigation interface 150. Specifically, when the first object 110 is close to the destination location 130, the first object 110 may also expect to be able to accurately locate the destination location 130 in the live image 410.
  • for example, in response to determining that the distance between the current location and the destination location 130 is less than a predetermined threshold, the terminal device 140 may automatically present a street view picture associated with the destination location 130 in the navigation interface 150.
  • alternatively or additionally, in response to receiving a predetermined operation for viewing a street view picture on the navigation interface 150, the terminal device 140 may also present the street view picture associated with the destination location 130 in the navigation interface 150.
  • the Street View picture may be presented superimposed on the live image 410 in a predetermined area, where the predetermined area is determined based on the destination location.
  • the terminal device 140 may determine a predetermined area in which the street view picture is displayed in the image coordinate system based on the direction and distance of the destination location from the current location.
  • the size of the predetermined area can also be dynamically changed according to the distance of the current location from the destination location, for example.
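  • One possible way to derive such a predetermined area, with the horizontal placement following the destination's bearing relative to the camera heading and the size shrinking as the distance grows, is sketched below; all constants are illustrative assumptions.

```python
def street_view_overlay_rect(image_w, image_h, camera_heading_deg,
                             bearing_to_dest_deg, distance_m,
                             horizontal_fov_deg=60.0):
    """Return (x, y, w, h) of a region in image coordinates for the street-view picture."""
    # Signed angle of the destination within the camera's horizontal field of view
    rel = (bearing_to_dest_deg - camera_heading_deg + 180.0) % 360.0 - 180.0
    rel = max(-horizontal_fov_deg / 2, min(horizontal_fov_deg / 2, rel))
    center_x = image_w * (0.5 + rel / horizontal_fov_deg)
    # Size shrinks with distance: full size within about 20 m, floored at 25% farther away
    scale = max(0.25, min(1.0, 20.0 / max(distance_m, 1e-6)))
    w, h = int(image_w * 0.4 * scale), int(image_h * 0.3 * scale)
    x = int(center_x - w / 2)
    y = int(image_h * 0.15)  # keep the overlay near the top of the live image
    return x, y, w, h

# Example: destination 15 degrees to the right of the camera heading, 40 m away
print(street_view_overlay_rect(1080, 1920, 30.0, 45.0, 40.0))
```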
  • the first object 110 can be more effectively assisted in locating the destination location, or locating the environmental features around the destination location.
  • the terminal device 140 may also present a reminder in the navigation interface that the second object 120 has reached the destination location 130 .
  • the terminal device 140 may generate a reminder in the navigation interface 150 to inform that the vehicle has arrived. This can remind the user to speed up appropriately, so as to avoid the vehicle waiting for too long.
  • the embodiments of the present disclosure can provide more intuitive navigation for objects that are to meet by utilizing the real-life images around the objects, thereby improving the efficiency with which the objects meet.
  • FIG. 7 shows a schematic structural block diagram of a navigation apparatus 700 according to some embodiments of the present disclosure.
  • the apparatus 700 includes a destination location obtaining module 710 configured to obtain a destination location associated with a first object, wherein the destination location is also associated with a second object, the destination location indicating a predetermined location where the first object can meet the second object.
  • the apparatus 700 also includes a current location determination module 720 configured to determine the current location of the first object.
  • the apparatus 700 further includes a navigation module 730 configured to provide a navigation interface including navigation elements superimposed on the live image associated with the current location, the navigation elements being determined based on the destination location and the current location.
  • the live image is acquired by a first terminal device associated with the first object.
  • the second object includes a vehicle for travel services
  • the destination location obtaining module 710 includes an order parsing module configured to determine, based on a travel order associated with the first object, a stop location associated with the travel order as the destination location.
  • the stop location is automatically determined based on the travel order.
  • the second object is a parked vehicle
  • the destination location acquisition module 710 includes a parked location determination module configured to determine a parked location of the vehicle as the destination location.
  • the parking location is automatically recorded after the vehicle has completed parking.
  • the first object and the second object include a group of users who are to meet, and the destination location is a designated rendezvous location for the group of users.
  • the navigation interface provides the first object with walking or cycling navigation to the destination.
  • the current position determination module 720 includes: a positioning information acquisition module configured to acquire positioning information of the first terminal device associated with the first object, the positioning information including inertial navigation position reckoning information and auxiliary positioning information, the auxiliary positioning information including at least one of the following: GPS positioning information, visual positioning information, or prediction information of a positioning model, wherein the positioning model is a machine learning model configured to determine an expected position or expected velocity of the first object based on inertial navigation sensor data of the first terminal device; and a correction module configured to correct the inertial navigation position reckoning information based on the auxiliary positioning information to determine the current position of the first object.
  • the positioning model is trained based on training inertial navigation sensor data and corresponding ground-truth position information, the training inertial navigation sensor data is obtained by a first device, the ground-truth position information is determined by a second device whose positioning accuracy is higher than a predetermined threshold, and the first device and the second device are physically coupled to move in synchrony.
  • the correction module includes an adjustment module configured to adjust the state equation used to determine the inertial navigation position estimation information with the auxiliary positioning information as a constraint.
  • inertial navigation position reckoning information is determined based on a Kalman filter.
  • the navigation elements include directional elements for indicating the direction of the destination location relative to the current location.
  • the navigation interface further includes a two-dimensional map portion including a first visual element corresponding to the current location and destination location of the first object.
  • the two-dimensional map portion further includes a second visual element for indicating the direction of the destination location relative to the current location.
  • the two-dimensional map portion can be presented or collapsed in the navigation interface in response to predetermined operations on the navigation interface.
  • the apparatus 700 further includes: an angle determination module configured to determine an orientation angle of the first terminal device; and a first reminder module configured to, in response to the difference between the orientation angle and the orientation angle of the destination location relative to the current location being greater than a predetermined threshold, present in the navigation interface a first reminder that the current traveling direction of the first object may be wrong.
  • the apparatus 700 further includes: a point-of-interest determination module configured to determine at least one point of interest associated with the live-action image; and a point-of-interest information providing module configured to be associated with the at least one point of interest in the live-action image At the corresponding location, information associated with the at least one point of interest is presented.
  • the apparatus 700 further includes: a street view picture presentation module configured to present a street view picture associated with the destination location in response to at least one of: a distance between the current location of the first object and the destination location is less than a predetermined threshold; or a predetermined operation for viewing a street view picture is received on the navigation interface.
  • the Street View image is presented superimposed on the live image in a predetermined area, the predetermined area being determined based on the destination location.
  • the apparatus 700 further includes an object information providing module configured to present object information associated with the second object in the navigation interface in response to determining that the distance between the current location and the destination location is less than a predetermined threshold.
  • the object information includes at least one of the following: appearance information for describing appearance characteristics of the second object; and state information for describing at least one of the following: whether the second object has arrived, the current position of the second object, the distance of the second object from the destination location, or the time when the second object is expected to arrive at the destination location.
  • the apparatus 700 further includes: an appearance information obtaining module configured to obtain appearance information associated with the second object; and an identification module configured to determine whether the second object exists in the live image based on the appearance information.
  • the identification module includes an object detection module configured to, in response to determining that the distance of the second object from the destination location is less than a predetermined threshold, perform object detection on the live-action image based on the appearance information to determine whether the second object exists in the live-action image.
  • the apparatus 700 further includes an object prompting module configured to, in response to determining that the second object exists in the live image, present a third visual element in a corresponding area of the live image, the third visual element indicating the location of the second object in the live image.
  • the apparatus 700 further includes a second reminder module configured to present a second reminder that the first object has arrived near the destination location in response to determining that the distance between the current location and the destination location is less than a predetermined threshold.
  • the apparatus 700 further includes: a conversation presentation module configured to present a conversation area in association with the live image, the conversation area being configured to: present a message from a second terminal device associated with the second object; or generate a message to be sent to the second terminal device.
  • the session area is presented in response to at least one of: receiving a session invocation operation; or receiving a message from the second terminal device.
  • the conversation area is automatically collapsed in response to at least one of: no user action being received for the conversation area within a predetermined period of time, or no message being received from the second terminal device.
  • the apparatus 700 further includes: a third reminder module configured to, in response to determining that the distance between the current location of the second object and the destination location is less than a predetermined threshold, present in the navigation interface a third reminder that the second object has reached the destination location.
  • the units included in the apparatus 700 may be implemented in various manners, including software, hardware, firmware, or any combination thereof.
  • one or more units may be implemented using software and/or firmware, such as machine-executable instructions stored on a storage medium.
  • some or all of the units in apparatus 700 may be implemented, at least in part, by one or more hardware logic components.
  • exemplary types of hardware logic components that can be used include field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so forth.
  • FIG. 8 illustrates a block diagram of a computing device/server 800 in which one or more embodiments of the present disclosure may be implemented. It should be understood that the computing device/server 800 shown in FIG. 8 is merely exemplary and should not constitute any limitation on the functionality and scope of the embodiments described herein.
  • computing device/server 800 is in the form of a general purpose computing device.
  • Components of computing device/server 800 may include, but are not limited to, one or more processors or processing units 810, memory 820, storage device 830, one or more communication units 840, one or more input devices 850, and one or more output devices 860.
  • the processing unit 810 may be an actual or virtual processor and can perform various processes according to programs stored in the memory 820 . In a multiprocessor system, multiple processing units execute computer-executable instructions in parallel to increase the parallel processing capabilities of the computing device/server 800 .
  • Computing device/server 800 typically includes a number of computer storage media. Such media may be any available media accessible by computing device/server 800, including but not limited to volatile and nonvolatile media, removable and non-removable media.
  • Memory 820 may be volatile memory (e.g., registers, cache, random access memory (RAM)), non-volatile memory (e.g., read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory), or some combination thereof.
  • Storage device 830 may be removable or non-removable media, and may include machine-readable media such as flash drives, magnetic disks, or any other media that can be used to store information and/or data (e.g., training data for training) and can be accessed within computing device/server 800.
  • Computing device/server 800 may further include additional removable/non-removable, volatile/non-volatile storage media.
  • a disk drive for reading from or writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”) and a CD-ROM drive for reading from or writing to a removable, non-volatile optical disk may be provided.
  • each drive may be connected to a bus (not shown) by one or more data media interfaces.
  • Memory 820 may include a computer program product 825 having one or more program modules configured to perform various methods or actions of various embodiments of the present disclosure.
  • the communication unit 840 enables communication with other computing devices through a communication medium. Additionally, the functions of the components of computing device/server 800 may be implemented in a single computing cluster or multiple computing machines capable of communicating over a communication connection. Thus, computing device/server 800 may operate in a networked environment using logical connections to one or more other servers, network personal computers (PCs), or another network node.
  • Input device 850 may be one or more input devices, such as a mouse, keyboard, trackball, and the like.
  • Output device 860 may be one or more output devices, such as a display, speakers, printer, and the like.
  • the computing device/server 800 may also, as needed, communicate through the communication unit 840 with one or more external devices (not shown) such as storage devices and display devices, with one or more devices that enable a user to interact with the computing device/server 800, or with any device (e.g., a network card, a modem, etc.) that enables the computing device/server 800 to communicate with one or more other computing devices. Such communication may be performed via an input/output (I/O) interface (not shown).
  • a computer-readable storage medium having stored thereon one or more computer instructions, wherein the one or more computer instructions are executed by a processor to implement the method described above.
  • These computer-readable program instructions may be provided to the processing unit of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; the instructions cause a computer, programmable data processing apparatus, and/or other device to operate in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other device so as to produce a computer-implemented process, such that the instructions executing on the computer, other programmable data processing apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by a combination of dedicated hardware and computer instructions.

Abstract

一种导航方法、装置、设备、存储介质和程序产品。方法包括:获取与第一对象相关联的目的地位置,其中目的地位置还与第二对象相关联,目的地位置指示第一对象能够汇合第二对象的预定地点(202);确定第一对象的当前位置(204);以及提供导航界面,导航界面包括叠加在与当前位置相关联的实景图像上的导航元素,导航元素是基于目的地位置和当前位置而确定的(206)。可以为待汇合的对象提供更为直观的导航,从而提高对象之间汇合的效率。

Description

导航方法和装置
交叉引用
本申请要求于2021年1月18日提交的中国专利申请No.202110062452.2的优先权,其全部内容通过引用结合于此。
技术领域
本公开的各实现方式涉及智能交通领域,更具体地,涉及导航方法、装置、设备、存储介质和程序产品。
背景技术
在人们的日常生活中,人们经常需要移动到另一位置以与特定的对象汇合。例如,在交通出行场景中,乘客可能需要步行到行程中指定的上车点以搭乘服务车辆。然而,这样的汇合位置可能是系统所自动指定的,并且不一定为人们所熟悉。因此,用户行进到这样的汇合位置的过程中可能会遇到困扰。
发明内容
本公开的实施例提供了一种用于导航的方案。
在本公开的第一方面,提供了一种导航方法。该方法包括:获取与第一对象相关联的目的地位置,其中目的地位置还与第二对象相关联,目的地位置指示第一对象能够汇合第二对象的预定地点;确定第一对象的当前位置;以及提供导航界面,导航界面包括叠加在与当前位置相关联的实景图像上的导航元素,导航元素是基于目的地位置和当前位置而确定的。
在本公开的第二方面,提供了一种导航装置。该装置包括:目的地位置获取模块,被配置为获取与第一对象相关联的目的地位置,其中目的地位置还与第二对象相关联,目的地位置指示第一对象能够汇合第二对象的预定地点;当前位置确定模块,被配置为确定第一对象的当前位置;以及导航模块,被配置为提供导航界面,导航界面包括叠加在与当前位置相关联的实景图像上的导航元素,导航元素是基于目的地位置和当前位置而确定的。
在本公开的第三方面,提供了一种电子设备,包括一个或多个处理器以及存储器,其中存储器用于存储计算机可执行指令,计算机可执行指令被一个或多个处理器执行以实现根据本公开的第一方面的方法。
在本公开的第四方面,提供了一种计算机可读存储介质,其上存储有计算机可执行指令,其中计算机可执行指令在被处理器执行时实现根据本公开的第一方面的方法。
在本公开的第五方面,提供了一种计算机程序产品,其包括计算机可执行指令,其中计算机可执行指令在被处理器执行时实现根据本公开的第一方面的方法。
根据本公开的实施例,可以为待汇合的对象提供更为直观的导航,从而提高对象之间汇合的效率。
提供发明内容部分是为了以简化的形式来介绍对概念的选择,它们在下文的具体实施方式中将被进一步描述。发明内容部分无意标识本公开的关键特征或必要特征,也无意限制本公开的范围。
附图说明
结合附图并参考以下详细说明,本公开各实施例的上述和其他特征、优点及方面将变得更加明显。在附图中,相同或相似的附图标注表示相同或相似的元素,其中:
图1A至图1C示出了本公开的一些实施例能够在其中实现的示例环境的示意图;
图2示出了根据本公开的一些实施例的示例导航过程的流程图;
图3示出了根据本公开的一些实施例的确定当前位置的示例过程的流程图;
图4示出了根据本公开的一些实施例的示例导航界面的示意图;
图5示出了根据本公开的另一些实施例的示例导航界面的示意图;
图6示出了根据本公开的又一些实施例的示例导航界面的示意图;
图7示出了根据本公开的一些实施例的导航装置的示意性结构框图;以及
图8示出了能够实施本公开的多个实施例的计算设备的框图。
具体实施方式
下面将参照附图更详细地描述本公开的实施例。虽然附图中显示了本公开的某些实施例,然而应当理解的是,本公开可以通过各种形式来实现,而且不应该被解释为限于这里阐述的实施例,相反提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。
在本公开的实施例的描述中,术语“包括”及其类似用语应当理解为开放性包含,即“包括但不限于”。术语“基于”应当理解为“至少部分地基于”。术语“一个实施例”或“该实施例”应当理解为“至少一个实施例”。术语“第一”、“第二”等等可以指代不同的或相同的对象。下文还可能包括其他明确的和隐含的定义。
如上文所讨论的,在不同对象需要在特定地点汇合的场景中,这样的汇合地点可能是对象所不熟悉的位置。这使得对象难以高效地行进到指定的汇合位置。一些传统的方案通过普通地图导航的方式来引导对象的行进。然而,在用户不熟悉汇合地点的情况下,用户仍然难以准确地行进到汇合地点。
有鉴于此,本公开的实施例提出了导航方案。在该方案中,首先,获取与第一对象相关联的目的地位置,其中目的地位置还与第二对象相关联,并且目的地位置指示第一对象能够汇合第二对象的预定地点。随后,确定第一对象的当前位置,并提供导航界面,其中导航界面包括叠加在与当前位置相关联的实景图像上的导航元素,并且导航元素是基于所述目的地位置和所述当前位置而确定的。
根据这样的方案,可以利用对象周边的实景图像来为待汇合的对象提供更为直观的导航,从而提高对象之间汇合的效率。
以下将继续参考附图描述本公开的一些示例实施例。
示例环境
首先参见图1A至图1C,其示意性示出了本公开的实施例可以在其中被实施的环境的示意图。
图1A示出了根据本公开实施例的第一示例环境100A。在图1A的示例环境100A中,第一对象110可以包括用户,第二对象120可以包括用于服务与第一对象110相关联的出行行程的交通工具。
如图1A所示,第一对象110可以与终端设备140(也称为第一终端设备)相关联。终端设备140例如可以是具有定位能力的移动设备,例如,智能手机、平板电脑、个人数字助理PDA或智能可穿戴设备(例如,智能手表、智能眼镜、智能手环等)。
在图1A的示例中,终端设备140可以确定第一对象110需要行进到目的地位置130以与第二对象120汇合。在一些实现中,目的地位置130例如可以是基于第一对象110的出行订单而确定的。
示例性地,终端设备140可以获取第一对象110的出行订单,并且获取出行平台所指定的停靠位置,例如上车点,并将其作为目的地位置130。
在一些实现中,该停靠位置例如可以是由出行平台基于出行订单中的起点位置而自动确定的。例如,在出行订单的起点附近的、且不落入禁限停范围内的停靠位置可以被自动地提供。在这样的情况下,这样的目的地位置130通常以绝对的坐标值来表示,这使得第一对象难以准确地理解目的地位置130的准确位置。
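作为对上述"自动确定停靠位置"思路的一个极简化示意(仅为草图,其中候选停靠点与禁限停范围的表示方式、距离近似均为示例性假设,并非本公开限定的实现),可以参考如下Python代码:

```python
import math

def pick_stop(origin, candidates, restricted_zones):
    """从候选停靠点中选择距起点最近、且不落入任何禁限停范围的停靠位置。

    origin 与 candidates 中的点均为 (纬度, 经度);
    restricted_zones 为若干圆形禁限停区 [(中心纬度, 中心经度, 半径米), ...](圆形仅为简化表示)。
    """
    def dist_m(a, b):
        # 小范围内的等距近似(米),仅用于示意
        dy = (a[0] - b[0]) * 111320.0
        dx = (a[1] - b[1]) * 111320.0 * math.cos(math.radians(a[0]))
        return math.hypot(dx, dy)

    allowed = [c for c in candidates
               if all(dist_m(c, (z[0], z[1])) > z[2] for z in restricted_zones)]
    return min(allowed, key=lambda c: dist_m(origin, c)) if allowed else None
```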
在又一些示例中,第一对象110与第二对象120汇合后,可能并不搭乘该交通工具。示例性地,第二对象120可以是用于派送货物的交通工具,其与第一对象110汇合,以装载第一对象110需要运输的货物,并将该货物运输到行程所指定的终点。
如图1A所示,在第一对象110开始往目的地位置130行进时,终端设备140可以提供导航界面150。在一些实现中,终端设备140可以获取用于引导第一对象行进到目的地位置130的至少一个导航元素。这样的导航元素例如可以是适当的计算设备(例如,终端设备140或提供导航服务的服务器)基于第一对象110的当前位置和目的地位置130来确定的。应当理解,在由终端设备140直接确定导航元素的情况下,这可以使得终端设备140在没有通信网络的情况下也能够保证导航的正常提供。
附加地,在导航界面150中,终端设备140可以将这样的导航元素叠加在与当前位置相关联的实景图像上。在一些实现中,实景图像可以是由终端设备140的图像捕获装置所获取的。例如,在第一对象110往目的地位置130行进时,第一对象110例如可以手持终端设备140,例如智能手机。该终端设备140在第一对象110的行进过程中可以保持图像捕获装置开启,以捕获当前位置的实景图像,并通过显示设备来呈现导航界面150。这样的导航界面150可以包括所捕获的实景图像以及叠加显示在实景图像上的导航元素。
在一些实现中,终端设备140例如可以包括智能眼镜或其他智能头戴式设备。相应地,在第一对象110往目的地位置130行进时,终端设备140可以获取至少一个导航元素,并利用智能眼镜来将至少一个导航元素实时地叠加在对应的实景图像上。应当理解,在这种情况下,实景图像是第一对象110所真实看到的环境图像,而不是由终端设备140所捕获的数字图像。
在一些实现中,第一对象110可以通过适当的方式行进到目的地位置130。例如,第一对象110可以通过步行的方式行进到目的地位置130。或者,第一对象110也可以通过骑行的方式行进到目的地位置130。
通过这样的方式,本公开的实施例能够高效地将用户引导到出行订单中的指定服务位置,例如,上车位置、接货位置等,从而提高用户与服务订单的车辆的汇合效率。
图1B示出了根据本公开另一实施例的第二示例环境100B。在图1B的示例环境100B中,第一对象110和第二对象120可以包括待汇合的一组用户。
如图1B所示,第一对象110可以与终端设备140相关联。终端设备140例如可以是具有定位能力的移动设备,例如,智能手机、平板电脑、个人数字助理PDA或智能可穿戴设备(例如,智能手表、智能眼镜、智能手环等)。
在图1B的示例中,终端设备140可以确定第一对象110需要行进到目的地位置130以与第二对象120汇合。在一些实现中,目的地位置130可以是由系统为该组用户所自动确定的汇合地点。例如,系统可以根据该组用户中每个用户的当前位置,自动地确定一个汇合地点,以作为目的地位置130。或者,该目的地位置130例如也可以是由该组用户中的一个用户所指定的。在这样的情况下,部分用户可能对于系统所指定的汇合地点或其他用户指定的汇合地点并不熟悉,这导致第一对象110难以准确地理解目的地位置130的准确位置。
如图1B所示,在第一对象110开始往目的地位置130行进时,终端设备140可以提供导航界面150。在一些实现中,终端设备140可以获取用于引导第一对象行进到目的地位置130的至少一个导航元素。这样的导航元素例如可以是适当的计算设备(例如,终端设备140或提供导航服务的服务器)基于第一对象110的当前位置和目的地位置130来确定的。应当理解,在由终端设备140直接确定导航元素的情况下,这可以使得终端设备140在没有通信网络的情况下也能够保证导航的正常提供。
附加地,在导航界面150中,终端设备140可以将这样的导航元素叠加在与当前位置相关联的实景图像上。在一些实现中,实景图像可以是由终端设备140的图像捕获装置所获取的。例如,在第一对象110往目的地位置130行进时,第一对象110例如可以手持终端设备140,例如智能手机。该终端设备140在第一对象110的行进过程中可以保持图像捕获装置开启,以捕获当前位置的实景图像,并通过显示设备来呈现导航界面150。这样的导航界面150可以包括所捕获的实景图像以及叠加显示在实景图像上的导航元素。
在一些实现中,终端设备140例如可以包括智能眼镜或其他智能头戴式设备。相应地,在第一对象110往目的地位置130行进时,终端设备140可以获取至少一个导航元素,并利用智能眼镜来将至少一个导航元素实时地叠加在对应的实景图像上。应当理解,在这种情况下,实景图像是第一对象110所真实看到的环境图像,而不是由终端设备140所捕获的数字图像。
在一些实现中,第一对象110可以通过适当的方式行进到目的地位置130。例如,第一对象110可以通过步行的方式行进到目的地位置130。或者,第一对象110也可以通过骑行的方式行进到目的地位置130。
通过这样的方式,本公开的实施例能够高效地将用户引导到一组用户的指定汇合地点,从而提高用户之间汇合的效率。
图1C示出了根据本公开实施例的第三示例环境100C。在图1C的示例环境100C中,第一对象110可以包括用户,第二对象120可以包括停泊在特定位置的交通工具,例如,汽车、电动车或自行车等。
如图1C所示,第一对象110可以与终端设备140相关联。终端设备140例如可以是具有定位能力的移动设备,例如,智能手机、平板电脑、个人数字助理PDA或智能可穿戴设备(例如,智能手表、智能眼镜、智能手环等)。
在图1C的示例中,终端设备140可以确定第一对象110需要行进到目的地位置130以与第二对象120汇合。在一些实现中,终端设备140可以从其他计算设备获取第二对象120的停泊位置以作为目的地位置。例如,在特定用户(可以是第一对象110,或与第一对象110不同的用户)在车辆停泊在特定停车位置后,与该特定用户相关联的终端设备(例如,该特定用户的移动设备或该车辆终端)可以自动地将当前位置确定为停泊位置,并将该位置例如上传到服务器,或者直接发送至终端设备140。
相应地,终端设备140例如可以从服务器获取该停泊位置或者从与特定用户相关联的终端设备接收该停泊位置。或者,在由第一对象110参与停泊过程(例如,作为驾驶者或乘客)的过程中,终端设备140也可以自动地记录停泊位置。
在第一对象110需要寻车的情况下,终端设备140可以自动地获取第二对象120的停泊位置,以作为目的地位置130。在车辆被停泊在用户不熟悉的位置时,用户通常很难通过记忆高效地行进到车辆的停泊位置。这使得用户通常要花费大量无效的时间来寻找车辆。
如图1C所示,终端设备140可以提供导航界面150,以引导第一对象110行进到目的地位置130,也即第二对象120的停泊位置。
在一些实现中,终端设备140可以获取用于引导第一对象行进到目的地位置130的至少一个导航元素。这样的导航元素例如可以是适当的计算设备(例如,终端设备140或提供导航服务的服务器)基于第一对象110的当前位置和目的地位置130来确定的。应当理解,在由终端设备140直接确定导航元素的情况下,这可以使得终端设备140在没有通信网络的情况下也能够保证导航的正常提供。
附加地,在导航界面150中,终端设备140可以将这样的导航元素叠加在与当前位置相关联的实景图像上。在一些实现中,实景图像可以是由终端设备140的图像捕获装置所获取的。例如,在第一对象110往目的地位置130行进时,第一对象110例如可以手持终端设备140,例如智能手机。该终端设备140在第一对象110的行进过程中可以保持图像捕获装置开启,以捕获当前位置的实景图像,并通过显示设备来呈现导航界面150。这样的导航界面150可以包括所捕获的实景图像以及叠加显示在实景图像上的导航元素。
在一些实现中,终端设备140例如可以包括智能眼镜或其他智能头戴式设备。相应地,在第一对象110往目的地位置130行进时,终端设备140可以获取至少一个导航元素,并利用智能眼镜来将至少一个导航元素实时地叠加在对应的实景图像上。应当理解,在这种情况下,实景图像是第一对象110所真实看到的环境图像,而不是由终端设备140所捕获的数字图像。
在一些实现中,第一对象110可以通过适当的方式行进到目的地位置130。例如,第一对象110可以通过步行的方式行进到目的地位置130。或者,第一对象110也可以通过骑行的方式行进到目的地位置130。
通过这样的方式,本公开的实施例能够帮助用户高效地寻找到停泊的车辆,从而降低用户在该过程需要耗费的时间成本。
以上结合图1A至图1C描述了本公开的实施例能够在其中被实施的示例场景。应当理解,在不违背本公开精神的情况下,根据本公开的导航方案还可以被用于其他适当的汇合场景中。
示例过程
以下将结合图2至图6来详细地描述根据本公开实施例的导航过程。图2示出了根据本公开的一些实施例的导航过程200的示意图。为便于讨论,参考图1来讨论具体的导航过程。过程200例如可以在图1所示的终端设备140处被执行。应当理解,过程200还可以包括未示出的框和/或可以省略所示出的框。本公开的范围在此方面不受限制。
如图2所示,在框202,终端设备140获取与第一对象110相关联的目的地位置130,其中目的地位置130还与第二对象120相关联,目的地位置130指示第一对象110能够汇合第二对象120的预定地点。
如参考图1A至图1C所讨论的,目的地位置130可以是交通工具的停靠位置、一组用户的汇合位置或交通工具的停泊位置等。应当理解,在本公开中,目的地位置130是用于对象之间汇合的,其通常是由系统自动地指定,进而导致用户可以难以准确地自主行进到该目的地位置130。
如上文所讨论的,终端设备140可以通过适当的方式来确定目的地位置130,在此不再重复描述。在一些实现中,目的地位置例如可以利用由经纬度组成的坐标来表示。
在框204,终端设备140确定第一对象110的当前位置。在一些实现中,终端设备140可以定期地确定第一对象的当前位置,例如以1Hz为频率,从而能够为第一对象提供更为实时且准确的导航。
取决于具体的场景,终端设备140可以利用适当的定位技术来确定第一对象110的当前位置。例如,在室外步行或骑行导航的场景中,终端设备140例如可以利用GPS定位技术和惯性导航定位技术等来确定当前位置。在室内导航的场景中,终端设备140例如也可以基于视觉定位或者UWB定位技术等来获取当前位置。
然而,在室外定位的场景中,GPS信号的不稳定可能导致定位出现较大的抖动或者偏差。为了提高定位的准确性,终端设备140还可以通过融合滤波的方式来获得更为准确的当前位置。以下将参考图3来描述框204的具体过程。
如图3所示,在框302,终端设备140可以获取与第一对象110相关联的终端设备140的定位信息,其中定位信息包括惯性导航位置推算信息和辅助定位信息。在一些实现中,辅助定位信息可以包括以下中的至少一项:GPS定位信息、视觉定位信息或定位模型的预测信息。附加地,定位模型可以为机器学习模型,并且被配置为基于终端设备140的惯性导航传感器数据确定第一对象110的预期位置或预期速度。
在一些实现中,在步行导航的场景中,终端设备140可以基于PDR(行人航位推算)技术来确定惯性导航推算信息。附加地,终端设备140还可以获取GPS定位信息或者视觉定位信息,并利用GPS定位信息或者视觉定位信息来校正经PDR技术确定的位置。
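作为理解PDR(行人航位推算)基本思路的一个简化示意(并非本公开实现的一部分,其中的步长、峰值阈值等参数均为示例性假设),下面的Python草图通过加速度模值的峰值粗略地检测"迈出一步",并沿航向角推进一个步长:

```python
import numpy as np

def pdr_step(positions, acc_norms, headings, step_length=0.7, peak_thresh=11.0):
    """简化的行人航位推算:在一个传感器窗口内检测一步,并沿航向推进一个步长。

    acc_norms: 窗口内的加速度模值序列(m/s^2)
    headings:  对应的航向角序列(弧度,相对正北)
    实际实现需要更稳健的步态检测与步长估计,此处仅示意思路。
    """
    x, y = positions[-1]
    if acc_norms.max() > peak_thresh:          # 以加速度峰值作为"迈步"的粗略判据
        heading = headings[int(np.argmax(acc_norms))]
        x += step_length * np.sin(heading)     # 东向分量
        y += step_length * np.cos(heading)     # 北向分量
        positions.append((x, y))
    return positions

# 用法示意:以约1秒的传感器窗口为单位反复调用
positions = [(0.0, 0.0)]
acc_window = np.array([9.8, 10.2, 12.1, 10.0, 9.7])
heading_window = np.full(5, np.deg2rad(30.0))
positions = pdr_step(positions, acc_window, heading_window)
```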
在一些实现中,终端设备140还可以利用定位模型来获得预测信息。示例性地,该定位模型可以获取终端设备140在过去预定时间段内的惯性导航传感器数据,例如,陀螺仪数据和/或加速度计数据,并能够基于这些惯性导航传感器数据来预测第一对象在该时刻的位置(已知初始位置的情况下)或速度。
在一些实现中,该定位模型是基于训练惯性导航传感器数据和对应的真值位置信息而被训练的,其中训练惯性导航传感器数据是由第一设备获取的,真值位置信息是定位准确性高于预定阈值的第二设备所确定的,第一设备和第二设备被物理上耦合以同步运动,从而建立所获取的训练惯性导航传感器数据和真值位置信息之间的关联。
示例性地,在训练定位模型的过程中,可以利用具有惯性导航传感器的第一设备以及具有更为准确定位能力的第二设备来获取用于训练的数据。具体地,第一设备和第二设备可以物理上耦合,以使得两者与同一物理位置相关联。例如,第一设备和第二设备可以物理上被绑定。
附加地,第一设备和第二设备执行预定的运动,并获取第一设备的惯性导航传感器数据以及第二设备的定位数据。在一些实现中,可以通过适当的时钟对齐技术,以使得惯性导航传感器数据被关联到相应的定位数据,从而形成训练数据。
这样的训练数据可以被输入到适当的机器学习模型(包括但不限于,深度神经网络、卷积神经网络、支持向量机模型或决策树模型等)。具体地,一段预定时间内的训练惯性导航传感器数据可以作为模型的输入特征,由第二设备获取的真值位置信息和/或基于真值位置信息确定的速度信息可以作为模型的参考真值(ground-truth),以使得该机器学习模型基于输入特征所预测的位置和/或速度与真值的差距小于预定的阈值。基于这样的方式,可以获得能够根据一定时间段内惯性传感器数据来预测当前速度和/或当前位置的预测模型。
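下面给出上述训练流程的一个极简化示意(仅为草图,回归器的选择、窗口长度与特征构造均为示例性假设,并非本公开限定的实现):由对齐后的IMU窗口构造输入特征,由真值位置差分得到速度真值,并训练一个简单的回归模型。

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_samples(imu_windows, truth_positions, dt=1.0):
    """将时钟对齐后的IMU窗口与真值位置构造为(特征, 速度真值)训练样本。

    imu_windows:     形如 (N, T, 6) 的数组,每个样本为一段陀螺仪+加速度计数据
    truth_positions: 形如 (N+1, 2) 的真值位置序列(由定位准确性更高的第二设备给出)
    """
    X = imu_windows.reshape(len(imu_windows), -1)                # 展平为输入特征
    v_truth = (truth_positions[1:] - truth_positions[:-1]) / dt  # 差分得到速度真值
    return X, v_truth

# 示例数据(实际应来自物理绑定、时钟对齐后的两台设备)
rng = np.random.default_rng(0)
imu_windows = rng.normal(size=(200, 50, 6))
truth_positions = np.cumsum(rng.normal(scale=0.5, size=(201, 2)), axis=0)

X, v_truth = build_samples(imu_windows, truth_positions)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=300)
model.fit(X, v_truth)                        # 使预测速度与真值速度之差尽量小
pred_velocity = model.predict(X[:1])         # 推断:由一段IMU数据预测当前速度
```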
在框304,终端设备140可以基于辅助定位信息校正惯性导航位置推算信息,以确定第一对象110的当前位置。在一些实现中,终端设备140可以以辅助定位信息作为约束,调整用于确定惯性导航位置推算信息的状态方程。示例性地,惯性导航位置推算信息是基于卡尔曼滤波器而被确定的。
在一些实现中,终端设备140例如可以基于辅助定位信息与惯性导航位置推算信息的差,来确定针对惯性导航位置推算信息的校正量。
基于以上讨论的融合定位技术,本公开的实施例能够排除由于GPS信号丢失或者漂移所带来的干扰,从而提高定位的准确性。
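作为框304中"以辅助定位信息校正惯性推算"的一个最小化示意(一维卡尔曼校正的草图,其中的噪声方差均为示例值,并非本公开限定的实现),可以参考如下代码:

```python
def kalman_correct(x_pred, p_pred, z_aux, r_aux):
    """用辅助定位观测 z_aux 校正惯性推算位置 x_pred,返回校正后的位置与不确定度。

    x_pred, p_pred: 惯性导航位置推算值及其不确定度(方差)
    z_aux, r_aux:   辅助定位观测(如GPS定位信息)及其观测噪声方差
    """
    k = p_pred / (p_pred + r_aux)            # 卡尔曼增益:辅助观测越可信,校正量越大
    x_new = x_pred + k * (z_aux - x_pred)    # 基于观测与推算之差确定校正量
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# 示意:PDR推算认为当前位置在 12.0 m 处,GPS观测为 14.5 m
x, p = kalman_correct(x_pred=12.0, p_pred=4.0, z_aux=14.5, r_aux=9.0)
```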
继续参考图2,在框206,终端设备140提供导航界面150,其中导航界面150包括叠加在与当前位置相关联的实景图像上的导航元素,导航元素是基于目的地位置130和当前位置而确定的。
图4示出了根据本公开实施例的示例导航界面150。导航界面150例如可以是通过终端设备140的显示设备所呈现的。应当理解,在智能眼镜或智能头戴式设备的场景中,导航界面150的呈现形式可以是不同的。
如图4所示,导航界面150可以包括实景图像410。如上文所讨论的,该实景图像410例如可以是由终端设备140所捕获的。在一些实现中,导航界面150中还包括叠加在实景图像410上的导航元素415-1和415-2。
如图4所示,导航元素415-1例如可以是定向元素,其用于指示目的地位置130相对于当前位置的方向。通过呈现定向元素而不是生成一条完整的导航路径,这可以减少终端设备140的计算量。此外,考虑到在某些场景中,例如出行场景,第一对象110与目的地位置130之间的距离通常相对较小,通过定向元素已经能够有效地指引第一对象110行进到目的地位置130。
在一些实现中,导航元素415-2例如可以包括文本元素,以例如指示目的地位置130与当前位置的距离。这能够使得第一对象110更好地估计行进到目的地位置130所需要的时间。
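定向元素所指示的"目的地相对于当前位置的方向",以及文本元素所指示的距离,可以由两点的经纬度坐标直接计算。下面是一个示意性的Python草图(采用常见的球面近似,并非本公开限定的算法):

```python
import math

def bearing_and_distance(lat1, lon1, lat2, lon2):
    """由当前位置 (lat1, lon1) 与目的地位置 (lat2, lon2) 计算方位角(度,正北为0)与距离(米)。"""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # 方位角
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    # 大圆距离(haversine)
    dphi = phi2 - phi1
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    distance = 2 * 6371000.0 * math.asin(math.sqrt(a))
    return bearing, distance

# 示意:由当前位置与目的地位置的经纬度得到方向与距离,用于驱动定向元素和距离文本
bearing, distance = bearing_and_distance(39.9042, 116.4074, 39.9060, 116.4100)
```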
在又一些实现中,导航界面150例如也可以包括其他适当的导航元素,这样的导航元素例如也可以基于从当前位置到目的地位置130的实时路径所确定的。示例性地,导航界面150可以包括路径导航元素,以引导第一对象110沿该路径行进到目的地位置130。
在一些实现中,如图4所示,导航界面150还可以包括二维地图部分420,其中二维地图部分420包括与第一对象110的当前位置和目的地位置130对应的视觉元素425-1和425-2。
在一些实现中,二维地图部分420例如还包括用于指示目的地位置130相对于当前位置的方向的第二视觉元素430。
通过结合二维地图和实景图像引导,本公开的实施例能够使得第一对象直观地了解其需要行进的路线,从而提高导航的准确性和用户交互的友好程度。
在一些实现中,二维地图部分420例如能够响应于在导航界面150上的预定操作而在导航界面150中被呈现或收起。示例性地,二维地图部分420例如可以默认是呈现状态,第一对象110例如可以通过点击收起控件440以将二维地图部分420收起,从而留出更大的面积来呈现实景图像410。随后,根据第一对象110的特定操作(例如,从底部上滑),二维地图部分420能够重新被呈现。
或者,二维地图部分420例如可以默认是收起状态,第一对象110可以执行预定操作(例如,从底部上滑)来使得二维地图部分420被呈现。应当理解,这样的具体交互方式仅是示意性的,还可以采用任何适当的交互设计来使得二维地图部分420被收起或者被呈现。
在一些实现中,终端设备140还可以确定终端设备140的朝向角,并且响应于朝向角与目的地位置130相对于当前位置的方向角的差大于预定阈值,终端设备140还可以在导航界面150中呈现关于第一对象110的当前行进方向可能错误的第一提醒。
示例性地,以图4作为示例,当前定向元素415-1指示终端设备140目前的朝向是适当的,也即第一对象110的行进方向是准确的。相反,如果终端设备140的朝向是图4所示的朝向的相反方向,那么终端设备140例如可以在导航界面150中呈现关于目的地位置130在第一对象110的当前朝向的正后方的提醒,并提醒用户调整行进方向。
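上述"行进方向可能错误"的判断,本质上是比较终端朝向角与目的地方位角之差,并处理角度回绕。一个示意性的草图如下(90度的阈值仅为示例性假设):

```python
def need_direction_warning(device_heading_deg, target_bearing_deg, threshold_deg=90.0):
    """当终端朝向角与目的地方位角之差(取最短角距)大于阈值时,返回 True 以提示提醒。"""
    diff = (target_bearing_deg - device_heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) > threshold_deg

# 示意:终端朝向200度,目的地位于20度方向 -> 差约180度,提示"目的地在您的正后方"
show_reminder = need_direction_warning(200.0, 20.0)
```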
在一些实现中,导航界面150例如还可以呈现与实景图像410相关联的兴趣点信息。示例性地,终端设备140可以确定与实景图像410相关联的至少一个兴趣点。图5示出了根据本公开的另一些实施例的示例导航界面的示意图。如图5所示,终端设备140例如可以基于地图信息或视觉识别技术确定实景图像410中包括了兴趣点“XX咖啡店”。
附加地,终端设备140可以在实景图像410中与至少一个兴趣点相对应的位置处,呈现与至少一个兴趣点相关联的信息520。如图5所示,终端设备140例如可以在与兴趣点“XX咖啡店”所对应的图像区域呈现信息520。应当理解,图5中的信息520的具体内容仅是示意性的,可以根据需要呈现与该兴趣点有关的任何适当信息。
应当理解,在智能眼镜的场景中,信息520可以被叠加呈现在第一对象110的真实视场中,而无需捕获或呈现数字图像。
通过这样的方式,本公开的实施例还能够为第一对象110提供到目的地位置130的行进过程中的相关兴趣点信息。例如,当距离汇合可能还有较长时间时,这能够帮助第一对象110更为便捷地发现一些期望的兴趣点,例如,可以在咖啡店等待第二对象120到达汇合地点。
在一些实现中,终端设备140例如还可以在导航界面150中与实景图像410相关联地呈现会话区域510。在一些实现中,会话区域510例如可以呈现来自与第二对象120相关联的第二终端设备的消息。如图5所示,会话区域510例如可以呈现来自驾驶交通工具的司机通过终端设备所发送的信息。
备选地或附加地,会话区域510例如还可以生成发送到第二终端设备的消息。例如,第一对象110可以通过会话区域510来回复来自第二终端设备的消息,或者主动地发送消息到第二终端设备。
通过这样的方式,可以更为有效地促进不同对象到目的地位置的汇合,从而提高对象汇合的效率。
在一些实现中,为了避免会话区域510可能影响到实景图像410的显示,会话区域510例如可以在一定条件下才会被呈现。在一些实现中,当终端设备140接收到会话唤起操作时,终端设备140可以在导航界面150中呈现会话区域510。备选地或附加地,当终端设备140接收到来自第二终端设备的消息时,终端设备140也可以自动地在导航界面150中呈现会话区域510,以呈现所接收的消息。
在一些实现中,会话区域510例如还可以被自动地或手动地收起。示例性地,当终端设备140在预定时间段内没有接收到针对会话区域510的用户操作,并且没有接收到来自第二终端设备的消息时,会话区域510可以被自动地收起,以避免干扰第一对象110对于导航界面150的查看。
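会话区域510的呈现与收起逻辑可以概括为一个很小的状态判断。下面的草图仅用于示意这一条件组合(空闲时长等参数为示例性假设,并非本公开限定的交互实现):

```python
import time

class ChatPanel:
    """示意:收到会话唤起操作或新消息时呈现;空闲超时且无新消息时自动收起。"""

    def __init__(self, idle_timeout=30.0):
        self.visible = False
        self.idle_timeout = idle_timeout
        self.last_activity = 0.0

    def on_user_invoke(self):
        self.visible = True
        self.last_activity = time.monotonic()

    def on_message_received(self, message):
        self.visible = True                    # 收到来自第二终端设备的消息时自动呈现
        self.last_activity = time.monotonic()

    def tick(self):
        # 预定时间内既无用户操作也无新消息,则自动收起,避免遮挡实景图像
        if self.visible and time.monotonic() - self.last_activity > self.idle_timeout:
            self.visible = False
```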
图6示出了根据本公开的又一些实施例的示例导航界面的示意图。如图6所示,当第一对象110的当前位置与目的地位置130的距离小于预定阈值时,终端设备140例如可以呈现关于第一对象已经到达目的地位置附近的提醒,例如文本“您已经抵达目的地附近,请耐心等待。”
在一些实现中,当确定当前位置和目的地位置130的距离小于预定阈值时,终端设备140还可以在导航界面150中呈现与第二对象120相关联的对象信息610。如图6所示,对象信息610例如可以包括用于描述第二对象的外貌特征的外貌信息。
以第二对象120为用户作为示例,外貌信息例如可以包括其他用户的性别、身高或衣着等。这样的外貌信息例如可以由其他用户主动地上传至服务器以由终端设备140获取,或直接发送至终端设备140。
在一些实现中,对象信息例如还可以包括状态信息,其用于描述与第二对象120有关的状态。状态信息的示例包括但不限于:第二对象120是否已经到达、第二对象120的当前位置、第二对象120距目的地位置130的距离、第二对象120预期到达目的地位置130的时间、或它们的任何组合。
通过提供第二对象120的对象信息,可以进一步提高第一对象110与第二对象120的汇合效率。
在一些实现中,终端设备140还可以利用外貌信息来确定实景图像410中是否存在第二对象120。在一些实现中,终端设备140可以在确定第二对象120距离目的地位置130的距离小于预定阈值时才基于外貌信息来执行针对实景图像410的对象检测,以确定实景图像410中是否存在第二对象120。通过这样的方式,可以避免终端设备140执行无效的计算。
在一些实现中,当确定实景图像410中存在第二对象120时,终端设备140可以在实景图像410的对应区域中呈现第三视觉元素,其中第三视觉元素指示第二对象120在实景图像410中的位置。
示例性地,当终端设备140例如可以基于外貌信息(例如,车牌号、车型和颜色中的一项或多项)识别出实景图像410包括对应的第二对象120(例如,车辆)时,终端设备140例如可以生成围绕该第二对象120在实景图像410中的边界的线条,以指示第二对象120的位置。
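上述距离门控的对象检测可以概括为:仅当第二对象足够接近目的地位置时,才对实景图像运行检测,并用外貌信息(例如车牌号、颜色)过滤检测结果。下面的Python草图中,VehicleBox 与 detect_vehicles 均为示意性的占位类型和占位函数,代表任意可用的检测与识别能力,并非特定库的API:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VehicleBox:
    x: int
    y: int
    w: int
    h: int
    color: str
    plate: str

def detect_vehicles(frame) -> List[VehicleBox]:
    """占位的检测接口:实际实现可替换为任意车辆检测与车牌识别模型。"""
    return []

def find_target_vehicle(frame, appearance: dict,
                        second_obj_to_destination_m: float,
                        gate_distance: float = 100.0) -> Optional[VehicleBox]:
    """仅当第二对象距目的地位置足够近时才执行检测,并按外貌信息过滤候选框。"""
    if second_obj_to_destination_m >= gate_distance:
        return None                            # 距离门控:避免无效的计算
    for box in detect_vehicles(frame):
        if box.color == appearance.get("color") and box.plate == appearance.get("plate"):
            return box                         # 命中:可在该区域叠加第三视觉元素
    return None
```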
备选地,终端设备140还可以利用例如图钉等视觉元素620来指示第二对象120的当前位置,从而引导第一对象110与第二对象120汇合。
在一些实现中,终端设备140还可以在导航界面150中呈现与目的地位置130相关联的街景图片。具体地,当第一对象110接近目的地位置130时,第一对象110可能还期望能够在实景图像410中准确地定位目的地位置130。
在一些实现中,当第一对象的当前位置与目的地位置130的距离小于预定阈值时,终端设备140可以在导航界面150中自动地呈现与目的地位置130相关联的街景图片。
在又一些实现中,当在导航界面150上接收到用于查看街景图片的预定操作时,终端设备140也可以在导航界面150中呈现与目的地位置130相关联的街景图片。
在一些实现中,街景图片可以被叠加呈现在实景图像410上的预定区域,其中预定区域是基于目的地位置而被确定的。例如,终端设备140可以基于目的地位置距离当前位置的方向和距离,而在图像坐标系中确定显示该街景图片的预定区域。在一些实现中,预定区域的大小例如还可以根据当前位置与目的地位置的距离而动态地变化。
通过这样的方式,能够更为有效地帮助第一对象110定位目的地位置,或者定位目的地位置周边的环境特征。
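将街景图片叠加到实景图像上的"预定区域",可以由目的地位置相对当前位置的方位与距离,结合相机的水平视场角,换算为图像坐标系中的一个矩形,且区域大小随距离增大而缩小。下面是该几何换算的一个示意性草图(视场角、尺寸系数等均为示例性假设,并非本公开限定的实现):

```python
def overlay_region(rel_bearing_deg, distance_m, image_w=1080, image_h=1920,
                   fov_deg=60.0, base_size=600.0):
    """根据目的地的相对方位角与距离,计算街景图片在实景图像中的叠加矩形 (left, top, w, h)。

    rel_bearing_deg: 目的地方位角减去相机朝向角后的相对角度(度);
    返回 None 表示目的地不在当前视场内。
    """
    if abs(rel_bearing_deg) > fov_deg / 2:
        return None
    # 水平位置:相对角度线性映射到图像横坐标(针孔模型的简化近似)
    cx = image_w / 2 + (rel_bearing_deg / (fov_deg / 2)) * (image_w / 2)
    # 区域大小与距离成反比:距离越近,矩形越大
    size = max(120.0, min(base_size, base_size * 30.0 / max(distance_m, 1.0)))
    left = int(cx - size / 2)
    top = int(image_h * 0.3)                  # 叠加在画面上部的示例位置
    return left, top, int(size), int(size)

# 示意:目的地位于相机朝向右侧10度、约60米处
region = overlay_region(rel_bearing_deg=10.0, distance_m=60.0)
```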
在一些实现中,响应于确定第二对象120的当前位置与目的地位置130的距离小于预定阈值,终端设备140还可以在导航界面中呈现关于第二对象120已经到达目的地位置130的提醒。
示例性地,例如当接驾的车辆已经到达接驾位置,但用户距离目的地位置还有一定距离时,终端设备140可以在导航界面150中生成提醒,以告知车辆已经到达。这能够提醒用户适当加快速度,从而避免车辆等待过长时间。
基于上文所讨论的导航过程,本公开的实施例能够利用对象周边的实景图像来为待汇合的对象提供更为直观的导航,从而提高对象之间汇合的效率。
示例装置和设备
本公开的实施例还提供了用于实现上述方法或过程的相应装置。图7示出了根据本公开的一些实施例的导航装置700的示意性结构框图。
如图7所示,装置700包括目的地位置获取模块710,被配置为获取与第一对象相关联的目的地位置,其中目的地位置还与第二对象相关联,目的地位置指示第一对象能够汇合第二对象的预定地点。装置700还包括当前位置确定模块720,被配置为确定第一对象的当前位置。此外,装置700还包括导航模块730,被配置为提供导航界面,所述导航界面包括叠加在与所述当前位置相关联的实景图像上的导航元素,所述导航元素是基于所述目的地位置和所述当前位置而确定的。
在一些实现中,实景图像是由与第一对象相关联的第一终端设备所获取的。
在一些实现中,第二对象包括用于出行服务的车辆,并且目的地位置获取模块710包括:订单解析模块,被配置为基于与所述第一对象相关联的出行订单,确定与所述出行订单相关联的停靠位置,以作为所述目的地位置。
在一些实现中,停靠位置是基于所述出行订单而被自动确定的。
在一些实现中,第二对象为经停泊的车辆,并且目的地位置获取模块710包括:停泊位置确定模块,被配置为确定车辆的停泊位置,以作为目的地位置。
在一些实现中,停泊位置是在车辆完成停泊后被自动记录的。
在一些实现中,第一对象和第二对象包括待汇合的一组用户,并且目的地位置是为该组用户指定的汇合地点。
在一些实现中,导航界面向第一对象提供至目的地的步行导航或骑行导航。
在一些实现中,当前位置确定模块720包括:定位信息获取模块,被配置为获取与第一对象相关联的第一终端设备的定位信息,定位信息包括惯性导航位置推算信息和辅助定位信息,辅助定位信息包括以下中的至少一项:GPS定位信息、视觉定位信息或定位模型的预测信息,其中定位模型为机器学习模型,定位模型被配置为基于第一终端设备的惯性导航传感器数据确定第一对象的预期位置或预期速度;以及校正模块,被配置为基于辅助定位信息,校正惯性导航位置推算信息,以确定第一对象的当前位置。
在一些实现中,定位模型是基于训练惯性导航传感器数据和对应的真值位置信息而被训练的,训练惯性导航传感器数据是由第一设备获取的,真值位置信息是定位准确性高于预定阈值的第二设备所确定的,第一设备和第二设备被物理上耦合以同步运动。
在一些实现中,校正模块包括:调整模块,被配置为以辅助定位信息作为约束,调整用于确定惯性导航位置推算信息的状态方程。
在一些实现中,惯性导航位置推算信息是基于卡尔曼滤波器而被确定的。
在一些实现中,导航元素包括定向元素,定向元素用于指示目的地位置相对于当前位置的方向。
在一些实现中,导航界面还包括二维地图部分,二维地图部分包括与第一对象的当前位置和目的地位置对应的第一视觉元素。
在一些实现中,二维地图部分还包括用于指示目的地位置相对于当前位置的方向的第二视觉元素。
在一些实现中,二维地图部分能够响应于在导航界面上的预定操作而在导航界面中被呈现或收起。
在一些实现中,装置700还包括:角度确定模块,被配置为确定第一终端设备的朝向角;以及第一提醒模块,被配置为响应于朝向角与目的地位置相对于当前位置的方向角的差大于预定阈值,在导航界面中呈现关于第一对象的当前行进方向可能错误的第一提醒。
在一些实现中,装置700还包括:兴趣点确定模块,被配置为确定与实景图像相关联的至少一个兴趣点;以及兴趣点信息提供模块,被配置为在实景图像中与至少一个兴趣点相对应的位置处,呈现与至少一个兴趣点相关联的信息。
在一些实现中,装置700还包括:街景图片呈现模块,被配置为响应于以下中的至少一项,呈现与目的地位置相关联的街景图片:第一对象的当前位置与目的地位置的距离小于预定阈值;或在所述导航界面上接收到用于查看街景图片的预定操作。
在一些实现中,街景图片被叠加呈现在实景图像上的预定区域,预定区域是基于目的地位置而被确定的。
在一些实现中,装置700还包括:对象信息提供模块,被配置为响应于确定当前位置和目的地位置的距离小于预定阈值,在导航界面中呈现与第二对象相关联的对象信息。
在一些实现中,对象信息包括以下中的至少一项:外貌信息,用于描述第二对象的外貌特征;以及状态信息,用于描述以下中的至少一项:第二对象是否已经到达、第二对象的当前位置、第二对象距目的地位置的距离、或第二对象预期到达目的地位置的时间。
在一些实现中,装置700还包括:外貌信息获取模块,被配置为获取与第二对象相关联的外貌信息;以及识别模块,被配置为基于外貌信息,确定实景图像中是否存在第二对象。
在一些实现中,识别模块包括:对象检测模块,被配置为响应于确定第二对象与目的地位置的距离小于预定阈值,基于外貌信息来执行针对实景图像的对象检测,以确定实景图像中是否存在第二对象。
在一些实现中,装置700还包括:对象提示模块,被配置为响应于确定实景图像中存在第二对象,在实景图像的对应区域中呈现第三视觉元素,第三视觉元素指示第二对象在实景图像中的位置。
在一些实现中,装置700还包括:第二提醒模块,被配置为响应于确定当前位置与目的地位置的距离小于预定阈值,呈现关于第一对象已经到达目的地位置附近的第二提醒。
在一些实现中,装置700还包括:会话呈现模块,被配置为与实景图像相关联地呈现会话区域,会话区域被配置为:呈现来自与第二对象相关联的第二终端设备的消息;或生成发送到第二终端设备的消息。
在一些实现中,会话区域响应于以下中的至少一项而被呈现:接收到会话唤起操作;或接收到来自第二终端设备的消息。
在一些实现中,会话区域响应于以下中的至少一项而被自动地收起:在预定时间段内没有接收到针对会话区域的用户操作,且没有接收到来自第二终端设备的消息。
在一些实现中,装置700还包括:第三提醒模块,被配置为响应于确定第二对象的当前位置与目的地位置的距离小于预定阈值,在导航界面中呈现关于第二对象已经到达目的地位置的第三提醒。
装置700中所包括的单元可以利用各种方式来实现,包括软件、硬件、固件或其任意组合。在一些实施例中,一个或多个单元可以使用软件和/或固件来实现,例如存储在存储介质上的机器可执行指令。除了机器可执行指令之外或者作为替代,装置700中的部分或者全部单元可以至少部分地由一个或多个硬件逻辑组件来实现。作为示例而非限制,可以使用的示范类型的硬件逻辑组件包括现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准品(ASSP)、片上系统(SOC)、复杂可编程逻辑器件(CPLD),等等。
图8示出了其中可以实施本公开的一个或多个实施例的计算设备/服务器800的框图。应当理解,图8所示出的计算设备/服务器800仅仅是示例性的,而不应当构成对本文所描述的实施例的功能和范围的任何限制。
如图8所示,计算设备/服务器800是通用计算设备的形式。计算设备/服务器800的组件可以包括但不限于一个或多个处理器或处理单元810、存储器820、存储设备830、一个或多个通信单元840、一个或多个输入设备850以及一个或多个输出设备860。处理单元810可以是实际或虚拟处理器并且能够根据存储器820中存储的程序来执行各种处理。在多处理器系统中,多个处理单元并行执行计算机可执行指令,以提高计算设备/服务器800的并行处理能力。
计算设备/服务器800通常包括多个计算机存储介质。这样的介质可以是计算设备/服务器800可访问的任何可以获得的介质,包括但不限于易失性和非易失性介质、可拆卸和不可拆卸介质。存储器820可以是易失性存储器(例如寄存器、高速缓存、随机访问存储器(RAM))、非易失性存储器(例如,只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、闪存)或它们的某种组合。存储设备830可以是可拆卸或不可拆卸的介质,并且可以包括机器可读介质,诸如闪存驱动、磁盘或者任何其他介质,其可以能够用于存储信息和/或数据(例如用于训练的训练数据)并且可以在计算设备/服务器800内被访问。
计算设备/服务器800可以进一步包括另外的可拆卸/不可拆卸、易失性/非易失性存储介质。尽管未在图8中示出,可以提供用于从可拆卸、非易失性磁盘(例如“软盘”)进行读取或写入的磁盘驱动和用于从可拆卸、非易失性光盘进行读取或写入的光盘驱动。在这些情况中,每个驱动可以由一个或多个数据介质接口被连接至总线(未示出)。存储器820可以包括计算机程序产品825,其具有一个或多个程序模块,这些程序模块被配置为执行本公开的各种实施例的各种方法或动作。
通信单元840实现通过通信介质与其他计算设备进行通信。附加地,计算设备/服务器800的组件的功能可以以单个计算集群或多个计算机器来实现,这些计算机器能够通过通信连接进行通信。因此,计算设备/服务器800可以使用与一个或多个其他服务器、网络个人计算机(PC)或者另一个网络节点的逻辑连接来在联网环境中进行操作。
输入设备850可以是一个或多个输入设备,例如鼠标、键盘、追踪球等。输出设备860可以是一个或多个输出设备,例如显示器、扬声器、打印机等。计算设备/服务器800还可以根据需要通过通信单元840与一个或多个外部设备(未示出)进行通信,外部设备诸如存储设备、显示设备等,与一个或多个使得用户与计算设备/服务器800交互的设备进行通信,或者与使得计算设备/服务器800与一个或多个其他计算设备通信的任何设备(例如,网卡、调制解调器等)进行通信。这样的通信可以经由输入/输出(I/O)接口(未示出)来执行。
根据本公开的示例性实现方式,提供了一种计算机可读存储介质,其上存储有一条或多条计算机指令,其中一条或多条计算机指令被处理器执行以实现上文描述的方法。
这里参照根据本公开实现的方法、装置(系统)和计算机程序产品的流程图和/或框图描述了本公开的各个方面。应当理解,流程图和/或框图的每个方框以及流程图 和/或框图中各方框的组合,都可以由计算机可读程序指令实现。
这些计算机可读程序指令可以提供给通用计算机、专用计算机或其他可编程数据处理装置的处理单元,从而生产出一种机器,使得这些指令在通过计算机或其他可编程数据处理装置的处理单元执行时,产生了实现流程图和/或框图中的一个或多个方框中规定的功能/动作的装置。也可以把这些计算机可读程序指令存储在计算机可读存储介质中,这些指令使得计算机、可编程数据处理装置和/或其他设备以特定方式工作,从而,存储有指令的计算机可读介质则包括一个制造品,其包括实现流程图和/或框图中的一个或多个方框中规定的功能/动作的各个方面的指令。
也可以把计算机可读程序指令加载到计算机、其他可编程数据处理装置、或其他设备上,使得在计算机、其他可编程数据处理装置或其他设备上执行一系列操作步骤,以产生计算机实现的过程,从而使得在计算机、其他可编程数据处理装置、或其他设备上执行的指令实现流程图和/或框图中的一个或多个方框中规定的功能/动作。
附图中的流程图和框图显示了根据本公开的多个实现的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段或指令的一部分,模块、程序段或指令的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
以上已经描述了本公开的各实现,上述说明是示例性的,并非穷尽性的,并且也不限于所公开的各实现。在不偏离所说明的各实现的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择,旨在最好地解释各实现的原理、实际应用或对市场中的技术的改进,或者使本技术领域的其他普通技术人员能理解本文公开的各实现。

Claims (14)

  1. 一种导航方法,包括:
    获取与第一对象相关联的目的地位置,其中所述目的地位置还与第二对象相关联,所述目的地位置指示所述第一对象能够汇合所述第二对象的预定地点;
    确定所述第一对象的当前位置;以及
    提供导航界面,所述导航界面包括叠加在与所述当前位置相关联的实景图像上的导航元素,所述导航元素是基于所述目的地位置和所述当前位置而确定的。
  2. 根据权利要求1所述的方法,其中所述第二对象包括用于出行服务的车辆,并且获取所述目的地位置包括:
    基于与所述第一对象相关联的出行订单,确定与所述出行订单相关联的停靠位置,以作为所述目的地位置。
  3. 根据权利要求1所述的方法,其中确定所述第一对象的当前位置包括:
    获取与所述第一对象相关联的第一终端设备的定位信息,所述定位信息包括惯性导航位置推算信息和辅助定位信息,所述辅助定位信息包括以下中的至少一项:GPS定位信息、视觉定位信息或定位模型的预测信息,
    其中所述定位模型为机器学习模型,所述定位模型被配置为基于所述第一终端设备的惯性导航传感器数据确定所述第一对象的预期位置或预期速度;以及
    基于所述辅助定位信息,校正所述惯性导航位置推算信息,以确定所述第一对象的所述当前位置。
  4. 根据权利要求3所述的方法,其中所述定位模型是基于训练惯性导航传感器数据和对应的真值位置信息而被训练的,所述训练惯性导航传感器数据是由第一设备所获取的,所述真值位置信息是定位准确性高于预定阈值的第二设备所确定的,所述第一设备和所述第二设备被物理上耦合以同步运动。
  5. 根据权利要求3所述的方法,其中基于所述辅助定位信息校正所述惯性导航位置推算信息包括:
    以所述辅助定位信息作为约束,调整用于确定所述惯性导航位置推算信息的状态方程。
  6. 根据权利要求1所述的方法,其中所述导航元素包括定向元素,所述定向元素用于指示所述目的地位置相对于所述当前位置的方向。
  7. 根据权利要求1所述的方法,还包括:
    响应于确定所述当前位置和所述目的地位置的距离小于预定阈值,在所述导航界面中呈现与所述第二对象相关联的对象信息。
  8. 根据权利要求7所述的方法,其中所述对象信息包括以下中的至少一项:
    外貌信息,用于描述所述第二对象的外貌特征;以及
    状态信息,用于描述以下中的至少一项:所述第二对象是否已经到达、所述第二对象的当前位置、所述第二对象距所述目的地位置的距离、或所述第二对象预期到达所述目的地位置的时间。
  9. 根据权利要求1所述的方法,还包括:
    获取与所述第二对象相关联的外貌信息;以及
    基于所述外貌信息,确定所述实景图像中是否存在所述第二对象。
  10. 根据权利要求9所述的方法,其中确定所述实景图像中是否存在所述第二对象包括:
    响应于确定所述第二对象与所述目的地位置的距离小于预定阈值,基于所述外貌信息来执行针对所述实景图像的对象检测,以确定所述实景图像中是否存在所述第二对象。
  11. 一种导航装置,包括:
    目的地位置获取模块,被配置为获取与第一对象相关联的目的地位置,其中所述目的地位置还与第二对象相关联,所述目的地位置指示所述第一对象能够汇合所述第二对象的预定地点;
    当前位置确定模块,被配置为确定所述第一对象的当前位置;以及
    导航模块,被配置为提供导航界面,所述导航界面包括叠加在与所述当前位置相关联的实景图像上的导航元素,所述导航元素是基于所述目的地位置和所述当前位置而确定的。
  12. 一种电子设备,包括:
    存储器和处理器;
    其中所述存储器用于存储一条或多条计算机指令,其中所述一条或多条计算机指令被所述处理器执行以实现根据权利要求1至10中任一项所述的方法。
  13. 一种计算机可读存储介质,其上存储有一条或多条计算机指令,其中所述一条或多条计算机指令被处理器执行以实现根据权利要求1至10中任一项所述的方法。
  14. 一种计算机程序产品,包括计算机可执行指令,其中所述计算机可执行指令在被处理器执行时实现根据权利要求1至10中任一项所述的方法。
PCT/CN2022/071005 2021-01-18 2022-01-10 导航方法和装置 WO2022152081A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110062452.2A CN112781601A (zh) 2021-01-18 2021-01-18 导航方法和装置
CN202110062452.2 2021-01-18

Publications (1)

Publication Number Publication Date
WO2022152081A1 true WO2022152081A1 (zh) 2022-07-21

Family

ID=75756387

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/071005 WO2022152081A1 (zh) 2021-01-18 2022-01-10 导航方法和装置

Country Status (2)

Country Link
CN (1) CN112781601A (zh)
WO (1) WO2022152081A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112781601A (zh) * 2021-01-18 2021-05-11 北京嘀嘀无限科技发展有限公司 导航方法和装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105371860A (zh) * 2015-11-17 2016-03-02 广东欧珀移动通信有限公司 一种导航路线的生成方法及终端
US20170139578A1 (en) * 2015-11-18 2017-05-18 Samsung Electronics Co., Ltd System and method for 360-degree video navigation
CN106920079A (zh) * 2016-12-13 2017-07-04 阿里巴巴集团控股有限公司 基于增强现实的虚拟对象分配方法及装置
CN108088450A (zh) * 2016-11-21 2018-05-29 北京嘀嘀无限科技发展有限公司 导航方法及装置
CN109655060A (zh) * 2019-02-19 2019-04-19 济南大学 基于kf/fir和ls-svm融合的ins/uwb组合导航算法及系统
CN112781601A (zh) * 2021-01-18 2021-05-11 北京嘀嘀无限科技发展有限公司 导航方法和装置

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108332765B (zh) * 2018-01-18 2020-09-22 维沃移动通信有限公司 拼车出行路线生成方法及装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105371860A (zh) * 2015-11-17 2016-03-02 广东欧珀移动通信有限公司 一种导航路线的生成方法及终端
US20170139578A1 (en) * 2015-11-18 2017-05-18 Samsung Electronics Co., Ltd System and method for 360-degree video navigation
CN108088450A (zh) * 2016-11-21 2018-05-29 北京嘀嘀无限科技发展有限公司 导航方法及装置
CN106920079A (zh) * 2016-12-13 2017-07-04 阿里巴巴集团控股有限公司 基于增强现实的虚拟对象分配方法及装置
CN109655060A (zh) * 2019-02-19 2019-04-19 济南大学 基于kf/fir和ls-svm融合的ins/uwb组合导航算法及系统
CN112781601A (zh) * 2021-01-18 2021-05-11 北京嘀嘀无限科技发展有限公司 导航方法和装置

Also Published As

Publication number Publication date
CN112781601A (zh) 2021-05-11

Similar Documents

Publication Publication Date Title
US11692842B2 (en) Augmented reality maps
US11275447B2 (en) System and method for gesture-based point of interest search
US11604069B2 (en) Localizing transportation requests utilizing an image based transportation request interface
KR101932003B1 (ko) 실시간 동적으로 결정된 감지에 기초하여 무인 운전 차량에서 콘텐츠를 제공하는 시스템 및 방법
US11698268B2 (en) Street-level guidance via route path
CN109461208B (zh) 三维地图处理方法、装置、介质和计算设备
WO2021121306A1 (zh) 视觉定位方法和系统
US9424255B2 (en) Server-assisted object recognition and tracking for mobile devices
US11676303B2 (en) Method and apparatus for improved location decisions based on surroundings
KR20170127342A (ko) 자율 주행 차량 내에서 증강 가상 현실 콘텐츠를 제공하는 시스템 및 방법
CN112101339B (zh) 地图兴趣点的信息获取方法、装置、电子设备和存储介质
US9128170B2 (en) Locating mobile devices
JPWO2020039937A1 (ja) 位置座標推定装置、位置座標推定方法およびプログラム
US9791287B2 (en) Drive assist system, method, and program
US20230252689A1 (en) Map driven augmented reality
JP2020086659A (ja) 情報処理システム、プログラム、及び情報処理方法
WO2022152081A1 (zh) 导航方法和装置
US9506768B2 (en) Adaptive route proposals based on prior rides
US11656089B2 (en) Map driven augmented reality
CN112987707A (zh) 一种车辆的自动驾驶控制方法及装置
US20240142239A1 (en) Method, device, system and computer readable storage medium for locating vehicles
CN115527021A (zh) 一种用户定位方法及设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22738962

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02/11/2023)