WO2021253995A1 - 一种为用户提供实景图的方法及系统 (Method and system for providing a user with real-scene images) - Google Patents

一种为用户提供实景图的方法及系统 (Method and system for providing a user with real-scene images)

Info

Publication number
WO2021253995A1
Authority
WIPO (PCT)
Prior art keywords
real, user, target, scenic spot, candidate
Application number
PCT/CN2021/090486
Other languages
English (en), French (fr)
Inventor
叶次昌
王立群
李益言
Original Assignee
北京嘀嘀无限科技发展有限公司 (Beijing Didi Infinity Technology and Development Co., Ltd.)
Application filed by 北京嘀嘀无限科技发展有限公司 (Beijing Didi Infinity Technology and Development Co., Ltd.)
Publication of WO2021253995A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26: Navigation specially adapted for navigation in a road network
    • G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G01C21/34: Route searching; Route guidance
    • G01C21/3446: Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00: Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42: Determining position
    • G01S19/48: Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system

Definitions

  • The present invention relates to the field of computer technology, and in particular to a method and system for providing a user with real-scene images.
  • The navigation system can be optimized to better help the user reach the target location.
  • The embodiments of this specification propose a method and system for providing a user with real-scene images, which can accurately guide the user to the target location.
  • One aspect of the embodiments of this specification provides a method for providing a user with a real-scene image, including: acquiring at least one candidate real-scene spot based on a target location; determining, based on the target location, the user's movement direction, and the at least one candidate real-scene spot, whether a preset condition is met; in response to the preset condition being met, determining a target real-scene spot from the candidate real-scene spots for which the preset condition is met; determining, based on the target real-scene spot, the real-scene image corresponding to the target location; and displaying the real-scene image on a navigation interface related to the user.
  • Another aspect of the embodiments of this specification provides a system for providing a user with real-scene images, including: an acquisition module for acquiring at least one candidate real-scene spot based on a target location; a judgment module for determining, based on the target location, the user's movement direction, and the at least one candidate real-scene spot, whether a preset condition is met; a determination module for determining, in response to the preset condition being met, a target real-scene spot from the candidate real-scene spots for which the preset condition is met; and a display module for determining the real-scene image corresponding to the target location based on the target real-scene spot and for displaying the real-scene image on a navigation interface related to the user.
  • Another aspect of the embodiments of this specification provides a device including a processor and a memory, where the memory is used to store instructions, and the processor is used to execute the instructions to implement the operations corresponding to the method for displaying real-scene images for the user described above.
  • One aspect of the embodiments of this specification provides a computer-readable storage medium that stores computer instructions; after a computer reads the computer instructions in the storage medium, the computer can implement the operations corresponding to the method for displaying real-scene images for the user described above.
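  • To make the claimed flow concrete, the following is a minimal Python sketch of the four claimed operations; the helper functions passed in (get_candidate_spots, meets_preset_condition, fetch_image, render_on_navigation_interface) are illustrative assumptions, not part of the disclosure.

      from typing import Callable, List, Optional, Tuple

      Point = Tuple[float, float]  # (x, y) in a local planar approximation; an assumed convention

      def provide_real_scene_image(
          target: Point,
          movement_direction_deg: float,
          get_candidate_spots: Callable[[Point], List[Point]],
          meets_preset_condition: Callable[[Point, Point, float], bool],
          fetch_image: Callable[[Point], bytes],
          render_on_navigation_interface: Callable[[bytes], None],
      ) -> Optional[Point]:
          """Sketch of the claimed method: candidates -> preset condition -> target spot -> display."""
          # 1. Acquire at least one candidate real-scene spot based on the target location.
          candidates = get_candidate_spots(target)
          # 2. Judge the preset condition for each candidate real-scene spot.
          satisfying = [c for c in candidates
                        if meets_preset_condition(c, target, movement_direction_deg)]
          if not satisfying:
              return None  # no target real-scene spot can be determined
          # 3. Determine the target real-scene spot, e.g. the satisfying candidate
          #    closest to the target location.
          target_spot = min(satisfying,
                            key=lambda c: (c[0] - target[0]) ** 2 + (c[1] - target[1]) ** 2)
          # 4. Determine the real-scene image corresponding to the target location
          #    and display it on the navigation interface related to the user.
          render_on_navigation_interface(fetch_image(target_spot))
          return target_spot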
  • Fig. 1 is a schematic diagram of an application scenario of a real-scene image providing system according to some embodiments of this specification;
  • Fig. 2 is a schematic diagram of an exemplary computing device according to some embodiments of this specification;
  • Fig. 3 is a schematic diagram of exemplary hardware and/or software of a mobile device according to some embodiments of this specification;
  • Fig. 4 is a block diagram of a system for providing a user with real-scene images according to some embodiments of this specification;
  • Fig. 5 is an exemplary flowchart of a method for providing a user with a real-scene image according to some embodiments of this specification;
  • Fig. 6 is a schematic diagram of determining the preset condition according to some embodiments of this specification;
  • Fig. 7 is another exemplary flowchart of a method for providing a user with a real-scene image according to some embodiments of this specification;
  • Fig. 8 is an exemplary flowchart of determining whether the preset condition is satisfied according to some embodiments of this specification;
  • Fig. 9 is an example diagram of determining the target real-scene spot according to some embodiments of this specification;
  • Fig. 10 is an exemplary flowchart of displaying a real-scene image according to some embodiments of this specification;
  • Fig. 11 is another exemplary flowchart of displaying a real-scene image according to some embodiments of this specification;
  • Figs. 12a and 12b are schematic diagrams of displaying real-scene images according to some embodiments of this specification;
  • Fig. 13 is an exemplary flowchart of prompting a user with a positional relationship according to some embodiments of this specification;
  • Fig. 14 is an example diagram of determining the positional relationship between a user and a target location according to some embodiments of this specification;
  • Fig. 15 is a schematic diagram of a user-related navigation interface according to some embodiments of this specification;
  • Fig. 16 is another exemplary flowchart of providing a user with a real-scene image according to some embodiments of this specification.
  • The term "system" as used in this specification is a way of distinguishing different components, elements, parts, or assemblies at different levels.
  • The words may be replaced by other expressions if they achieve the same purpose.
  • Fig. 1 is a schematic diagram of an application scenario of a real-scene image providing system according to some embodiments of this specification.
  • The real-scene image providing system 100 can be applied to a map service system, a navigation system, a transportation system, a traffic service system, and the like.
  • The real-scene image providing system 100 can be applied to an online service platform that provides Internet services.
  • The real-scene image providing system 100 can be applied to online car-hailing services, such as taxi calls, express calls, private car calls, minibus calls, carpooling, bus services, driver hire, and pick-up services.
  • The real-scene image providing system 100 may also be applied to driving services, express delivery, takeaway, and the like.
  • The real-scene image providing system 100 may be an online service platform, including a server 110, a network 120, a terminal 130, and a database 140.
  • the server 110 may include a processing device 112.
  • the server 110 may be used to process information and/or data related to providing real-world images to users.
  • the server 110 may be an independent server or a server group.
  • the server group may be centralized or distributed (for example, the server 110 may be a distributed system).
  • the server 110 may be regional or remote.
  • the server 110 may access information and/or data stored in the terminal 130 and the database 140 via the network 120.
  • the server 110 can be directly connected to the terminal 130 and the database 140 to access the information and/or data stored therein.
  • the server 110 may be executed on a cloud platform.
  • the cloud platform may include one or any combination of private cloud, public cloud, hybrid cloud, community cloud, decentralized cloud, internal cloud, etc.
  • the server 110 may include a processing device 112.
  • The processing device 112 may process data and/or information related to providing the user with real-scene images to perform one or more functions described in this application. For example, the processing device 112 may determine at least one candidate real-scene spot based on the target location. As another example, the processing device 112 may determine whether the trigger condition is satisfied based on the user's current location and the target location. As another example, the processing device 112 may determine whether the preset condition is satisfied based on the user's movement direction, the target location, and the candidate real-scene spot. As another example, the processing device 112 may determine the angle and/or direction at which the real-scene image is displayed on the navigation interface based on the target location and the candidate real-scene spot.
  • The processing device 112 may include one or more sub-processing devices (for example, a single-core processing device or a multi-core processing device).
  • The processing device 112 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, etc., or any combination thereof.
  • the network 120 may facilitate the exchange of data and/or information.
  • one or more components in the system 100 may send data and/or information to other components through the network 120.
  • the network 120 may be any type of wired or wireless network.
  • The network 120 may include a cable network, a wired network, a fiber-optic network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, a near-field communication (NFC) network, etc.
  • the network 120 may include one or more network entry and exit points.
  • The network 120 may include wired or wireless network access points, such as base stations and/or Internet exchange points 120-1, 120-2, ..., through which one or more components of the system 100 can connect to the network 120 to exchange data and/or information.
  • the user of the terminal 130 may be a service provider.
  • the service provider may be an online ride-hailing driver, a food delivery person, a courier, and so on.
  • the user of the terminal 130 may also be a service user.
  • the service user may include a map service user, a navigation service user, a transportation service user, and so on.
  • the terminal 130 may include one or any combination of a mobile device 130-1, a tablet computer 130-2, a notebook computer 130-3, a vehicle built-in device (not shown), etc.
  • the mobile device 130-1 may include a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, etc., or any combination thereof.
  • the wearable device may include a smart bracelet, smart footwear, smart glasses, smart helmets, smart watches, smart clothes, smart backpacks, smart accessories, etc., or any combination thereof.
  • the smart mobile device may include a smart phone, a personal digital assistant (PDA), a game device, a navigation device, a POS device, etc., or any combination thereof.
  • the virtual reality device and/or augmented reality device may include a virtual reality helmet, virtual reality glasses, virtual reality goggles, augmented reality helmets, augmented reality glasses, augmented reality goggles, etc. or Any combination of the above.
  • the built-in device of the motor vehicle may include a car navigator, a car locator, a driving recorder, etc., or any combination thereof.
  • the terminal 130 may include a device with a positioning function to determine the location of the user and/or the terminal 130.
  • The terminal 130 may include a device with an interface display to display the real-scene image for the user of the terminal.
  • the terminal 130 may include a device having an input function for the user to input a target location.
  • The database 140 may store data and/or instructions. In some embodiments, the database 140 may store information obtained from the terminal 130. In some embodiments, the database 140 may store information and/or instructions for execution or use by the server 110 to perform the exemplary methods described in this application. In some embodiments, the database 140 may store real-scene spots (i.e., location information of the real-scene spots, e.g., latitude and longitude coordinates), the real-scene images corresponding to the real-scene spots, display angles or directions of the real-scene images, correction algorithms, and the like. In some embodiments, the database 140 may include mass storage, removable storage, volatile read-write memory (for example, random access memory (RAM)), read-only memory (ROM), etc., or any combination thereof. In some embodiments, the database 140 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, public cloud, hybrid cloud, community cloud, distributed cloud, internal cloud, etc., or any combination thereof.
  • the database 140 may be connected to the network 120 to communicate with one or more components of the system 100 (for example, the server 110, the terminal 130, etc.).
  • One or more components of the system 100 can access data or instructions stored in the database 140 via the network 120.
  • For example, the server 110 may obtain a real-scene spot or the real-scene image corresponding to the real-scene spot from the database 140 and perform corresponding processing.
  • the database 140 may directly connect or communicate with one or more components (eg, the server 110 and the terminal 130) in the system 100.
  • the database 140 may be part of the server 110.
  • Fig. 2 is a schematic diagram of an exemplary computing device according to some embodiments of the present specification.
  • the server 110 and/or the requester terminal 130 may be implemented on the computing device 200.
  • The processing device 112 may be implemented on the computing device 200 and execute the functions of the processing device 112 disclosed in this application.
  • the computing device 200 may include a bus 210, a processor 220, a read-only memory 230, a random access memory 240, a communication port 250, an input/output interface 260, and a hard disk 270.
  • The processor 220 may execute computing instructions (program code) and perform the functions of the real-scene image providing system 100 described in this application.
  • The computing instructions may include programs, objects, components, data structures, procedures, modules, and functions (where functions refer to the specific functions described in this application).
  • For example, the processor 220 may process image or text data obtained from any other components of the real-scene image providing system 100.
  • The processor 220 may include a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field-programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device, any circuit or processor that performs one or more functions, etc., or any combination thereof.
  • For illustration only, the computing device 200 in Fig. 2 is described with only one processor, but it should be noted that the computing device 200 in this application may also include multiple processors.
  • The memory of the computing device 200 may store data/information acquired from any other components of the real-scene image providing system 100.
  • Exemplary ROM may include mask ROM (MROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), compact disc ROM (CD-ROM), digital versatile disc ROM, etc.
  • Exemplary RAM may include dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR SDRAM), static RAM (SRAM), thyristor RAM (T-RAM), zero-capacitor RAM (Z-RAM), and the like.
  • the input/output interface 260 may be used to input or output signals, data or information. In some embodiments, the input/output interface 260 may enable the user to communicate with the real-view image providing system 100. In some embodiments, the input/output interface 260 may include an input device and an output device. Exemplary input devices may include a keyboard, a mouse, a touch screen, a microphone, etc., or any combination thereof. Exemplary output devices may include display devices, speakers, printers, projectors, etc., or any combination thereof. Exemplary display devices may include liquid crystal displays (LCD), light emitting diode (LED)-based displays, flat panel displays, curved displays, television equipment, cathode ray tubes (CRT), etc., or any combination thereof.
  • the communication port 250 can be connected to a network for data communication.
  • the connection may be a wired connection, a wireless connection, or a combination of both.
  • Wired connections can include cables, optical cables, or telephone lines, etc., or any combination thereof.
  • the wireless connection may include Bluetooth, Wi-Fi, WiMax, WLAN, ZigBee, mobile networks (for example, 3G, 4G, or 5G, etc.), etc., or any combination thereof.
  • the communication port 250 may be a standardized port, such as RS232, RS485, and so on.
  • the communication port 250 may be a specially designed port.
  • Fig. 3 is a schematic diagram of exemplary hardware and/or software of a mobile device according to some embodiments of the present specification.
  • the mobile device 300 may include a communication unit 310, a display unit 320, a graphics processing unit (GPU) 330, a central processing unit (CPU) 340, an input/output unit 350, a memory 360, a storage unit 370, and the like.
  • The operating system 361 may be, for example, iOS, Android, Windows Phone, etc.
  • The application program 362 may include a browser or an application for receiving text, image, audio, or other related information from the real-scene image providing system 100.
  • a computing device or a mobile device can be used as a hardware platform for one or more components described in this application.
  • The hardware components, operating systems, and programming languages of these computers or mobile devices are conventional in nature, and those skilled in the art, once familiar with these technologies, can adapt them to the real-scene image providing system described in this application.
  • A computer with user interface elements may be implemented as a personal computer (PC) or another type of workstation or terminal device; if properly programmed, the computer may also act as a server.
  • Fig. 4 is a block diagram of a system for providing users with real-world images according to some embodiments of the present specification.
  • a system (such as the processing device 112) that provides a user with a real-world image may include an acquisition module 410, a judgment module 420, a determination module 430, a display module 440, and a reminder module 450.
  • The acquisition module 410 is configured to acquire at least one candidate real-scene spot based on the target location. In some embodiments, the acquisition module 410 is further configured to: acquire at least one to-be-corrected candidate real-scene spot based on the target location; and correct the at least one to-be-corrected candidate real-scene spot using a correction algorithm to obtain the at least one candidate real-scene spot.
  • The judgment module 420 is configured to judge whether a preset condition is satisfied based on the target location, the user's movement direction, and the at least one candidate real-scene spot. In some embodiments, the judgment module is further configured to: determine a first direction based on the candidate real-scene spot and the target location; and determine, based on the angle between the first direction and the user's movement direction, whether the preset condition is met. In some embodiments, the judgment module is further configured to: judge, in order of the distances between the at least one candidate real-scene spot and the target location, whether the preset condition is satisfied based on the movement direction, the target location, and the candidate real-scene spot; when the judgment result is "satisfied," stop judging; otherwise, judge another candidate real-scene spot. In some embodiments, the judgment module is further used to judge whether a trigger condition is satisfied based on the user's current location and the target location, and the target real-scene spot is determined only when the trigger condition is satisfied.
  • The determination module 430 is configured to, in response to the preset condition being satisfied, determine the target real-scene spot from the candidate real-scene spots for which the preset condition is satisfied.
  • The display module 440 is configured to determine the real-scene image corresponding to the target location based on the target real-scene spot, and to display the real-scene image on a navigation interface related to the user. In some embodiments, the display module 440 is further configured to: determine the display direction and/or included angle of the real-scene image based on the target location and the target real-scene spot; and display the real-scene image on the navigation interface based on the display direction and/or the included angle. In some embodiments, the display module 440 is further configured to: determine a zoom-in or zoom-out parameter of the real-scene image based on the user's current location and the target location; and display the real-scene image on the navigation interface based on the zoom-in or zoom-out parameter.
  • The reminder module 450 is configured to: determine the projection point of the target location on the route where the target real-scene spot is located; determine the positional relationship between the user and the target location according to the movement direction and the projection point; and prompt the user with the positional relationship. In some embodiments, the reminder module 450 is further configured to: calculate the distance between the user's current location and the target real-scene spot to determine the user's movement progress, and remind the user of the movement progress.
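  • As an illustration only, the five modules above can be summarized in the following Python skeleton; the class and method names are assumptions chosen to mirror the module descriptions, not the actual implementation.

      class RealSceneImageSystem:
          """Skeleton mirroring the modules of Fig. 4 (names are illustrative)."""

          def acquire_candidates(self, target):
              """Acquisition module 410: candidate real-scene spots for a target location."""
              raise NotImplementedError

          def judge_preset_condition(self, target, movement_direction, candidate):
              """Judgment module 420: does the candidate satisfy the preset condition?"""
              raise NotImplementedError

          def determine_target_spot(self, satisfying_candidates):
              """Determination module 430: pick the target real-scene spot."""
              raise NotImplementedError

          def display(self, target_spot, target):
              """Display module 440: fetch and show the real-scene image."""
              raise NotImplementedError

          def remind(self, current_location, movement_direction, target):
              """Reminder module 450: prompt the user with the positional relationship."""
              raise NotImplementedError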
  • The system and its modules shown in Fig. 4 can be implemented in various ways.
  • the system and its modules may be implemented by hardware, software, or a combination of software and hardware.
  • the hardware part can be implemented using dedicated logic;
  • the software part can be stored in a memory and executed by an appropriate instruction execution system, such as a microprocessor or dedicated design hardware.
  • For example, processor control codes may be provided on a carrier medium such as a disk, CD, or DVD-ROM, on a programmable memory such as read-only memory (firmware), or on a data carrier such as an optical or electronic signal carrier.
  • The system and its modules in this specification can be implemented not only by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (for example, firmware).
  • It should be noted that the acquisition module 410, the judgment module 420, the determination module 430, the display module 440, and the reminder module 450 disclosed in Fig. 4 may be different modules in the same system, or the functions of two of the above modules may be implemented by one module.
  • The modules in the system that provides users with real-scene images may share one storage module, or each module may have its own storage module. Such variations are all within the protection scope of this specification.
  • Fig. 5 is an exemplary flowchart of a method for providing a user with a real-world image according to some embodiments of the present specification. As shown in FIG. 5, the process 500 includes the following steps. In some embodiments, the process 500 may be executed by a processing device (for example, the processing device 112).
  • Step 510: Obtain at least one candidate real-scene spot based on the target location. In some embodiments, step 510 may be performed by the acquisition module 410.
  • the target location may represent the location the user wants to reach.
  • the target location may indicate the destination that the user wants to reach through navigation.
  • the target location may indicate the passenger's boarding point (that is, the location where the driver picks up the passenger).
  • The user may be any user who uses a map or navigation, for example, a driver who provides services to passengers in shared mobility.
  • the target location may include target location information.
  • the location information may include, but is not limited to, name information and coordinate information.
  • The coordinate information may include latitude and longitude coordinate information, for example, GNSS (Global Navigation Satellite System) coordinates or GPS (Global Positioning System) coordinates.
  • It should be noted that processing based on the target location (for example, determining the distance from the user's current location, or determining the relationship between the user's movement direction and the candidate real-scene spots) is actually performed based on the target location information.
  • In this specification, certain nouns (for example, real-scene spots, candidate real-scene spots, the target real-scene spot, the user's current location, or real-scene images) may include the information related to that noun.
  • Operations involving such nouns are actually performed on the information related to them, and this will not be described in detail again below.
  • The target location can be obtained from the user's input at the terminal, read directly from a storage device (for example, the database 140), or obtained by calling the corresponding map service through an API interface. This embodiment does not limit the method of acquiring the target location.
  • A real-scene spot refers to an actual shooting location in the real environment (for example, a city, a business district, or a street).
  • Typically, a real-scene spot is a point on a road.
  • For example, a real-scene spot may be determined at fixed intervals (for example, every 5 meters) along a road, and real-scene shooting is performed there.
  • An image taken at a real-scene spot, or an image obtained after processing such an image, is called a real-scene image.
  • Images may be collected at the real-scene spot in all directions; that is, the real-scene image can be a 360° panoramic image, which may also be called a panorama.
  • For example, a real-scene image (or panorama) may be captured by rotating a full circle horizontally and vertically around the real-scene spot.
  • The real-scene image (or panorama) may be obtained in advance through continuous collection and processing over the road network by a panoramic collection vehicle.
  • For example, the real-scene image (or panorama) may use images collected by a panoramic camera at the corresponding real-scene spot.
  • The real-scene spots and their corresponding real-scene images may be obtained from a predetermined map service, for example, by calling the predetermined map service through an API interface.
  • The real-scene spots and their corresponding real-scene images may also be read directly from a storage device (for example, the database 140).
  • Alternatively, multiple real-scene spot coordinates and their corresponding panoramas may be embedded in a terminal or server and read directly from the terminal or server.
  • There are other ways to obtain the real-scene spots and their corresponding real-scene images, which are not limited in this embodiment.
  • At least one candidate real-scene spot related to the target location may be obtained, where "related to the target location" means that the spot's relationship with the target location meets a preset requirement.
  • The preset requirement includes, but is not limited to: the distance between the candidate real-scene spot and the target location is less than a threshold (for example, 5 meters or 10 meters), and/or there is no obstruction (for example, a building or construction site) between the candidate real-scene spot and the target location.
  • In some embodiments, the user's current information may also be considered when determining the candidate real-scene spots; it can be understood that the candidate real-scene spots may be real-scene spots related to both the target location and the user's current location.
  • Map matching refers to matching location information to the road network. Location information (e.g., coordinate information) obtained by positioning technology (e.g., GPS or GNSS positioning systems) may not fall on the road network when displayed on the map (for example, it may fall in a house beside the road or in a pond). Likewise, when a predetermined map service is called through an API interface to obtain the candidate real-scene spot coordinates returned according to the target location, the coordinates returned by the map service may deviate from the road network.
  • The obtained candidate real-scene spot may be a real-scene spot obtained after correction. Specifically: at least one to-be-corrected candidate real-scene spot is obtained based on the target location, and a correction algorithm is used to correct the at least one to-be-corrected candidate real-scene spot to obtain the at least one candidate real-scene spot.
  • A to-be-corrected candidate real-scene spot refers to a real-scene spot that is obtained based on positioning technology and whose relationship with the target location meets the above-mentioned preset requirement.
  • The correction algorithm may refer to an algorithm that associates the to-be-corrected candidate real-scene spot with the road network.
  • For example, the correction algorithm can be used to associate the GNSS or GPS coordinates returned by the map service with the road network of the map, that is, to convert the coordinate sequence sampled by GNSS or GPS into a road-network coordinate sequence, so as to correct the coordinates returned by the map service.
  • In some embodiments, the correction algorithm may be an algorithm that associates the to-be-corrected candidate real-scene spot with the nearest route.
  • For example, the to-be-corrected candidate real-scene spot is projected directly onto the nearest route, and the projection point is used as the corrected real-scene spot (i.e., the candidate real-scene spot), as in the sketch after this list.
  • A route is a component of the road network in the map; a route can be regarded as a road in the map.
  • The correction algorithm may be a hidden Markov model (HMM), the ST-Matching algorithm, the IVMM (interactive-voting based map matching) algorithm, etc.
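  • As a hedged illustration of the simplest correction described above (projecting the to-be-corrected spot onto the nearest route), the following Python sketch works in a local planar approximation of the coordinates; treating coordinates as planar is an assumption that holds only over short distances, and this is not the HMM, ST-Matching, or IVMM algorithm.

      from typing import List, Tuple

      Point = Tuple[float, float]  # (x, y) in a local planar approximation

      def project_onto_segment(p: Point, a: Point, b: Point) -> Point:
          """Orthogonal projection of p onto segment ab, clamped to the segment ends."""
          dx, dy = b[0] - a[0], b[1] - a[1]
          seg_len_sq = dx * dx + dy * dy
          if seg_len_sq == 0.0:
              return a  # degenerate segment: a and b coincide
          t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / seg_len_sq
          t = max(0.0, min(1.0, t))  # clamp so the projection stays on the segment
          return (a[0] + t * dx, a[1] + t * dy)

      def correct_candidate(spot: Point, route: List[Point]) -> Point:
          """Use the nearest projection onto the route polyline as the corrected spot."""
          best, best_d2 = route[0], float("inf")
          for a, b in zip(route, route[1:]):
              q = project_onto_segment(spot, a, b)
              d2 = (q[0] - spot[0]) ** 2 + (q[1] - spot[1]) ** 2
              if d2 < best_d2:
                  best, best_d2 = q, d2
          return best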
  • In some embodiments, before step 510, the method may also include judging whether a trigger condition is satisfied based on the user's current location and the target location; step 510 above is executed only when the trigger condition is satisfied.
  • the current location of the user can be obtained through the user terminal. It is understandable that the current location of the user may also be referred to as the current location of the terminal.
  • the terminal can be a driver’s mobile terminal or other vehicle-mounted equipment.
  • The current location of the terminal can be determined according to the positioning device of the terminal or the positioning device on the vehicle, or obtained by calling the corresponding map service through an API interface; this embodiment does not limit this.
  • the step of determining whether the trigger condition is satisfied may include: determining whether the distance between the user's current position and the target position is less than or equal to a first threshold (for example, 50 meters).
  • The distance may be a route distance (i.e., the length of the route in the road network on the map) or a straight-line distance; this embodiment is not limited in this respect.
  • The first threshold can be set according to the actual application scenario, which is not limited in this embodiment.
  • If the judgment result is yes, the above step 510 and subsequent steps are performed to determine the target real-scene spot corresponding to the target location; if the judgment result is no, judging whether the trigger condition is satisfied continues based on the distance between the current location and the target location.
  • When the target location is close (for example, less than 50 m away), navigation usually enters the light-navigation stage; that is, the ratio of the map to the actual distance is adjusted, and the navigation interface is controlled to display information such as the current location of the user (for example, the driver), the destination, the navigation route, and the estimated time of arrival.
  • In some embodiments, the first threshold may be set to the distance between the user's current location and the target location at which light navigation is switched on; that is, in response to the navigation interface entering the light-navigation stage, the coordinates of the target real-scene spot corresponding to the target location are determined according to the user's movement direction. A minimal sketch of the trigger check follows.
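  • The sketch below assumes the straight-line-distance variant of the trigger condition; the 50-meter default mirrors the example first threshold, and the great-circle distance is computed with the haversine formula.

      import math

      Point = tuple  # (longitude, latitude) in degrees; an assumed convention

      def haversine_m(p1: Point, p2: Point) -> float:
          """Great-circle distance in meters between two (lon, lat) points."""
          R = 6371000.0  # mean Earth radius in meters
          lon1, lat1 = map(math.radians, p1)
          lon2, lat2 = map(math.radians, p2)
          a = (math.sin((lat2 - lat1) / 2) ** 2
               + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
          return 2 * R * math.asin(math.sqrt(a))

      def trigger_satisfied(current: Point, target: Point,
                            first_threshold_m: float = 50.0) -> bool:
          """Trigger condition: distance to the target is at most the first threshold."""
          return haversine_m(current, target) <= first_threshold_m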
  • Step 520: Based on the target location, the user's movement direction, and the at least one candidate real-scene spot, determine whether a preset condition is satisfied. In some embodiments, step 520 may be performed by the judgment module 420.
  • The preset condition may refer to the condition under which, when the user arrives at the candidate real-scene spot, the user (body or head) can observe the target location by rotating within plus or minus 90° (with the user's movement direction taken as 0°).
  • In some embodiments, the preset condition may also be that the candidate real-scene spot is located behind the target location (including directly behind or diagonally behind) and in front of the user's current location.
  • In some embodiments, the preset condition may be that the candidate real-scene spot is located in the target route segment, where the target route segment may be determined based on the projection point of the target location on the route where the user is located and the user's current location.
  • Here, the target route segment refers to the route between the user's current location and the projection point of the target location.
  • The projection can be made by drawing a perpendicular to the route through the target location; the intersection of the perpendicular and the route is the projection point.
  • In some embodiments, the target route segment may also be determined based on the projection point of the target location on the route where the user is located, the user's current location, and the movement direction.
  • In that case, the target route segment refers to the portion of the route between the user's current location and the projection point that lies ahead in the movement direction.
  • As shown in Fig. 6, L1 represents the target location, L2 represents the user's current location, arrow c represents the user's movement direction, A1, A2, and A3 represent candidate real-scene spots, and P represents the projection point of the target location L1 on the user's route. In this example, the target route segment is the road segment between the user's current location L2 and the projection point P. A planar sketch of this segment-membership check follows.
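  • Reusing project_onto_segment from the correction sketch above, the following hedged Python sketch checks whether a candidate real-scene spot lies on the target route segment between the user's current location L2 and the projection point P of the target location L1, as in Fig. 6; planar coordinates are again an assumption.

      def within_target_segment(candidate, current, route_a, route_b, target):
          """True if candidate lies between the user and the projection point P of the
          target onto the route segment (route_a, route_b), measured along the route."""
          p = project_onto_segment(target, route_a, route_b)  # projection point P
          dx, dy = route_b[0] - route_a[0], route_b[1] - route_a[1]

          def along(q):
              # Signed position of q along the route direction (unnormalized).
              return (q[0] - route_a[0]) * dx + (q[1] - route_a[1]) * dy

          lo, hi = sorted((along(current), along(p)))
          return lo <= along(candidate) <= hi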
  • In some embodiments, the method of determining whether the preset condition is satisfied may also include: determining the angle between the user's movement direction and a first direction determined based on the candidate real-scene spot and the target location, for example, determining whether the relationship between this angle and a threshold angle meets the condition.
  • In some embodiments, the method of determining whether the preset condition is satisfied may also include: judging, in order of the distances between the at least one candidate real-scene spot and the target location (for example, from small to large), whether the preset condition is met based on the user's movement direction, the target location, and the candidate real-scene spot; when the judgment result is "satisfied," stopping the judgment; otherwise, judging another candidate real-scene spot.
  • The candidate real-scene spot that meets the preset condition, referred to as the "candidate real-scene spot for which the preset condition is satisfied," can be further determined as the target real-scene spot.
  • Judging in this order can avoid some unnecessary judgments, which helps speed up the determination of the target real-scene spot, as sketched below.
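  • The near-to-far early-stopping procedure described above might be sketched as follows; meets_preset_condition is passed in (one possible form is sketched under Fig. 8 below), and planar coordinates are an assumption.

      from typing import Callable, List, Optional, Tuple

      Point = Tuple[float, float]

      def determine_target_spot(
          candidates: List[Point],
          target: Point,
          movement_direction_deg: float,
          meets_preset_condition: Callable[[Point, Point, float], bool],
      ) -> Optional[Point]:
          """Judge candidates in ascending order of distance to the target; stop at the first hit."""
          ordered = sorted(candidates,
                           key=lambda c: (c[0] - target[0]) ** 2 + (c[1] - target[1]) ** 2)
          for c in ordered:
              if meets_preset_condition(c, target, movement_direction_deg):
                  return c  # the first satisfying candidate is also the closest one
          return None  # no candidate satisfies the preset condition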
  • Step 530: In response to the preset condition being met, determine the target real-scene spot from the candidate real-scene spots for which the preset condition is satisfied. In some embodiments, step 530 may be performed by the determination module 430.
  • There may be one or more candidate real-scene spots among the at least one candidate real-scene spot.
  • That is, there may be one or more candidate real-scene spots whose relationship with the user's movement direction and the target location meets the preset condition, i.e., for which the preset condition is satisfied.
  • When there is only one candidate real-scene spot for which the preset condition is met, it can be directly used as the target real-scene spot.
  • When there are multiple such candidates, the target real-scene spot can be further filtered out; for example, one can be selected at random from the candidate real-scene spots for which the preset condition is met and used as the target real-scene spot.
  • Alternatively, the candidate real-scene spots for which the preset condition is met can be sorted by distance, and the one closest to the target location can be selected as the target real-scene spot.
  • In scenarios where both passengers and drivers are shown the real-scene image of the target real-scene spot, determining the candidate real-scene spot closest to the target location as the target real-scene spot makes it as easy as possible for passengers to confirm the pick-up point while allowing the driver to drive safely.
  • Step 540: Based on the target real-scene spot, determine the real-scene image corresponding to the target location, and display the real-scene image on the user-related navigation interface. In some embodiments, step 540 may be performed by the display module 440.
  • A real-scene image is obtained by shooting at a real-scene spot; that is, each real-scene spot has a corresponding real-scene image.
  • In some embodiments, the display module 440 may obtain the real-scene image corresponding to the target real-scene spot (see step 510 for details). For example, a predetermined map service is called through an API interface, and the real-scene image (or panorama) returned by the predetermined map service according to the coordinates of the target real-scene spot is obtained.
  • The returned real-scene image (or panorama) may be an image or panorama collected in all directions with the target real-scene spot as the collection point. Further, the display module 440 acquires the real-scene image corresponding to the target location based on the real-scene image of the target real-scene spot.
  • In some embodiments, the display module 440 may directly use the real-scene image of the target real-scene spot as the real-scene image corresponding to the target location.
  • In some embodiments, the display module 440 can process the real-scene image and use the processed image as the real-scene image corresponding to the target location.
  • Such processing includes, but is not limited to, image processing methods such as zooming in, zooming out, adjusting resolution, adjusting saturation, adjusting brightness, or cropping.
  • In some embodiments, the display module 440 may use the real-scene image of the target real-scene spot at a specific viewing angle, or a partial image obtained after processing that image, as the real-scene image corresponding to the target location. In some embodiments, the display module 440 may determine the specific viewing angle according to the target location, the target real-scene spot, and the user's movement direction, and use the real-scene image of the target real-scene spot at that viewing angle as the real-scene image corresponding to the target location. For example, the angle between a first vector and a second vector is used to determine the specific viewing angle, where the first vector is determined by the candidate real-scene spot and the target location, and the second vector is determined by the user's movement direction. In this embodiment, determining the viewing angle from the angle between the first vector and the second vector allows the driver to more accurately observe the street scene around the target location in the real-scene image at that viewing angle.
  • The real-scene image displayed on the navigation interface may show all or part of the scene at the target real-scene spot; that is, all or part of the real-scene image of the candidate real-scene spot is displayed.
  • In some embodiments, the real-scene image corresponding to the target location may be preprocessed, for example, by viewing-angle adjustment, adjusting resolution, adjusting brightness, adjusting saturation, or zooming in or out.
  • For viewing-angle adjustment, please refer to Fig. 10 of this specification and its related content; for zooming in or out, refer to Fig. 11 of this specification and its related content.
  • In some embodiments, the user's operating instructions can be received, and the real-scene image displayed to the user can be adjusted according to those instructions, for example, instructions for changing the real-scene image or its display.
  • The user's movement direction can be updated in real time or at predetermined time intervals.
  • Accordingly, the target real-scene spot, the real-scene image corresponding to the target location, and the like may also be updated. It is understandable that the real-scene image displayed on the navigation interface can change accordingly (for example, following the user's movement in real time).
  • By showing the user the real-scene image corresponding to the target location through the navigation interface, the user (for example, the driver) is assisted in finding the target location.
  • Fig. 7 is another exemplary flowchart of a method for providing a user with a real-scene image according to some embodiments of the present application. As shown in Fig. 7, the process 700 includes the following steps. In some embodiments, the process 700 may be executed by a processing device (for example, the processing device 112).
  • Step 710: Acquire the user's current location and target location. In some embodiments, step 710 may be performed by the acquisition module 410.
  • For obtaining the user's current location and target location, see step 510 and related descriptions.
  • Step 720: In response to the relationship between the user's current location and the target location satisfying the trigger condition, determine the target real-scene spot coordinates corresponding to the target location according to the user's movement direction. In some embodiments, step 720 may be performed by the determination module 430.
  • For more details of the trigger condition, refer to step 510 in Fig. 5, which will not be repeated here.
  • In some embodiments, step 720 may specifically include: in response to the relationship between the user's current location and the target location satisfying the trigger condition, obtaining candidate real-scene spot coordinates around the target location; and, in response to the relationship among the candidate real-scene spot coordinates, the target location, and the user's movement direction satisfying the preset condition, determining the candidate real-scene spot coordinates as the target real-scene spot coordinates corresponding to the target location.
  • In some embodiments, the candidate real-scene spot coordinates around the target location are obtained sequentially according to the distance between each candidate real-scene spot and the target location, until the relationship among the obtained candidate coordinates, the target location, and the user's movement direction satisfies the preset condition. For more details about the preset condition, refer to step 520.
  • In some embodiments, obtaining the coordinates of the candidate real-scene spots around the target location includes: obtaining the coordinates returned by the predetermined map service according to the target location, and correcting the returned coordinates according to a predetermined correction model to obtain the coordinates of the candidate real-scene spots around the target location.
  • For more details about the correction model, refer to step 510 and its related description.
  • In some embodiments, whether the target location is on the left or right side of the terminal may be determined based on the user's movement direction, the target location, and the coordinates of the target real-scene spot, which helps improve the efficiency with which the driver finds the target location based on the real-scene image corresponding to those coordinates. See Fig. 13 and related descriptions for details.
  • In some embodiments, whether the candidate real-scene spot is in front of the user is determined according to the terminal's current location, the user's movement direction, and the candidate real-scene spot coordinates. If the candidate real-scene spot is in front of the user, it is judged whether the preset condition is met; if the candidate real-scene spot is behind the user, it can be concluded that the vehicle has driven past the target location, and the navigation can remind the user that the target location has been passed, as sketched below.
  • In some embodiments, in response to the user's current location meeting the trigger condition, the candidate real-scene spot coordinates closest to the target location are obtained, and whether the candidate real-scene spot is in front of the user's movement direction is determined according to the user's current location, the user's movement direction, and the candidate real-scene spot coordinates; when the candidate real-scene spot is in front of the movement direction, it is judged whether the preset condition is met.
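  • A hedged sketch of the in-front/behind judgment described above: with local east/north planar coordinates and the movement direction given as a compass bearing (both assumed conventions), the sign of a dot product distinguishes a candidate ahead of the user from one behind.

      import math

      def is_in_front(candidate, current, movement_direction_deg):
          """True if the candidate spot lies ahead of the user's movement direction.

          Coordinates are local (east, north) planar; movement_direction_deg is a
          compass bearing (0 = north, 90 = east). Both conventions are assumptions.
          """
          theta = math.radians(movement_direction_deg)
          heading = (math.sin(theta), math.cos(theta))  # unit vector of movement
          to_candidate = (candidate[0] - current[0], candidate[1] - current[1])
          # Positive dot product: candidate ahead; negative: the user has driven past it.
          return heading[0] * to_candidate[0] + heading[1] * to_candidate[1] > 0.0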
  • Step 730: Determine the real-scene image corresponding to the coordinates of the target real-scene spot. In some embodiments, step 730 may be performed by the display module 440.
  • For specific details of determining the real-scene image corresponding to the coordinates of the target real-scene spot, refer to step 540 and related descriptions.
  • Step 740: Send the real-scene image to the navigation interface for display. In some embodiments, step 740 may be performed by the display module 440.
  • The navigation interface may be the navigation interface of the user's mobile terminal or of another in-vehicle device, and the real-scene image may be displayed in full screen or in part, which is not limited in this embodiment. Refer to Fig. 10 and Fig. 11 for more details on displaying the real-scene image on the navigation interface.
  • Fig. 8 is an exemplary flowchart for determining whether a preset condition is satisfied according to some embodiments of the present specification. As shown in FIG. 8, the process 800 includes the following steps. In some embodiments, the process 800 may be executed by the judgment module 420 in the processing device (for example, the processing device 112).
  • Step 810: Determine a first direction based on the candidate real-scene spot and the target location.
  • the starting point of the first direction may be a candidate real scenic spot or a target location.
  • Step 820: Determine whether the preset condition is satisfied according to the angle between the first direction and the user's movement direction.
  • the preset condition is related to the included angle.
  • The preset condition can be set according to the starting point of the first direction. For example, when the starting point of the first direction is the candidate real-scene spot, the preset condition is that the angle between the first direction and the user's movement direction is less than or equal to the angle threshold.
  • In this case, the angle threshold is equal to or less than 90° (e.g., 90°, 60°, or 30°).
  • When the starting point of the first direction is the target location, the preset condition is that the angle between the first direction and the user's movement direction is greater than or equal to the angle threshold.
  • In this case, the angle threshold is equal to or greater than 90° (e.g., 90°, 100°, or 130°).
  • In some embodiments, the preset condition may be determined according to the vector starting points or end points of a first vector and a second vector.
  • The first vector is determined by the candidate real-scene spot and the target location, and the second vector is determined by the user's movement direction.
  • This embodiment does not limit the starting points of the first vector and the second vector; correspondingly, the angle threshold may be determined according to the vector starting points or end points of the first vector and the second vector.
  • the movement direction of the user may be the movement direction of the user on each road segment in the navigation route.
  • In the example of Fig. 9, the preset condition is that the angle between the first vector and the second vector is less than or equal to the angle threshold, where the starting point of the first vector is the coordinates of the candidate real-scene spot, the starting point of the second vector is the user's current location, and the angle threshold is 90°.
  • As shown in Fig. 9, the user's current location is L2 and the target location is L1.
  • When the distance between the user's current location L2 and the target location L1 is less than the first threshold, the coordinates of the candidate real-scene spot A1 closest to the target location L1 are acquired.
  • In order of distance from the target location, from near to far, the candidates are A1, A2, and A3.
  • According to the user's current location L2, the user's movement direction c, and the coordinates of the candidate real-scene spot A1, it can be determined that A1 is located in front of the terminal's movement direction. Then the first vector a1 is determined according to the coordinates of A1 and the target location L1, the second vector b is determined according to the user's movement direction c, and the angle between the first vector a1 and the second vector b is calculated. This angle is greater than 90°, so when the user (for example, the driver) moves to the candidate real-scene spot A1, he would need to look obliquely backward to see the target location L1 and the surrounding street scene, which would affect safety.
  • Therefore, the candidate real-scene spot A1 cannot be used as the target real-scene spot. Next, the coordinates of the candidate real-scene spot A2, the next closest to the target location L1, are obtained, and according to the user's current location L2, the user's movement direction, and the coordinates of A2, it can be determined that A2 is in front of the user's movement direction.
  • The first vector a2 is then determined according to the coordinates of A2 and the target location L1, the second vector b is determined according to the user's movement direction, and the angle between the first vector a2 and the second vector b is calculated. This angle is less than 90°, so when the user moves to the candidate real-scene spot A2, looking forward allows the target location L1 and the surrounding street scene to be seen. Therefore, the candidate real-scene spot A2 can be determined as the target real-scene spot. A sketch of this angle test follows.
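  • The angle test of Figs. 8 and 9 might be sketched as follows, with the first vector pointing from the candidate real-scene spot to the target location, the second vector given by the user's movement direction, and a 90° threshold; local east/north coordinates and a compass bearing are assumed conventions.

      import math

      def preset_condition_met(candidate, target, movement_direction_deg,
                               angle_threshold_deg=90.0):
          """Angle between (candidate -> target) and the movement direction <= threshold."""
          first = (target[0] - candidate[0], target[1] - candidate[1])  # first vector
          theta = math.radians(movement_direction_deg)
          second = (math.sin(theta), math.cos(theta))  # second vector (unit heading)
          norm = math.hypot(first[0], first[1])
          if norm == 0.0:
              return True  # candidate coincides with the target location
          cos_angle = (first[0] * second[0] + first[1] * second[1]) / norm
          angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
          return angle <= angle_threshold_deg

  • In the Fig. 9 example, A1 would yield an angle above 90° (rejected), while A2 would yield an angle below 90° (accepted).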
  • Fig. 10 is an exemplary flow chart of displaying a real scene according to some embodiments of the present specification. As shown in FIG. 10, the process 1000 includes the following steps. In some embodiments, the process 1000 may be executed by the display module 440 in the processing device (for example, the processing device 112).
  • Step 1010: Determine the display direction and/or included angle of the real-scene image based on the target location and the target real scenic spot.
  • The display of a real-scene image involves two angles: the horizontal viewing angle and the pitch angle.
  • The horizontal viewing angle is the angle at which the image is displayed within a two-dimensional plane established on the horizontal ground; for example, in that coordinate system, true north is 0° and true south is 180°. The horizontal viewing angle can also be called the horizontal display direction.
  • The pitch angle is the angle relative to the horizontal ground.
  • Within their visual range (in particular, while driving), users can generally observe only a horizontal span of 0-180°. The real-scene image of the target real scenic spot can be a 360° panoramic image; if that panorama is taken as the image corresponding to the target location, it can be displayed at a specific direction and/or angle, making it convenient for the user to confirm the target location by comparing the displayed image with the actual street scene or environment. A sketch of selecting such a view window follows.
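For a 360° panorama stored as an equirectangular image, displaying "at a direction and angle" amounts to selecting a band of pixel columns. A rough illustration under the assumption that column 0 corresponds to the panorama's initial direction and that the image wraps horizontally; this is an illustration, not the patent's rendering pipeline:

```python
def panorama_window(pano_width_px, heading_deg, fov_deg=90.0):
    """Return the pixel-column ranges of an equirectangular panorama that
    cover `fov_deg` of horizontal view centred on `heading_deg`.
    Column 0 is assumed to correspond to the panorama's initial direction;
    the image wraps around horizontally, so the window may split in two."""
    px_per_deg = pano_width_px / 360.0
    start = (heading_deg - fov_deg / 2.0) % 360.0
    end = (heading_deg + fov_deg / 2.0) % 360.0
    c0, c1 = int(start * px_per_deg), int(end * px_per_deg)
    if c0 <= c1:
        return [(c0, c1)]                      # one contiguous strip
    return [(c0, pano_width_px), (0, c1)]      # strip wraps past the seam

# e.g. an 8192-px-wide panorama, view centred 10° west of north:
print(panorama_window(8192, heading_deg=350.0))
```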
  • In some embodiments, the direction whose starting point is the target real scenic spot and whose end point is the target location is used as the horizontal direction in which the image corresponding to the target location is displayed on the navigation interface; for example, the image's initial direction is rotated to this horizontal direction before display.
  • The initial direction is the default or preset direction (for example, true north) in which the map service displays a real-scene image; it can be set according to the specific application scenario, which this embodiment does not limit.
  • In some embodiments, the angle between that spot-to-target direction and the image's initial direction is used as the horizontal viewing angle at which the image corresponding to the target location is displayed on the navigation interface; for example, the image is displayed after its initial direction is rotated by that angle.
  • The pitch angle (or vertical direction) may be a default value, for example 0°, or may be determined from a user selection.
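The horizontal display direction is then just the bearing of the spot-to-target vector measured against the initial direction. A hedged sketch in the same planar frame, taking +y as the assumed initial (true-north) direction:

```python
import math

def display_heading(spot, target, initial_dir=(0.0, 1.0)):
    """Heading in degrees, clockwise from `initial_dir`, of the direction
    whose starting point is the target real scenic spot and whose end point
    is the target location; this is the rotation to apply to the panorama's
    initial direction before display."""
    d = (target[0] - spot[0], target[1] - spot[1])
    # atan2 of the 2-D cross and dot products gives a signed counter-clockwise
    # angle from initial_dir to d; negate it and normalise into [0, 360).
    cross = initial_dir[0] * d[1] - initial_dir[1] * d[0]
    dot = initial_dir[0] * d[0] + initial_dir[1] * d[1]
    return (-math.degrees(math.atan2(cross, dot))) % 360.0

# A spot due south of the target should face it at heading 0° (north):
print(display_heading((0, 0), (0, 10)))   # -> 0.0
# A spot due west of the target faces east, i.e. heading 90°:
print(display_heading((0, 0), (10, 0)))   # -> 90.0
```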
  • Step 1020: Based on the display direction and/or included angle, display the real-scene image on the navigation interface.
  • Before the image corresponding to the target location is displayed, it may be processed; the processing may include one or a combination of scaling, cropping, and adjusting resolution, brightness, or saturation.
  • The display module 440 may adjust the angle or direction in real time as the relative position between the user's current location and the target location changes; during such adjustment, the target location should remain within the displayed content. The display module 440 may also adjust the angle or direction in response to the user's feedback instructions or operations, for example rotating or moving the displayed image, or issuing an explicit angle or direction adjustment instruction.
  • Fig. 11 is another exemplary flow chart of displaying a real scene according to some embodiments of the present specification. As shown in FIG. 11, the process 1100 includes the following steps. In some embodiments, the process 1100 may be executed by the display module 440 in the processing device (for example, the processing device 112).
  • Step 1110: Based on the user's current location and the target location, determine the reduction or enlargement parameter of the real-scene image.
  • The display module 440 may determine this parameter from the distance between the user's current position and the target position: the reduction parameter is proportional to the distance, and the enlargement parameter is inversely proportional to it.
  • The reduction or enlargement parameter is defined relative to the size of the original image and may be a multiple, a ratio, or the like. The parameter for a specific distance may be determined by a scaling algorithm.
  • The display module 440 may instead determine the parameter from the distance between the user's current position and the target real scenic spot, in a manner analogous to the target-position case.
  • The displayed image is scaled to match the user's needs at different stages of navigation, as sketched below. The farther the user is from the target location, the greater the need to view the route, road conditions, and other information on the navigation page, so the displayed image can be reduced as long as it remains clearly legible; the closer the user is to the target location, the greater the need for the real-scene image itself, so the displayed image can be enlarged.
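One simple way to realize "reduction proportional to distance, enlargement inversely proportional" is a clamped linear interpolation between two tuning distances. A sketch in which all four constants are illustrative assumptions, not values from the patent:

```python
def scale_parameter(distance_m, near_m=50.0, far_m=500.0,
                    min_scale=0.5, max_scale=2.0):
    """Map the user-to-target distance to a display scale factor:
    smaller (thumbnail) when far from the target, larger when near."""
    # Clamp the distance into the [near_m, far_m] band first.
    d = max(near_m, min(far_m, distance_m))
    # Linear interpolation: far_m -> min_scale, near_m -> max_scale.
    t = (far_m - d) / (far_m - near_m)
    return min_scale + t * (max_scale - min_scale)

for d in (600, 275, 50):
    print(d, round(scale_parameter(d), 2))   # -> 0.5, 1.25, 2.0
```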
  • Step 1120: Based on the reduction or enlargement parameter, display the real-scene image on the navigation interface.
  • After the parameter is determined, the image corresponding to the target location may be displayed on the navigation interface. For example, the display module 440 may scale the image in real time during display and then show the scaled image, possibly at a given horizontal viewing angle and pitch angle.
  • Alternatively, the display module 440 may store, in a storage device (for example, the database 140), images pre-scaled for the parameters at different distances, and during display directly read and show the image matching the current distance.
  • The size of the image can also be adjusted according to the user's operation instructions, so that the size shown to the user matches the user's needs and preserves the user experience.
  • For example, as shown in Figs. 12a and 12b, the user's current location is L2 and the target location is L1. When L2 is far from L1 (Fig. 12a), the image corresponding to the target location can be reduced and the reduced image shown to the user, making the map information easier to view. Conversely, when L2 is near L1 (Fig. 12b), the image can be enlarged and the enlarged image shown to the user, making the target location easier to confirm.
  • The display module may also select image-processing operations according to the reduction or enlargement parameter, to keep the image displayed on the navigation interface clear. For example, if the magnification exceeds a threshold (for example, 1x), the image is sharpened so that its outlines stay crisp.
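The choice of post-processing can likewise be keyed to the scale parameter. A small sketch; only the 1x sharpening threshold comes from the example above, the remaining choices are assumed:

```python
def processing_steps(scale, sharpen_threshold=1.0):
    """Pick image-processing operations for a given display scale."""
    steps = ["resize"]
    if scale > sharpen_threshold:
        steps.append("sharpen")        # keep enlarged outlines crisp
    else:
        steps.append("antialias")      # avoid artifacts when shrinking
    return steps

print(processing_steps(2.0))   # ['resize', 'sharpen']
print(processing_steps(0.5))   # ['resize', 'antialias']
```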
  • Fig. 13 is an exemplary flow chart for prompting a user with a location relationship according to some embodiments of the present specification. As shown in FIG. 13, the process 1300 includes the following steps. In some embodiments, the process 1300 may be executed by the reminder module 450 in the processing device (for example, the processing device 112).
  • Step 1310: Determine the projection point of the target location on the route where the target real scenic spot is located.
  • The corresponding route may be a route related to the target location, for example the route in the road network closest to the target location, or the route where the target real scenic spot is located; in some embodiments it may be the route where the user is located.
  • A perpendicular is drawn from the target location to the corresponding route, and the foot of the perpendicular is taken as the projection point of the target location on that route. For example, in response to the distance between the terminal's current position and the target location being less than the first threshold, the projection point of the target location on the corresponding route is determined.
  • Step 1320: Determine the positional relationship between the user and the target location according to the movement direction and the projection point.
  • Taking the movement direction as 0°, an angle rotated clockwise by 0-180° is treated as positive, and an angle rotated counterclockwise by 0-180° as negative; the left-right relationship between the target location and the user can then be determined from the sign of the angle between the second direction and the movement direction, as sketched below.
  • The second direction may be determined from the target location and the projection point, or from the target location and the user's current position. For example, if the second direction runs from the projection point (as starting point) to the target location (as end point), then a positive angle between the second direction and the movement direction means the target location is on the user's right, and a negative angle means it is on the left.
  • The target location may fall exactly on the route the user is currently driving; when the angle between this direction and the movement direction is 0°, the target location is in the middle of the road. In that case, to ensure driving and traffic safety, whether the target location is reported as being on the user's left or right can follow local traffic regulations; in China, for example, where vehicles drive and park on the right, the reported relationship is that the target location is on the user's right.
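The sign test described above is a two-dimensional cross product. A sketch in the same planar frame (y axis up), with the traffic-rule fallback for a target lying on the route itself; the function name and the `keep_right` flag are our own, not identifiers from the patent:

```python
def side_of_user(projection, target, movement_dir, keep_right=True):
    """Sign of the angle from the movement direction to the second direction
    (projection point -> target location): a clockwise rotation of 0-180°
    counts as positive, meaning the target is on the user's right.
    `keep_right` resolves the on-the-road case per local traffic rules."""
    second = (target[0] - projection[0], target[1] - projection[1])
    cross = movement_dir[0] * second[1] - movement_dir[1] * second[0]
    if cross < 0:
        return "right"     # clockwise rotation: positive angle
    if cross > 0:
        return "left"      # counter-clockwise rotation: negative angle
    return "right" if keep_right else "left"   # target on the route itself

# Fig. 14 situation: moving east, target due south of its projection point.
print(side_of_user((0, 0), (0, -5), (1, 0)))   # -> 'right'
```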
  • Step 1330: Remind the user of the positional relationship.
  • Reminding the user may include controlling the terminal to display the relationship, broadcasting it (for example, by voice), or sending a message.
  • For example, as shown in Fig. 14, the target location is L1, the user's current position is L2, and the user's movement direction is c. A perpendicular is drawn from the target location L1 to the corresponding route to obtain the projection point P.
  • From L1, L2, and the movement direction c, it can be determined that, relative to the current movement direction, the target location L1 lies on the user's right side (that is, the right side of the vehicle); equivalently, the angle γ between the movement direction c and the direction whose starting point is the projection point P and whose end point is the target location L1 is 90° clockwise, so L1 lies on the user's right.
  • The relationship "target location L1 on the user's right" is then voice-broadcast, reminding the user to observe the street view on the right to confirm the target location.
  • The positional relationship may be displayed on the user's navigation interface or broadcast by voice, so that the driver can reach the target location accurately.
  • The reminder module 450 may calculate the distance between the user's current location and the target real scenic spot, determine the user's movement progress, and remind the user of it. For example, the module may derive a movement progress bar from the distance between the user's current location and the coordinates of the target real scenic spot and send it to the navigation interface for display, optionally below the real-scene image. Prompting the driving progress on the navigation interface helps the driver arrive at the target real scenic spot accurately and, once there, identify the target location from the acquired image. Besides a progress bar, the reminder may also take other forms, such as a voice broadcast of the progress; this embodiment does not limit it.
  • As shown in Fig. 15, the navigation interface includes the user's current location 151, the target location 152, the target real scenic spot 153, the real-scene image 154, the user movement progress bar 155, and the navigation route 156.
  • In response to the distance between the user's current location and the target location being less than the first threshold, the coordinates of the target real scenic spot 153 are determined from the user's current position, the target location, and the user's movement direction, and the panoramic image corresponding to spot 153 is obtained. The viewing angle is determined from the first vector (defined by the spot's coordinates and the target location) and the second vector (corresponding to the movement direction), and the panorama's image at that viewing angle is sent to the navigation interface for display as the real-scene image 154, allowing the user to determine the target location more accurately from the observed image and improving task-processing efficiency.
  • After the image 154 is obtained, the distance between the user's current location 151 and the target real scenic spot 153 can be calculated and represented on the navigation interface as the movement progress bar 155, so that the user can time the observation of the image 154 by the progress bar and avoid driving past the pictured spot.
  • The reminder module 450 may also issue other prompts keyed to the user's movement progress; for example, when the progress reaches a certain threshold (for example, 90%), the user may be prompted in advance to slow down.
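Progress and its prompts can be sketched from the remaining straight-line distance, assuming the distance at the moment the spot was determined was recorded as a baseline; the 90% threshold follows the example above, everything else is illustrative:

```python
import math

def movement_progress(current, spot, initial_distance_m):
    """Fraction of the approach to the target real scenic spot covered so far,
    based on the remaining distance; `initial_distance_m` is assumed to have
    been recorded when the spot was first determined."""
    remaining = math.dist(current, spot)
    return max(0.0, min(1.0, 1.0 - remaining / initial_distance_m))

def progress_prompt(progress, slow_down_at=0.9):
    """Illustrative reminders keyed to the progress bar."""
    if progress >= 1.0:
        return "You have reached the pictured spot"
    if progress >= slow_down_at:
        return "Approaching the pictured spot - consider slowing down"
    return None

p = movement_progress((90.0, 0.0), (100.0, 0.0), initial_distance_m=200.0)
print(round(p, 3), progress_prompt(p))   # 0.95 'Approaching ...'
```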
  • Fig. 16 is another exemplary flowchart of providing a user with a real-scene image according to some embodiments of the specification. As shown in Fig. 16, the process 1600 includes the following steps. In some embodiments, the process 1600 may be executed by a processing device (for example, the processing device 112).
  • Step S1: Acquire the user's current location and the target location.
  • Step S2: Calculate the distance between the user's current position and the target position; optionally, compute it from the coordinates of the two positions.
  • Step S3: Judge whether the distance between the user's current position and the target position is less than the first threshold. If it is, execute step S4 and/or step S14; otherwise, execute step S2 again. Optionally, until the distance falls below the first threshold, it is recalculated periodically.
  • Step S4: Call the predetermined map service through the API interface and obtain at least one coordinate returned by the service for the target location. In some embodiments, the service is called to obtain the coordinates of all candidate real scenic spots within a circle centered on the target location with a radius equal to the first threshold.
  • Step S5: From the at least one returned coordinate, determine the coordinate closest to the target location. Optionally, sort the returned coordinates by distance to the target location from near to far to obtain a coordinate sequence, and take the closest coordinate from that sequence.
  • Step S6: Correct the coordinate according to the correction model or correction algorithm to obtain the corresponding candidate real scenic spot coordinates.
  • Step S7: Determine the first vector from the candidate real scenic spot coordinates and the target position, and the second vector from the user's movement direction. Optionally, the starting point of the first vector is the candidate real scenic spot, and the starting point of the second vector is the user's current position or the candidate real scenic spot.
  • Step S8: Calculate the angle between the first vector and the second vector; optionally, determine its size by computing the cosine of the angle between the two vectors.
  • Step S9: Judge whether the angle between the first vector and the second vector is less than or equal to the angle threshold (optionally 90°). If it is, execute step S11; if it is greater, execute step S10 and then steps S6-S9 again.
  • Step S10: Acquire the next coordinate closest to the target position; optionally, take it from the coordinate sequence obtained above.
  • It should be understood that this embodiment is described using the example of obtaining, in one call, the coordinates of all candidate real scenic spots within the first-threshold radius of the target location. In other optional implementations, the predetermined map service may return one candidate's coordinates per call: when those coordinates do not satisfy the predetermined condition, the service is called again for the next candidate closest to the target location, until coordinates satisfying the condition are obtained.
  • Likewise, this embodiment corrects the candidate coordinates returned by the map service one at a time according to the correction model or algorithm: first the candidate closest to the target location is corrected, and only after its coordinates fail the condition is the next candidate corrected. In other implementations, all returned candidate coordinates may be corrected at once before iterating over steps S7-S9; this embodiment does not restrict the iteration order of the steps for obtaining the target real scenic spot coordinates. A sketch of the incremental variant follows.
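The incremental variant maps naturally onto a lazy iterator: candidates are ordered nearest-first and corrected only when actually inspected, so work stops at the first passing candidate. A sketch in which `map_service_coords` and `correct` stand in for the predetermined map service and the correction model, neither of which the patent specifies as code; it reuses `satisfies_preset_condition` from the earlier listing:

```python
import math

def candidate_coords_nearest_first(map_service_coords, correct, target):
    """Yield corrected candidate coordinates one at a time, nearest to the
    target location first, so that correction work stops as soon as a
    candidate satisfying the predetermined condition is found."""
    ordered = sorted(map_service_coords, key=lambda c: math.dist(c, target))
    for raw in ordered:
        yield correct(raw)   # correct lazily, one candidate per inspection

def first_passing(map_service_coords, target, movement_dir,
                  correct=lambda c: c):   # identity "correction" assumed here
    """Steps S5-S11 in miniature: return the first corrected candidate whose
    first/second-vector angle passes the test, or None if none does."""
    for coords in candidate_coords_nearest_first(map_service_coords,
                                                 correct, target):
        if satisfies_preset_condition(coords, target, movement_dir):
            return coords
    return None
```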
  • Step S11: Determine the candidate real scenic spot coordinates whose angle is less than or equal to the angle threshold as the target real scenic spot coordinates.
  • Step S12: Determine the real-scene image from the target real scenic spot coordinates and the corresponding angle. In some embodiments, the panoramic image corresponding to those coordinates is obtained, the viewing angle is determined from the angle between the first vector and the second vector, and the panorama's image at that viewing angle is taken as the real-scene image corresponding to the target real scenic spot coordinates.
  • Step S13: Send the real-scene image to the navigation interface for display.
  • The navigation interface may be that of the driver's mobile terminal or of other in-vehicle equipment; this embodiment does not limit it.
  • In some embodiments, the method for providing the user with a real-scene image further includes: calculating the distance between the user's current position and the target real scenic spot coordinates, determining the user's movement progress bar, and sending it to the navigation interface for display.
  • In response to the distance between the user's current location and the target location being less than the first threshold, steps S14-S16 are performed.
  • Steps S14-S16 determine, from the user's movement direction and the target location, the positional relationship between the user and the target location along the current direction of travel, and display or broadcast that relationship; steps S4-S13 determine the real-scene image around the target location and display it on the navigation interface.
  • It should be understood that, in some embodiments, the determination and broadcast of the positional relationship and the acquisition and display of the real-scene image may be performed at the same time or at different times; this embodiment does not limit this.
  • Step S14: Determine the projection point of the target location on the corresponding route.
  • Step S15: Determine the positional relationship between the user and the target location from the user's movement direction and the position of the projection point.
  • Step S16: Control the terminal to display or broadcast the positional relationship.
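Steps S1-S16 can be tied together in one update routine. A sketch wiring up the helpers from the earlier listings; the 50 m first threshold is an assumed value, and the target real scenic spot (which lies on the route) is used in place of the exact projection point for brevity:

```python
import math

FIRST_THRESHOLD_M = 50.0   # assumed value for the first threshold

def navigation_update(current, target, movement_dir, map_service_coords):
    """One pass over steps S1-S16, combining the helpers sketched earlier;
    returns what the navigation interface should show, or None while the
    user is still far from the target location."""
    if math.dist(current, target) >= FIRST_THRESHOLD_M:     # steps S2-S3
        return None                      # keep navigating normally
    spot = first_passing(map_service_coords, target, movement_dir)  # S4-S11
    if spot is None:
        return None                      # no suitable spot, no real-scene view
    return {
        "spot": spot,                                    # target real scenic spot
        "heading_deg": display_heading(spot, target),    # S12: view direction
        # S14-S16 reminder; the spot stands in for the projection point here.
        "side": side_of_user(spot, target, movement_dir),
    }
```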
  • It should be understood that some embodiments of this specification do not restrict who executes the method: the method steps of the above implementations can be embedded in an app on the driver's mobile terminal or on other in-vehicle devices, so that those devices execute the steps and thereby implement the embodiments of the present invention.
  • The method steps of the above implementations can also be stored in a corresponding server, so that the server's processor executes them and sends the resulting real-scene image and/or positional relationship to the user's terminal (or in-vehicle equipment) for display or broadcast.
  • Some embodiments of this specification provide methods for giving users real-scene images: by displaying images of the surroundings of the target location on the navigation interface, they improve the accuracy of target-location recognition and, in turn, task-processing efficiency.
  • The embodiments of this specification also provide a computer-readable storage medium storing computer instructions; when a computer reads those instructions, it performs the operations corresponding to the method of displaying real-scene images for the user described above.
  • A computer storage medium may contain a propagated data signal carrying computer program code, for example in baseband or as part of a carrier wave. The propagated signal may take multiple forms, including electromagnetic and optical forms, or a suitable combination. The computer storage medium may be any computer-readable medium other than a computer-readable storage medium that, connected to an instruction-execution system, apparatus, or device, enables communication, propagation, or transmission of the program for use. Program code residing on the medium may be transmitted over any suitable medium, including radio, cable, fiber-optic cable, RF, similar media, or any combination of the above.
  • The computer program code required for the operation of each part of this specification can be written in any one or more programming languages, including object-oriented languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic languages such as Python, Ruby, and Groovy; or other programming languages.
  • The program code can run entirely on the user's computer, as an independent software package on the user's computer, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or processing device. In the latter case, the remote computer can connect to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), connect to an external computer (for example, via the Internet), run in a cloud-computing environment, or be used as a service, such as software as a service (SaaS).
  • Some embodiments use numbers to describe quantities of components and attributes. It should be understood that such numbers are, in some examples, qualified by modifiers such as "about", "approximately", or "substantially". Unless otherwise stated, these modifiers indicate that the stated number may vary by ±20%.
  • Accordingly, the numerical parameters used in the description and claims are approximations that can change with the characteristics required by individual embodiments. In some embodiments, numerical parameters should respect the specified significant digits and use a general rounding method. Although the numerical ranges and parameters used to confirm the breadth of the ranges in some embodiments of this specification are approximations, in specific embodiments such values are set as precisely as practicable.

Abstract

A method for providing a user with a real-scene image, the method comprising: acquiring at least one candidate real scenic spot (A1, A2, A3) based on a target location (L1) (510); judging, based on the target location (L1), the user's movement direction, and the at least one candidate real scenic spot (A1, A2, A3), whether a preset condition is satisfied (520); in response to the preset condition being satisfied, determining a target real scenic spot from the candidate real scenic spots (A1, A2, A3) for which the preset condition is satisfied (530); determining, based on the target real scenic spot, the real-scene image corresponding to the target location (L1); and displaying the real-scene image on a navigation interface associated with the user (540). By displaying real-scene images of the surroundings of the target location (L1) on the user's navigation interface, the accuracy of target-location recognition is improved, which in turn can improve task-processing efficiency.

Description

一种为用户提供实景图的方法及系统
交叉引用
本申请要求2020年06月17日提交的中国申请号202010555616.0的优先权,全部内容通过引用并入本文。
技术领域
本发明涉及计算机技术领域,特别涉及一种为用户提供实景图的方法及系统。
背景技术
目前,用户通常按照规划好的导航路线到达目标位置,但是若用户处于定位信号弱、或者目的地路况复杂等情况下,可能会无法准确识别目标位置,从而可能在目标位置附近出现偏航问题,进而导致无法准确到达目标位置。如果能够为用户提供导航目的位置或中转地的实景图,则可以优化导航系统,更好的帮助用户到达目标位置。
为此,本说明书实施例提出一种为用户提供实景图的方法及系统,可以准确引导用户到达目标位置。
发明内容
本说明书实施例的一个方面提供一种为用户提供实景图的方法,包括:基于目标位置,获取至少一个候选实景点;基于所述目标位置、用户的运动方向和所述至少一个候选实景点,判断预设条件是否被满足;响应于所述预设条件被满足,从所述预设条件被满足对应的候选实景点中确定目标实景点;基于所述目标实景点,确定所述目标位置对应的实景图;以及在所述用户相关的导航界面上显示所述实景图。
本说明书实施例的另一个方面提供一种为用户提供实景图的系统,包括:获取模块,用于基于目标位置,获取至少一个候选实景点;判断模块,用于基于所述目标位置、用户的运动方向和所述至少一个候选实景点,判断预设条件是否被满足;确定模块,用于响应于所述预设条件被满足,从所述预设条件被满足对应的候选实景点中确定目标实景点;显示模块,用于基于所述目标实景点,确定所述目标位置对应的实景图;以及在所述用户相关的导航界面上显示所述实景图。
本说明书实施例的一个方面提供一种为用户提供实景图的装置,所述装 置包括处理器以及存储器,所述存储器用于存储指令,所述处理器用于执行所述指令,以实现如前述的为用户显示实景图的方法所对应的操作。
本说明书实施例的一个方面提供一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,实现如前述的为用户显示实景图的方法所对应的操作。
附图说明
本说明书将以示例性实施例的方式进一步描述,这些示例性实施例将通过附图进行详细描述。这些实施例并非限制性的,在这些实施例中,相同的编号表示相同的结构,其中:
图1是根据本说明书一些实施例所示的实景图提供系统的应用场景示意图;
图2是根据本说明书一些实施例所示的一种示例性计算设备的示意图;
图3是根据本说明书一些实施例所示的移动设备的示例性硬件和/或软件的示意图;
图4是根据本说明书一些实施例所示的为用户提供实景图的系统的模块图;
图5是根据本说明书一些实施例所示的为用户提供实景图方法的示例性流程图;
图6是根据本说明书一些实施例所示的确定预设条件的示意图;
图7是根据本说明书一些实施例所示的为用户提供实景图方法的另一示例性流程图;
图8是根据本说明书一些实施例所示的判断预设条件是否被满足的示例性流程图;
图9是根据本说明书一些实施例所示的确定目标实景点的示例图;
图10是根据本说明书一些实施例所示的显示实景图的示例性流程图;
图11是根据本说明书一些实施例所示的显示实景图的另一示例性流程图;
图12a和图12b是根据本说明书一些实施例所示的显示实景图的示意图;
图13是根据本说明书一些实施例所示的对用户进行位置关系提示的示 例性流程图;
图14是根据本说明书一些实施例所示的确定用户与目标位置关系的示例图;
图15是根据本说明书一些实施例所示的用户相关的导航界面的示意图;
图16是根据说明书一些实施例所示的为用户提供实景图的另一示例性流程图。
具体实施方式
为了更清楚地说明本说明书实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单的介绍。显而易见地,下面描述中的附图仅仅是本说明书的一些示例或实施例,对于本领域的普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图将本说明书应用于其它类似情景。除非从语言环境中显而易见或另做说明,图中相同标号代表相同结构或操作。
应当理解,本说明书中所使用的“系统”、“装置”、“单元”和/或“模组”是用于区分不同级别的不同组件、元件、部件、部分或装配的一种方法。然而,如果其他词语可实现相同的目的,则可通过其他表达来替换所述词语。
如本说明书和权利要求书中所示,除非上下文明确提示例外情形,“一”、“一个”、“一种”和/或“该”等词并非特指单数,也可包括复数。一般说来,术语“包括”与“包含”仅提示包括已明确标识的步骤和元素,而这些步骤和元素不构成一个排它性的罗列,方法或者设备也可能包含其它的步骤或元素。
本说明书中使用了流程图用来说明根据本说明书的实施例的系统所执行的操作。应当理解的是,前面或后面操作不一定按照顺序来精确地执行。相反,可以按照倒序或同时处理各个步骤。同时,也可以将其他操作添加到这些过程中,或从这些过程移除某一步或数步操作。
图1是根据本说明书一些实施例所示的实景图提供系统的应用场景示意图。
实景图提供系统100可以应用于地图服务系统、导航系统、运输系统、交通服务系统等。例如,实景图提供系统100可以应用于提供互联网服务的线上服务平台。在一些实施例中,实景图提供系统100可以应用于网约车服务,例如出租车呼叫、快车呼叫、专车呼叫、小巴呼叫、拼车、公交服务、司机雇佣和接 送服务等。在一些实施例中,实景图提供系统100还可以应用于代驾服务、快递、外卖等。
实景图提供系统100可以是一个线上服务平台,包括服务器110、网络120、终端130以及数据库140。该服务器110可以包含处理设备112。
在一些实施例中,服务器110可以用于处理与为用户提供实景图相关的信息和/或数据。服务器110可以是独立的服务器或者服务器组。该服务器组可以是集中式的或者分布式的(如:服务器110可以是分布系统)。在一些实施例中该服务器110可以是区域的或者远程的。例如,服务器110可通过网络120访问存储于终端130、数据库140中的信息和/或资料。在一些实施例中,服务器110可直接与终端130、数据库140连接以访问存储于其中的信息和/或资料。在一些实施例中,服务器110可在云平台上执行。例如,该云平台可包括私有云、公共云、混合云、社区云、分散式云、内部云等中的一种或其任意组合。
在一些实施例中,服务器110可包含处理设备112。该处理设备112可处理与为用户提供实景图相关的数据和/或信息以执行一个或多个本申请中描述的功能。例如,处理设备112可以基于目标位置,确定至少一个候选实景点。又例如,处理设备112可以基于用户当前位置与目标位置,判断触发条件是否被满足。又例如,处理设备112可以基于用户运动方向、目标位置和候选位置,确定预设条件是否被满足。又例如,处理设备112可以基于目标位置和候选位置,确定在导航界面上显示实景图的角度或/方向。在一些实施例中,处理设备112可包含一个或多个子处理设备(例如,单芯处理设备或多核多芯处理设备)。仅仅作为范例,处理设备112可包含中央处理器(CPU)、专用集成电路(ASIC)、专用指令处理器(ASIP)、图形处理器(GPU)、物理处理器(PPU)、数字信号处理器(DSP)、现场可编程门阵列(FPGA)、可编辑逻辑电路(PLD)、控制器、微控制器单元、精简指令集电脑(RISC)、微处理器等或以上任意组合。
网络120可促进数据和/或信息的交换。在一些实施例中,系统100中的一个或多个组件(例如,服务器110、终端130、数据库140)可通过网络120发送数据和/或信息给其他组件。在一些实施例中,网络120可以是任意类型的有线或无线网络。例如,网络120可包括缆线网络、有线网络、光纤网络、电信网络、内部网络、网际网络、区域网络(LAN)、广域网络(WAN)、无线区域网络(WLAN)、都会区域网络(MAN)、公共电话交换网络(PSTN)、蓝牙网 络、ZigBee网络、近场通讯(NFC)网络等或以上任意组合。在一些实施例中,网络120可以包括一个或多个网络进出点。例如,网络120可包含有线或无线网络进出点,如基站和/或网际网络交换点120-1、120-2、…,通过这些进出点,系统100的一个或多个组件可连接到网络120上以交换数据和/或信息。
在一些实施例中,终端130的用户可以是服务提供者。例如,服务提供者可以是网约车司机、外卖送餐员、快递员等等。在一些实施例中,终端130的用户也可以是服务使用者,例如,服务使用者可以包括地图服务使用者、导航服务使用者、运输服务使用者等。在一些实施例中,终端130可包括移动装置130-1、平板电脑130-2、笔记本电脑130-3、机动车内建装置(未示出)等中的一种或其任意组合。在一些实施例中,移动装置130-1可包括可穿戴装置、智能行动装置、虚拟实境装置、增强实境装置等或其任意组合。在一些实施例中,可穿戴装置可包括智能手环、智能鞋袜、智能眼镜、智能头盔、智能手表、智能衣物、智能背包、智能配饰等或其任意组合。在一些实施例中,智能行动装置可包括智能电话、个人数字助理(PDA)、游戏装置、导航装置、POS装置等或其任意组合。在一些实施例中,虚拟实境装置和/或增强实境装置可包括虚拟实境头盔、虚拟实境眼镜、虚拟实境眼罩、增强实境头盔、增强实境眼镜、增强实境眼罩等或以上任意组合。在一些实施例中,机动车内建装置可以包括车载导航仪、车载定位仪、行车记录仪等或其任意组合。在一些实施例中,终端130可包括具有定位功能的装置,以确定用户和/或终端130的位置。在一些实施例中,终端130可包括具有界面显示的装置,以为终端的用户显示实景图。在一些实施例中,终端130可包括具有输入功能的装置,以用户输入目标位置。
数据库140可存储资料和/或指令。在一些实施例中,数据库140可存储从终端130获取的资料。在一些实施例中,数据库140可存储供服务器110执行或使用的信息和/或指令,以执行本申请中描述的示例性方法。在一些实施例中,数据库140可以存储实景点(即实景点的位置信息(如,经纬度坐标))、实景点对应的实景图或实景图显示角度或方向或矫正算法等。在一些实施例中,数据库140可包括大容量存储器、可移动存储器、挥发性读写存储器(例如,随机存取存储器RAM)、只读存储器(ROM)等或以上任意组合。在一些实施例中,数据库140可在云平台上实现。例如,该云平台可包括私有云、公共云、混合云、社区云、分散式云、内部云等或以上任意组合。
在一些实施例中,数据库140可与网络120连接以与系统100的一个或多个组件(例如,服务器110、终端130等)通讯。系统100的一个或多个组件可通过网络120访问存储于数据库140中的资料或指令。例如,服务器110可以从数据库140中是实景点或实景点对应的实景图等并进行相应处理。在一些实施例中,数据库140可直接与系统100中的一个或多个组件(如,服务器110、终端130)连接或通讯。在一些实施例中,数据库140可以是服务器110的一部分。
图2是根据本说明书一些实施例所示的一种示例性计算设备的示意图。
在一些实施例中,服务器110和/或请求者终端130可以在计算设备200上实现。例如,处理设备112可以在计算设备200上实施并执行本申请所公开的处理设备112的功能。如图2所示,计算设备200可以包括总线210、处理器220、只读存储器230、随机存储器240、通信端口250、输入/输出接口260和硬盘270。
处理器220可以执行计算指令(程序代码)并执行本申请描述的实景图提供系统100的功能。所述计算指令可以包括程序、对象、组件、数据结构、过程、模块和功能(所述功能指本申请中描述的特定功能)。例如,处理器220可以处理从实景图提供系统100的其他任何组件获取的图像或文本数据。在一些实施例中,处理器220可以包括微控制器、微处理器、精简指令集计算机(RISC)、专用集成电路(ASIC)、应用特定指令集处理器(ASIP)、中央处理器(CPU)、图形处理单元(GPU)、物理处理单元(PPU)、微控制器单元、数字信号处理器(DSP)、现场可编程门阵列(FPGA)、高级RISC机(ARM)、可编程逻辑器件以及能够执行一个或多个功能的任何电路和处理器等,或其任意组合。仅为了说明,图2中的计算设备200只描述了一个处理器,但需要注意的是,本申请中的计算设备200还可以包括多个处理器。
计算设备200的存储器(例如,只读存储器(ROM)230、随机存储器(RAM)240、硬盘270等)可以存储从实景图提供系统100的任何其他组件获取的数据/信息。示例性的ROM可以包括掩模ROM(MROM)、可编程ROM(PROM)、可擦除可编程ROM(PEROM)、电可擦除可编程ROM(EEPROM)、光盘ROM(CD-ROM)和数字通用盘ROM等。示例性的RAM可以包括动态RAM(DRAM)、双倍速率同步动态RAM(DDR SDRAM)、静态RAM(SRAM)、晶闸管RAM(T-RAM)和零电容(Z-RAM)等。
输入/输出接口260可以用于输入或输出信号、数据或信息。在一些实施例中,输入/输出接口260可以使用户与实景图提供系统100进行联系。在一些实施例中,输入/输出接口260可以包括输入装置和输出装置。示例性输入装置可以包括键盘、鼠标、触摸屏和麦克风等,或其任意组合。示例性输出装置可以包括显示设备、扬声器、打印机、投影仪等或其任意组合。示例性显示装置可以包括液晶显示器(LCD)、基于发光二极管(LED)的显示器、平板显示器、曲面显示器、电视设备、阴极射线管(CRT)等或其任意组合。通信端口250可以连接到网络以便数据通信。所述连接可以是有线连接、无线连接或两者的组合。有线连接可以包括电缆、光缆或电话线等或其任意组合。无线连接可以包括蓝牙、Wi-Fi、WiMax、WLAN、ZigBee、移动网络(例如,3G、4G或5G等)等或其任意组合。在一些实施例中,通信端口250可以是标准化端口,如RS232、RS485等。在一些实施例中,通信端口250可以是专门设计的端口。
图3是根据本说明书一些实施例所示的移动设备的示例性硬件和/或软件的示意图。
如图3所示,移动设备300可以包括通信单元310、显示单元320、图形处理器(GPU)330、中央处理器(CPU)340、输入/输出单元350、内存360、存储单元370等。在一些实施例中,操作系统361(例如,iOS、Android、Windows Phone等)和应用程序362可以从存储单元370加载到内存360中,以便由CPU340执行。应用程序362可以包括浏览器或用于从实景图提供系统100接收文字、图像、音频或其他相关信息的应用程序。
为了实现在本申请中描述的各种模块、单元及其功能,计算设备或移动设备可以用作本申请所描述的一个或多个组件的硬件平台。这些计算机或移动设备的硬件元件、操作系统和编程语言本质上是常规的,并且本领域技术人员熟悉这些技术后可将这些技术适应于本申请所描述的实景图提供系统。具有用户界面元件的计算机可以用于实现个人计算机(PC)或其他类型的工作站或终端设备,如果适当地编程,计算机也可以充当服务器。
图4是根据本说明书的一些实施例所示的为用户提供实景图的系统的模块图。如图4所示,为用户提供实景图的系统(如处理设备112)可以包括获取模块410、判断模块420、确定模块430、显示模块440和提醒模块450。
获取模块410,用于基于目标位置,获取至少一个候选实景点。在一些实 施例中,获取模块410还用于:基于所述目标位置,获取至少一个待矫正候选实景点;使用矫正算法对所述至少一个待矫正候选实景点进行矫正,得到所述至少一个候选实景点。
判断模块420,用于基于所述目标位置、用户的运动方向和所述至少一个候选实景点,判断预设条件是否被满足。在一些实施例中,判断模块还用于:基于所述候选实景点和所述目标位置确定第一方向;根据所述第一方向和所述用户的所述运动方向的夹角,判断所述预设条件是否被满足。在一些实施例中,判断模块还用于:基于所述至少一个候选实景点与所述目标位置之间的距离大小顺序,基于所述运动方向、所述目标位置和所述候选实景点,判断所述预设要求是否被满足;当所述判断结果为满足时,停止判断;否则针对另外的候选实景点做出判断。在一些实施例中,判断模块还用于:基于所述用户的当前位置和所述目标位置,判断触发条件是否被满足;并且,只有在所述触发条件被满足时确定所述目标实景点。
确定模块430,用于响应于所述预设条件被满足,从所述预设条件被满足对应的候选实景点中确定目标实景点。
显示模块440,用于基于所述目标实景点,确定所述目标位置对应的实景图;以及在所述用户相关的导航界面上显示所述实景图。在一些实施例中,显示模块440还用于:基于所述目标位置和所述目标实景点,确定所述实景图的显示方向和/或夹角;基于所述显示方向和/或夹角,在所述导航界面上显示所述实景图。在一些实施例中,显示模块440还用于:基于所述用户的当前位置和所述目标位置,确定所述实景图的缩略或放大参数;基于所述缩略或放大参数,在所述导航界面上显示所述实景图。
提醒模块450用于:确定所述目标位置在所述目标实景点所在路线上的投影点;根据所述运动方向和所述投影点,确定所述用户与所述目标位置的位置关系;提醒所述用户所述位置关系。在一些实施例中,提醒模块450还用于:计算所述用户的当前位置与所述目标实景点的距离,确定所述用户的运动进度;提醒所述用户所述运动进度。
应当理解,图4所示的系统及其模块可以利用各种方式来实现。例如,在一些实施例中,系统及其模块可以通过硬件、软件或者软件和硬件的结合来实现。其中,硬件部分可以利用专用逻辑来实现;软件部分则可以存储在存储器中, 由适当的指令执行系统,例如微处理器或者专用设计硬件来执行。本领域技术人员可以理解上述的方法和系统可以使用计算机可执行指令和/或包含在处理器控制代码中来实现,例如在诸如磁盘、CD或DVD-ROM的载体介质、诸如只读存储器(固件)的可编程的存储器或者诸如光学或电子信号载体的数据载体上提供了这样的代码。本说明书的系统及其模块不仅可以有诸如超大规模集成电路或门阵列、诸如逻辑芯片、晶体管等的半导体、或者诸如现场可编程门阵列、可编程逻辑设备等的可编程硬件设备的硬件电路实现,也可以用例如由各种类型的处理器所执行的软件实现,还可以由上述硬件电路和软件的结合(例如,固件)来实现。
需要注意的是,以上对于为用户提供实景图的系统及其模块的描述,仅为描述方便,并不能把本说明书限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该系统的原理后,可能在不背离这一原理的情况下,对各个模块进行任意组合,或者构成子系统与其他模块连接。例如,图4中披露的获取模块410、判断模块420、确定模块430、显示模块440以及提醒模块450可以是一个系统中的不同模块,也可以是一个模块实现上述的两个模块的功能。又例如,为用户提供实景图的系统中各个模块可以共用一个存储模块,各个模块也可以分别具有各自的存储模块。诸如此类的变形,均在本说明书的保护范围之内。
图5是根据本说明书一些实施例所示的为用户提供实景图方法的示例性流程图。如图5所示,流程500包括下述步骤。在一些实施例中,流程500可以由处理设备(例如,处理设备112)执行。
步骤510,基于目标位置,获取至少一个候选实景点。在一些实施例中,该步骤510可以由获取模块410执行。
目标位置可以表示用户想要达到的位置。例如,目标位置可以表示用户想要通过导航到达的终点。又例如,对于网约车场景而言,目标位置可以表示乘客的上车点(即,司机拾取乘客的位置)。用户可以是任意使用地图或导航的用户。例如,共享出行中,为乘客提供服务的司机等。
在一些实施例中,目标位置可以包括目标位置信息。在一些实施例中,位置信息可以包括但不限于名称信息和坐标信息。其中,坐标信息可以包括经纬度坐标信息,例如,GNSS(Global Navigation Satellite System,全球导航卫星系 统)坐标或GPS(Global Positioning System,全球定位系统)坐标等。本说明书中的一些实施例中基于目标位置进行处理,例如,确定与用户当前位置的距离、确定与用户运动方向和候选实景点之间关系等,实际是基于目标位置信息进行处理。在本说明书中,某些具体的名词可以包括与此名词相关的信息。例如,实景点、候选实景点、目标实景点或用户当前位置、实景图等。相应的,在一些实施例中,涉及具体名词的操作,实际是针对与其相关的信息进行操作,后续不再赘述。
在一些实施例中,目标位置可以通过用户在终端输入获取,也可以从存储设备(例如,数据库140等)中直接读取,也可以根据API接口调用对应的地图服务中获得。本实施例对目标位置获取的方式不做限制。
实景点是指拍摄现实环境(例如,城市、商圈或街道等环境)的实际拍摄地点。在一些实施例中,实景点为道路上的点。例如,在某条道路上,每隔一定距离(例如5米)确定一个实景点,并进行实景拍摄。将基于实景点拍摄的图像或者对拍摄的图像进行处理后得到图像称为实景图。一般情况下,基于实景点拍摄实景图时,是以实景点向各个方向上采集图像,即,实景图可以是360°全景图像,其也可称为全景图。例如,实景图(或全景图)以实景点为中心由轴向和纵向绕其一周进行拍摄。又例如,实景图(或全景图)可以预先通过全景采集车在路网上不断采集处理后获得。又例如,实景图(或全景图)采用全景摄像机在对应的实景点采集的图像等。
实景点及其对应的实景图可以从预定地图服务中获取,例如,通过API接口调用预定地图服务获取等。实景点及其对应的实景图还可以直接从存储设备(例如,数据库140等)中获取。例如,在终端或服务器中嵌入多个实景点坐标与其对应的全景图,即可从终端或服务器中直接读取。获取实景点及其对应的实景图还可以通过其他方式,本实施例不做限制。
在一些实施例中,基于目标位置,可以获取与目标位置相关的至少一个候选实景点,与目标位置相关可以是与目标位置之间的关系满足预设要求。其中,预设要求包括但不限于:候选实景点与目标位置之间的距离小于阈值(例如,5米或10米等)或/和候选实景点与目标位置之间不存在遮挡物(如,建筑或施工场地等)等。在一些实施例中,在确定候选实景点时还可以考虑用户当前的信息,可以理解,候选实景点可以是与目标位置、用户当前位置相关的实景点。
考虑到基于定位技术(例如,GPS或GNSS等定位系统)获取的位置信息(例如,坐标信息)进行地图匹配(将位置信息匹配到路网上)时可能会有误差,应该在路网上的位置可能不显示在路网上(例如,路旁边的房屋里或池塘里等),或者,通过API接口调用预定地图服务,获取该预定地图服务根据目标位置返回的候选实景点坐标,由于地图服务返回的坐标有时候可能不准确,返回的坐标无法关联到地图的路网上。若地图匹配时出现误差,在一些情况下,在后续确定实景图时可能不准确,可能会导致向用户传递错误的信息,影响用户体验。
在一些实施例中,获取的候选实景点可以是经过矫正后得到的实景点。具体的,基于目标位置,获取至少一个待矫正的候选实景点;使用矫正算法对至少一个待矫正的候选实景点进行矫正,得到至少一个候选实景点。待矫正的候选实景点是指基于定位技术获取的实景点中,与目标位置之间的关系满足上述预设要求的实景点。矫正算法可以是指将待矫正的候选实景点关联到路网上的算法。例如,采用矫正算法可以将地图服务返回的GNSS或GPS坐标关联到地图的路网上,也即将GNSS或GPS下采样的坐标序列转换为路网坐标序列以对地图服务返回的坐标进行矫正。在一些实施例中,矫正算法可以是将待矫正的候选实景点关联到最近的路线上的算法,例如,将待矫正的候选实景点直接投影到最近的路线上,将投影点作为矫正后的实景点(即,候选实景点)。路线是地图中的路网的组成部分,路线可以视为地图中的道路。在一些实施例中,矫正算法可以是隐马尔科夫模型(Hidden Markov Model,HMM)、ST-Matching算法或IVVM((An Interactive-Voting Based Map Matching Algorithm)算法等。
在一些实施例中,步骤310之前还可以包括:基于用户当前位置和目标位置,判断触发条件是否被满足的步骤,并且,在满足触发条件的情况下才执行上述步骤310。
用户当前位置可以通过用户终端获取。可以理解的,用户当前位置也可以称为终端当前位置。例如,终端可以为司机移动终端,也可以为其他车载设备,终端当前位置可以根据终端的定位装置或者车辆上的定位装置确定,也可以根据API接口调用对应的地图服务中获得,本实施例不做限制。
在一些实施例中,判断触发条件是否被满足的步骤可以包括:判断用户的当前位置和目标位置之间的距离是否小于或等于第一阈值(例如50米)。在一些实施例中,该距离可以是路线距离(即,地图中路网中路线的长度),该距 离也可以是直线距离。本实施例不做限制。第一阈值可以根据实际应用场景进行设置,本实施例不做限制。
具体地,若判断结果为是,则开始执行上述步骤310及其后续步骤,确定与目标位置对应的实景点;若判断结果为否,则继续基于当前位置和目标位置之间的距离判断是否满足触发条件。
在导航过程中,在距离目标位置较近时(例如小于50m),通常会进入轻导航阶段,也即调整地图与实际距离比例,并控制导航界面上显示用户(例如,司机)当前位置、终点位置、导航路线、预计到达时间等信息。在一些实施例中,可以将第一阈值设置为切换轻导航时用户当前位置与目标位置之间的距离,也即,响应于导航界面进入轻导航阶段,根据用户运动方向确定目标位置对应的目标实景点坐标。
可以理解,当用户距离目标位置较远时,因为用户当前无法看到目标位置,提供目标位置对应的实景图的意义有限,在一些情况下,展示实景图反而可能误导用户或者干扰用户。通过设置上述触发条件,可以减少对用户的干扰的同时,还可以在合适时机提醒用户即将到达目的地。
步骤520,基于目标位置、用户的运动方向和至少一个候选实景点,判断预设条件是否被满足。在一些实施例中,该步骤520可以由判断模块420执行。
预设条件可以是指当用户到达候选实景点时,用户(包括,身体或头)转动正负90°之间的角度(其中,以用户的运动方向为0°),即可观察到目标位置对应的条件。在一些实施例中,预设条件还可以是候选实景点位于目标实景点的后方(包括,正后方或斜后方),且位于用户当前位置的前方。
在一些实施例中,预设条件可以是:候选实景点位于目标路线段中,其中,目标路线段可以基于目标位置在用户所在路线上的投影点、用户的当前位置确定。目标路线段指用户当前位置与目标位置之间的路线。其中,投影可以是过目标位置向路线做垂线,垂线与路线的交点为投影点。在一些实施例中,目标路线段还可以基于目标位置在用户所在路线上的投影点、用户的当前位置确定和运动方向确定。目标路线段指用户当前位置与目标位置之间,且在运动方向前方的路线。
示例的,如图6,L1表示目标位置,L2表示用户当前位置,箭头c表示用户的运动方向,A1、A2、A3表示候选实景点,P表示目标位置L1在用户所 在路线上的投影点。目标路线段可以表示用户当前位置L2与投影点P之间的路段。可以看出,候选实景点A1位于该目标路线中,预设条件被满足,A2、A3不位于该目标路线中,预设条件未被满足。
在一些实施例中,判断预设条件是否被满足的方式也可以包括:基于候选实景点和目标位置确定的第一方向,与用户的运动方向之间的夹角判断,例如,判断该夹角与阈值角度之间的关系是否符合条件。关于该判断方式的更多细节可以参见图8及其相关描述。
容易理解,在一些情况下,当候选实景点与目标位置,以及与用户的运动方向不满足预设条件时,当用户运动至候选实景点时,可能已驶过目标位置,此时用户结合实景图寻找目标位置需要往左后方或右后方观看,极不利于安全驾驶。而且,因为已驶过目标位置,用户为了达到目标位置,可能需要掉头,影响用户体验。
在一些实施例中,判断预设条件是否被满足的方式也可以包括:基于至少一个候选实景点与目标位置之间的距离大小顺序,基于用户的运动方向、目标位置和候选实景点,判断预设要求是否被满足,当判断结果为满足时,停止判断,否则针对另外的候选实景点做出判断。例如,距离大小顺序可以是由小到大的顺序。
应当理解,通过按照候选实景点与目标位置之间的距离由小到大地判断,并在某一候选实景点满足条件时停止判断的方式,可以确定一个与用户运动方向和目标位置之间的关系满足预设条件的候选实景点(可以称为“预设条件被满足对应的候选实景点”),进一步地可以将其确定为目标实景点。在一些情况下,该方式的判断过程可以避免一些非必要的判断过程,有助于加快目标实景点的确定速度。
步骤530,响应于预设条件被满足,从预设条件被满足对应的候选实景点中确定目标实景点。在一些实施例中,该步骤530可以由确定模块430执行。
至少一个候选实景点中可能有一个也可能有多个候选实景点与用户运动方向和目标位置之间的关系满足预设要求,即,预设条件被满足对应的候选实景点可以有一个或多个。在一些实施例中,当预设条件被满足对应的候选实景点只有一个时,可以直接将其作为目标实景点。在一些实施例中,当预设条件被满足对应的候选实景点有多个时,可以进一步从其中筛选出目标实景点。例如,可以 从预设条件被满足对应的候选实景点随机选择一个作为目标实景点。又例如,可以基于目标位置与候选实景点之间的距离,对预设条件被满足对应的候选实景点进行排序,并从中选择距离最近的作为目标实景点。又例如,设定一个第一方向和用户运动方向的夹角,与目标位置与候选实景点的距离之间的对应规则,该规则可以使用户以最方便的方式观察目标位置,进一步的,基于该对应规则,从预设条件被满足对应的候选实景点中,选择最符合该对应规则的作为目标实景点。
可以理解的,候选实景点距离目标位置越近,其全景图越接近目标位置的真实环境,在一些情况下,有助于帮助用户基于实景图寻找目标位置。特别的,对于共享出行领域,为了方便司乘碰面,乘客和司机均显示目标实景点的实景图,将预设条件被满足对应的候选实景点中,与目标位置距离最近的候选实景点确定为目标实景点,尽可能方便乘客确认上车点的同时,实现司机的安全驾驶。
步骤540,基于目标实景点,确定目标位置对应的实景图;以及在用户相关的导航界面上显示实景图。在一些实施例中,该步骤540可以由显示模块440执行。
如前所述,实景图是基于实景点拍摄得到,即,实景点存在对应的实景图。在一些实施例中,目标实景点被确定的基础上,显示模块440可以获取目标实景点对应的实景图(具体参见步骤510)。例如,根据API接口调用预定地图服务,获取该预定地图服务根据目标实景点坐标返回的实景图(或全景图)。返回的实景图(或全景图)可以是以目标实景点为采集点向各个方向采集的图像或全景图像。进一步,显示模块440基于目标实景点的实景图获取目标位置对应的实景图。
在一些实施例中,显示模块440可以直接将目标实景点的实景图作为目标位置对应的实景图。在一些实施例中,显示模块440在获取了目标实景点的实景图之后,可以对该实景图进行处理,并将处理后的实景图作为目标位置对应的实景图。其中,处理包括但不限于:放大、缩小、调节分辨率、调节饱和度、调节亮度或裁剪等图像处理手段。
在一些实施例中,显示模块440可以将目标实景点的实景图或对目标实景点的实景图处理后得到的实景图在特定视角角度下的部分图像作为目标位置对应的实景图。在一些实施例中,显示模块440可以根据目标位置、目标实景点和用户的运动方向确定特定视觉角度,并将目标实景点的实景图在该特定视角角 度下的图像作为目标位置对应的实景图。例如,将第一向量和第二向量的夹角确定特定视觉角度,其中,第一向量由候选实景点和目标位置确定,第二向量由用户的运动方向确定。本实施例根据上述第一向量和第二向量的夹角确定对应的视觉角度,以便司机能够基于该视觉角度下的实景图较为准确地观测到目标位置周围的街景。
在一些实施例中,在导航界面上显示的实景图可以是目标实景点的全部景象或部分景象,即展示的是候选实景点的实景图的全部或部分内容。在展示目标位置对应的实景图之前,可以对目标位置对应的实景图进行预处理,例如视角调整、调整分辨率、调整亮度、调整饱和度、缩略或/和放大等。关于视角调整的具体内容可以参见本说明书图10部分及其相关内容,关于缩放或放大的具体内容可以参见本说明书图11部分及其相关内容。
在一些实施例中,向用户展示该实景图的过程中还可以接收用户的操作指令,并根据用户的操作指令对展示给用户的实景图进行调整,例如,更换实景图的指令、实景图显示视角的调整(例如,旋转等)指令、视距的调整指令、图像大小调整指令、图像分辨率调整指令等图像处理相关指令,从而根据用户需求为其提供相应的实景图。
随着用户的运动,若运动方向发生改变,在一些实施例中,可以实时或者预定时间间隔更新用户的运动方向,进一步地,通过前述方式重新判断预设条件是否被满足,更进一步地,调整目标实景点以及目标位置对应的实景图等。可以理解的,导航界面显示的实景图可以跟着(例如,实时跟着)用户的运动进行变化。
通过将目标位置对应的实景图通过导航界面展示给用户,辅助用户(例如,司机)寻找目标位置。
图7是根据本申请一些实施例所示的为用户提供实景图的方法的另一流程图。如图7所示,流程700包括下述步骤。在一些实施例中,流程700可以由处理设备(例如,处理设备112)执行。
步骤710,获取用户当前位置和目标位置。在一些实施例中,该步骤710可以由获取模块410执行。
用户当前位置和目标位置的获取,参见步骤510及其相关描述。
步骤720,响应于用户当前位置与目标位置之间的关系满足触发条件, 根据用户运动方向确定目标位置对应的目标实景点坐标。在一些实施例中,该步骤720可以由确定模块430执行。
触发条件的更多细节参见图步骤510,此处不再赘述。
在一些实施例中,步骤720具体可以包括:响应于用户当前位置与目标位置之间的关系满足触发条件,获取目标位置周围的候选实景点坐标,响应于候选实景点坐标、目标位置和用户运动方向之间的关系满足预定条件,将该候选实景点坐标确定为目标位置对应的目标实景点坐标。例如,响应于用户当前位置与目标位置之间的关系满足触发条件,根据候选实景点与目标位置之间的距离依次获取目标位置周围的候选实景点坐标,直至获取的候选实景点坐标与目标位置和用户运动方向之间的关系满足预定条件。关于预设条件的更多细节可以参见步骤520。
在一些实施例中,获取目标位置周围的候选实景点坐标包括:获取预定地图服务根据目标位置返回的坐标,根据预定的矫正模型对返回的坐标进行矫正,以获取目标位置周围的候选实景点坐标。关于矫正模型的更多细节参见步骤510及其相关描述。
在一些实施例中,可以通过用户运动方向、目标位置和目标实景点坐标确定目标位置位于终端的左侧或右侧,有助于提高司机根据目标实景点坐标对应的实景图发现目标位置的效率。具体参见图13及其相关描述。在一些实施例中,在判断获取的候选实景点坐标与目标位置和用户运动方向之间的关系是否满足预定条件之前,根据终端当前位置、用户运动方向和候选实景点坐标确定该候选实景点是否在用户的前方,若该候选实景点在用户的前方,则判断预定条件是否被满足,若该候选实景点在用户的后方,则可以判断车辆已经驶过目标位置,可以使得用户的导航向用户发出驶过目标位置的提醒。
在一些实施例中,响应于用户当前位置满足触发条件,获取距离目标位置最近的候选实景点坐标,根据用户当前位置、用户运动方向和该候选实景点坐标判断该候选实景点是否位于用户运动方向的前方,在候选实景点位于终端运动方向的前方时,则判断预定条件是否被满足。
步骤730,确定目标实景点坐标对应的实景图。在一些实施例中,该步骤730可以由显示模块440执行。
确定目标实景点坐标对应的实景图具体细节参见步骤540及其相关描述。
步骤740,将实景图发送至导航界面进行显示。在一些实施例中,该步骤740可以由显示模块440执行。
在一些实施例中,导航界面可以是用户移动终端的导航界面,也可以为其他车载设备的导航界面,可以全屏或也可以部分显示,本实施例对此不做限制。关于在导航界面上显示实景图的更多细节参见图10和图11。
图8是根据本说明书一些实施例所示的判断预设条件是否被满足的示例性流程图。如图8所示,流程800包括下述步骤。在一些实施例中,流程800可以由处理设备(例如,处理设备112)中的判断模块420执行。
步骤810,基于候选实景点和目标位置确定第一方向。
第一方向的起点可以是候选实景点或目标位置。
步骤820,根据第一方向和用户的运动方向的夹角,判断预设条件是否被满足。
当根据第一方向和用户的运动方向的夹角,判断预设条件是否被满足时,预设条件与该夹角相关。预设条件可以根据第一方向起点对应设置。例如,当第一方向的起点为候选实景点,预设条件为第一方向和用户的运动方向的夹角小于或等于角度阈值,可选的,角度阈值等于或小于90°(如,90°、60°或30°等)。又例如,当第一方向的起点为目标位置,预设条件为第一方向和用户的运动方向的夹角大于或等于角度阈值,可选的,角度阈值等于或大于90°(如,90°、100°或130°等)。
在一些实施例中,预设条件可以根据第一向量和第二向量的向量起点或向量终点进行确定。其中,第一向量由候选实景点和目标位置确定,第二向量由用户运动方向确定。应理解的,本实施例并不对第一向量和第二向量的向量起点进行限制,角度阈值可以根据第一向量和第二向量的向量起点或向量终点进行确定。用户的运动方向可以为用户在导航路线中的各路段上的运动方向。预定条件为第一向量和第二向量的夹角小于或等于角度阈值。例如,第一向量的向量起点为候选实景点坐标,第二向量的向量起点为用户当前位置,可选的,角度阈值为90°。
如图9所示,用户当前位置为L2、目标位置为L1,在用户当前位置L2和目标位置L1之间的距离小于第一阈值时,获取与目标位置L1最近的候选实景点A1的坐标。在本实施例中,假设目标位置L1周围(例如以目标位置L1为 中心的第一阈值范围内)存在候选实景点A1、A2和A3,其中,候选实景点A1、A2和A3与目标位置L1之间的距离由近及远依次为候选实景点A1、候选实景点A2、候选实景点A3。
根据用户当前位置L2、用户运动方向c和候选实景点A1的坐标可以确定候选实景点A1位于终端运动方向的前方。然后,根据候选实景点A1的坐标和目标位置L1确定第一向量a1,根据用户运动方向c确定第二向量b,计算第一向量a1和第二向量b的夹角α,夹角α大于90°,当用户(例如,司机)运动至候选实景点A1时需要向斜后方观测才能发现目标位置L1及其周围的街景,这对安全性会带来影响。因此,候选实景点A1不能作为目标实景点。之后获取与目标位置L1较近的候选实景点A2的坐标,根据用户当前位置L1、用户运动方向和候选实景点A2的坐标可以确定候选实景点A2位于用户运动方向的前方。然后,根据候选实景点A2的坐标和目标位置L1确定第一向量a2,根据用户运动方向确定第二向量b,计算第一向量a2和第二向量b的夹角θ,夹角θ小于90°,由此,当用户动至候选实景点A2时向前方观测便能够发现目标位置L2及其周围的街景,因此,可以将候选实景点A2确定为目标实景点。
图10是根据本说明书一些实施例所示的显示实景图的示例性流程图。如图10所示,流程1000包括下述步骤。在一些实施例中,流程1000可以由处理设备(例如,处理设备112)中的显示模块440执行。
步骤1010,基于目标位置和目标实景点,确定实景图的显示方向和/或夹角。
实景图显示包含两个角度:水平视角和俯仰角。其中,水平视角可以是以水平地面为基础,建立的二维平面中,实景图像显示的角度。例如,建立的二维坐标系中,正北方向为0°,正南方向为180°。水平视角也可称为水平显示方向。俯仰角是指与水平地面的夹角。
用户在视觉范围内(特别的,司机在行驶车辆时)通常只能观察到水平0~180°范围内的景象。目标实景点的实景图可以为360°全景图像,若将目标实景点的360°全景图像确定为目标位置对应的实景图,可以通过显示方向和/或角度显示目标位置对应的实景图,便于用户结合显示的实景图,通过观察实景的街景或环境,确认目标位置。
在一些实施例中,以目标实景点为起点、目标位置为终点确定的方向, 作为目标位置对应的实景图在导航界面上显示的水平方向。例如,将实景图的初始方向旋转至该确定的水平方向,显示该实景图。其中,初始方向是指地图服务对实景图进行展示默认或设定的初始方向(例如,正北方向),初始方向可以根据具体应用场景设置,本实施例对此不做限制。
在一些实施例中,以目标实景点为起点、目标位置为终点确定的方向,与实景图初始方向之间的夹角,作为目标位置对应的实景图在导航界面上显示的水平视角。例如,将实景图的初始方向旋转该夹角的角度,显示该实景图。
在一些实施例中,俯仰角或竖直方向可以是默认值,例如,0°等,或者基于用户选择确定。
步骤1020,基于显示方向和/或夹角,在导航界面上显示实景图。
在一些实施例中,在显示目标位置对应的实景图之前,可以对该实景图进行处理。其中,处理可以包括:缩放、裁剪、调整分辨率、调整亮度和调整饱和度中的一种或多种的组合。
在一些实施例中,在基于显示角度和/或方向,在导航界面上显示实景图时,显示模块440可以根据用户当前位置与目标位置之间的相对位置变化而实时调整角度或方向。可以理解的,在调整过程中,保证目标位置在显示的内容中。在一些实施例中,显示模块440可以根据用户的反馈指令或操作实时调整角度或方向,例如,用户对展示实景图进行旋转、移动等操作,或者直接发出角度或方向调整指令等。
图11是根据本说明书一些实施例所示的显示实景图的另一示例性流程图。如图11所示,流程1100包括下述步骤。在一些实施例中,流程1100可以由处理设备(例如,处理设备112)中的显示模块440执行。
步骤1110,基于用户的当前位置和目标位置,确定实景图的缩略或放大参数。
在一些实施例中,显示模块440可以基于用户的当前位置和目标位置之间的距离,确定实景图的缩略或放大参数。例如,缩略的参数小大与距离大小成正比。又例如,放大的参数大小与距离大小成反比。可以理解的,缩略或放大参数是相对于原图大小而言的缩小参数或放大参数,参数可以是倍数、比例等。在一些实施例中,可以基于缩放算法,确定特定距离下的缩略或放大参数。
在一些实施例中,显示模块440可以基于用户的当前位置和目标实景点 之间的距离,确定实景图的缩略或放大参数。具体方式与基于用户的当前位置和目标位置之间的距离,确定实景图的缩略或放大参数类似。
考虑用户在不同阶段,对导航的不同需求,对在页面显示的实景图进行缩放。在一些情况下,用户距离目标位置越远,用户在导航页面上的查看路线、路况等信息的需求可能更大,此时显示的实景图的大小可以在保证能够清楚显示的前提下进行缩略处理。用户距离目标位置越近,用户使用实景图的需求更大,可以将展示的实景图进行放大。
步骤1120,基于缩略或放大参数,在导航界面上显示实景图。
在一些实施例中,在确定缩略或放大参数之后,可以在导航界面对目标位置对应的实景图进行显示。例如,显示模块440可以在显示的过程中,实时对目标位置对应的实景图进行缩放,然后将缩放之后的实景图在导航界面上显示,或者,将缩放之后的实景图以一定的水平视角和俯仰角在导航界面上显示。又例如,显示模块440可以在存储设备(例如,数据库140等)中存储不同距离下对应缩略或放大参数的实景图,在显示的过程中,根据距离直接读取对应的实景图进行显示。
在一些实施例中,实景图的大小还可以根据用户的操作指令进行调整,从而根据用户需求来确定向用户展示的实景图的大小,确保用户的使用体验。
示例的,如图12a和图12b所示,用户当前位置为L2,目标位置为L1。如图12a所示,当用户当前位置L2距离目标位置L1较远时,可以对目标位置对应的实景图进行缩略,然后向用户展示缩略图像,从而便于用户查看地图信息。反之,如图12b所所示,当用户当前位置L2距离目标位置L1较近时,可以对实景图进行放大,并向用户展示放大的实景图,从而便于用户确认目标位置。
在一些实施例中,显示模块还可以根据缩略或放大参数确定相应的图像处理手段,以保证在导航界面上显示的图像清晰。例如,若放大倍数大于阈值(例如,1倍),对图像进行锐化处理,使图像的轮廓清晰。
图13是根据本说明书一些实施例所示的对用户进行位置关系提示的示例性流程图。如图13所示,流程1300包括下述步骤。在一些实施例中,流程1300可以由处理设备(例如,处理设备112)中的提醒模块450执行。
步骤1310,确定目标位置在目标实景点所在路线上的投影点。
确定目标位置在对应的路线上的投影点。在一些实施例中,对应路线可 以是与目标位置相关的路线。例如,路网中与目标位置最近的路线。又例如,目标位置可以是目标实景点所在的路线。在一些实施例中,对应路线可以是用户所在的路线。
在一些实施例中,过目标位置向对应的路线做垂线,确定垂点为目标位置在该路线上的投影点。例如,响应于终端当前位置与目标位置之间的距离小于第一阈值,确定目标位置在对应的路线上的投影点。
步骤1320,根据运动方向和投影点,确定用户与目标位置的位置关系。
在一些实施例中,基于运动方向(即,运动方向作为0°),顺时针旋转0-180°的角度视为正角度,逆时针旋转0-180°的角度视为负角度,进一步地,可以通过判断第二方向与运动方向的夹角的正负,确定目标位置与用户的左右位置关系。其中,第二方向可以由目标位置与投影点,或目标位置与用户当前位置等确定。例如,第二方向基于投影点为起点,目标位置为终点确定的方向,则第二方向与运动方向的夹角为正时,目标位置在右侧,反之,在左侧。
需要说明的是,在一些情况下,目标位置可能刚好位于用户当前行驶的路线上。在一些实施例中,当第一方向和运动方向的夹角为0°,可以代表目标位置在道路的中间。可以理解的,为了保证驾驶或交通安全,可以根据当地的交通规则,确定目标位置在用户的左侧还是右侧。例如,对于中国,交通规则是靠右停车以及靠右行驶,则确定的位置关系为目标位置在用户的右侧。
步骤1330,提醒用户位置关系。
在一些实施例中,提醒用户位置关系可以包括:控制终端显示、播报(例如,语音播报)位置关系或发送信息等。
示例的,如图14所示,目标位置为L1、用户当前位置为L2、用户运动方向为c,过目标位置L1向对应的路线做垂线,获得投影点P。根据目标位置为L1、用户当前位置为L2、用户运动方向为c可以确定基于当前的用户运动方向c,目标位置L1位于用户的右侧(即车辆的右侧),或者,根据投影点P为起点目标位置L1为终点确定的第一方向与用户运动方向c的夹角γ为90°,则目标位置L1位于用户的右侧。并将目标位置L1位于用户的右侧的位置关系进行语音播报以提醒用户观测右侧街景来确认目标位置。
在一些实施例中,可以在用户的导航界面显示该位置关系,或者通过语音播报该位置关系,以便于司机能够准确到达目标位置。
在一些实施例中,提醒模块450可以计算用户的当前位置与目标实景点的距离,确定用户的运动进度,并提醒用户该运动进度。例如,提醒模块450可以根据计算用户当前位置与目标实景点坐标的距离,确定用户运动进度条,将该用户运动进度条发送至导航界面进行显示。可选的,用户运动进度条位于实景图的下方。在一些情况下,可以在导航界面对行驶进度进行提示,以便司机可以较为准确地到达目标实景点,并在该目标实景点基于获取的实景图确定目标位置。可以理解的,提醒方式除了显示运动进度条以外,还可以是语音播报进度等方式,本实施例不做限制。
示例的,如图15所示,在导航界面中包括用户当前位置151、目标位置152、目标实景点153、实景图154、用户运动进度条155以及导航路线156。响应于用户当前位置与目标位置之间的距离小于第一阈值,根据用户当前位置、目标位置以及用户运动方向确定目标实景点153的坐标,并获取目标实景点153对应全景图,根据目标实景点坐标和目标位置确定的第一向量、以及用户运动方向对应的第二向量确定视觉角度,将全景图在该视觉角度下的图像作为实景图(也即实景图154)发送至导航界面进行显示,以使得用户可以根据观测实景图154较为准确地确定目标位置,提高任务处理效率。同时,在获取对应的实景图154后,可以计算用户当前位置151与目标实景点153的距离,并在导航界面显示表征该距离的用户运动进度条155,以使得用户可以根据用户运动进度条155来观测实景图154,避免车辆行驶过实景图。
在一些实施例中,提醒模块450可以根据用户的运动进度进行其他相应的提示。例如,当运动进度达到一定阈值(例如90%)时,可以提示用户提前进行减速等操作。
图16是根据说明书一些实施例所示的为用户提供实景图的另一示例性流程图。如图16所示,流程1600包括下述步骤。在一些实施例中,流程1300可以由处理设备(例如,处理设备112)执行。
步骤S1,获取用户当前位置和目标位置。
步骤S2,计算用户当前位置和目标位置之间的距离。可选的,根据用户当前位置坐标和目标位置的坐标计算用户当前位置和目标位置之间的距离。
步骤S3,判断用户当前位置和目标位置之间的距离是否小于第一阈值。在用户当前位置和目标位置之间的距离小于第一阈值时执行步骤S4和/或步骤 S14。在用户当前位置和目标位置之间的距离不小于第一阈值时,执行步骤S2。可选的,在用户当前位置和目标位置之间的距离不小于第一阈值之前,周期性地计算用户当前位置和目标位置之间的距离。
步骤S4,根据API接口调用预定地图服务,获取该预定地图服务根据目标位置返回的至少一个坐标。在一些实施例中,调用预定地图服务获取以目标位置为中心,以第一阈值为半径的范围内的所有候选实景点坐标。
步骤S5,从预定地图服务返回的至少一个坐标中确定距离目标位置最近的坐标。可选的,对预定地图服务返回的至少一个坐标按照距离目标位置的距离由近及远进行排序,获取坐标序列,从坐标序列中确定距离目标位置最近的坐标。
步骤S6,根据矫正模型或矫正算法对该坐标进行矫正,获取对应的候选实景点坐标。
步骤S7,根据候选实景点坐标和目标位置确定第一向量,根据用户运动方向确定第二向量。可选的,第一向量的向量起点为候选实景点,第二向量的向量起点为用户当前位置或候选实景点。
步骤S8,计算第一向量和第二向量的夹角。可选的,通过计算第一向量和第二向量的夹角余弦值确定夹角大小。
步骤S9,判断第一向量和第二向量的夹角是否小于或等于角度阈值。在夹角小于或等于角度阈值时执行步骤S11,在夹角大于角度阈值时执行步骤S10以及步骤S6-步骤S9。可选的,角度阈值为90°。
步骤S10,获取下一个距离目标位置较近的坐标。可选的,从上述坐标序列中获取下一个距离目标位置较近的坐标。
应理解,本实施例以从预定地图服务中获取以目标位置为中心,第一阈值范围内的所有候选实景点坐标为例进行描述,在其他可选的实现方式中,可以在调用预定地图服务时获取一个候选实景点坐标,在该候选实景点坐标不满足预定条件时,再次调用预定地图服务获取下一个距离目标位置较近的一个候选实景点坐标,直至获取满足预定条件的候选实景点坐标。同时,本实施例根据矫正模型或矫正算法对预定地图服务返回的候选实景点坐标依次进行矫正,也即先对距离目标位置最近的候选实景点坐标进行矫正,在距离目标位置最近的候选实景点坐标不满足条件后,再对下一个候选实景点坐标进行矫正。在其他可选的实现方式中,也可以根据矫正模型或矫正算法对预定地图服务返回的所有候选实景点坐 标进行同时矫正,再对迭代执行步骤S7-步骤S9,本实施例并不对获取目标实景点坐标的步骤迭代过程进行限制。
步骤S11,将小于或等于角度阈值的夹角对应的候选实景点坐标确定为目标实景点坐标。
步骤S12,根据目标实景点坐标和对应的夹角确定实景图。在一些实施例中,获取目标实景点坐标对应的全景图,根据第一向量和第二向量的夹角确定视觉角度,将全景图在该视觉角度下的图像确定为目标点坐标对应的实景图。
步骤S13,将实景图发送至导航界面进行显示。例如,导航界面可以司机移动终端的导航界面,也可以为其他车载设备的导航界面,本实施例并不对此进行限制。
在一些实施例中,为用户提供实景图的方法还包括:计算用户当前位置与目标实景点坐标的距离,确定用户运动进度条,将该用户运动进度条发送至导航界面进行显示。在一些实施例中,响应于用户当前位置和目标位置之间的距离小于第一阈值,执行步骤S14-步骤S16。其中,步骤S14-步骤S16用于根据用户运动方向和目标位置确定在用户当前的运动方向上用户与目标位置的位置关系,并显示或播报该位置关系,步骤S4-步骤S13用于确定目标位置周围的实景图并将实景图在导航界面进行显示。应理解,在一些实施例中,用户与目标位置的位置关系的确定播报过程和实景图的获取显示过程可以同时执行,也可以不同时执行,本实施例不做限制。
步骤S14,确定目标位置在对应的路线上的投影点。
步骤S15,根据用户运动方向和投影点的位置确定用户与目标位置的位置关系。
步骤S16,控制终端显示或播报位置关系。
应理解,本说明中的一些实施例并不对执行方法的用户进行限制,本发明实施例的上述各实施方式的方法步骤可以嵌入司机移动终端或其他车载设备的APP中,以通过司机移动终端或其他车载设备执行上述各实施方式的方法步骤来实现本发明实施例。本发明实施例的上述各实施方式的方法步骤也可以存储至对应的服务器中,以通过服务器的处理器执行上述各实施方式的方法步骤,并将获取的实景图和/或位置关系发送至用户(或车载设备)进行显示或播报。
本说明书一些实施例提供为用户提供实景图的方法,以通过在导航界面 显示目标位置周围的实景图,提高目标位置识别的准确性,进而提高任务处理效率。
本说明书实施例还提供一种计算机可读存储介质。所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机实现前述的为用户显示实景图的方法对应的操作。
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述详细披露仅仅作为示例,而并不构成对本说明书的限定。虽然此处并没有明确说明,本领域技术人员可能会对本说明书进行各种修改、改进和修正。该类修改、改进和修正在本说明书中被建议,所以该类修改、改进、修正仍属于本说明书示范实施例的精神和范围。
同时,本说明书使用了特定词语来描述本说明书的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本说明书至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或“一个替代性实施例”并不一定是指同一实施例。此外,本说明书的一个或多个实施例中的某些特征、结构或特点可以进行适当的组合。
此外,本领域技术人员可以理解,本说明书的各方面可以通过若干具有可专利性的种类或情况进行说明和描述,包括任何新的和有用的工序、机器、产品或物质的组合,或对他们的任何新的和有用的改进。相应地,本说明书的各个方面可以完全由硬件执行、可以完全由软件(包括固件、常驻软件、微码等)执行、也可以由硬件和软件组合执行。以上硬件或软件均可被称为“数据块”、“模块”、“引擎”、“单元”、“组件”或“系统”。此外,本说明书的各方面可能表现为位于一个或多个计算机可读介质中的计算机产品,该产品包括计算机可读程序编码。
计算机存储介质可能包含一个内含有计算机程序编码的传播数据信号,例如在基带上或作为载波的一部分。该传播信号可能有多种表现形式,包括电磁形式、光形式等,或合适的组合形式。计算机存储介质可以是除计算机可读存储介质之外的任何计算机可读介质,该介质可以通过连接至一个指令执行系统、装置或设备以实现通讯、传播或传输供使用的程序。位于计算机存储介质上的程序编码可以通过任何合适的介质进行传播,包括无线电、电缆、光纤电缆、RF、或类似介质,或任何上述介质的组合。
本说明书各部分操作所需的计算机程序编码可以用任意一种或多种程序语言编写,包括面向对象编程语言如Java、Scala、Smalltalk、Eiffel、JADE、Emerald、C++、C#、VB.NET、Python等,常规程序化编程语言如C语言、Visual Basic、Fortran2003、Perl、COBOL2002、PHP、ABAP,动态编程语言如Python、Ruby和Groovy,或其他编程语言等。该程序编码可以完全在用户计算机上运行、或作为独立的软件包在用户计算机上运行、或部分在用户计算机上运行部分在远程计算机运行、或完全在远程计算机或处理设备上运行。在后种情况下,远程计算机可以通过任何网络形式与用户计算机连接,比如局域网(LAN)或广域网(WAN),或连接至外部计算机(例如通过因特网),或在云计算环境中,或作为服务使用如软件即服务(SaaS)。
此外,除非权利要求中明确说明,本说明书所述处理元素和序列的顺序、数字字母的使用、或其他名称的使用,并非用于限定本说明书流程和方法的顺序。尽管上述披露中通过各种示例讨论了一些目前认为有用的发明实施例,但应当理解的是,该类细节仅起到说明的目的,附加的权利要求并不仅限于披露的实施例,相反,权利要求旨在覆盖所有符合本说明书实施例实质和范围的修正和等价组合。例如,虽然以上所描述的系统组件可以通过硬件设备实现,但是也可以只通过软件的解决方案得以实现,如在现有的处理设备或移动设备上安装所描述的系统。
同理,应当注意的是,为了简化本说明书披露的表述,从而帮助对一个或多个发明实施例的理解,前文对本说明书实施例的描述中,有时会将多种特征归并至一个实施例、附图或对其的描述中。但是,这种披露方法并不意味着本说明书对象所需要的特征比权利要求中提及的特征多。实际上,实施例的特征要少于上述披露的单个实施例的全部特征。
一些实施例中使用了描述成分、属性数量的数字,应当理解的是,此类用于实施例描述的数字,在一些示例中使用了修饰词“大约”、“近似”或“大体上”来修饰。除非另外说明,“大约”、“近似”或“大体上”表明所述数字允许有±20%的变化。相应地,在一些实施例中,说明书和权利要求中使用的数值参数均为近似值,该近似值根据个别实施例所需特点可以发生改变。在一些实施例中,数值参数应考虑规定的有效数位并采用一般位数保留的方法。尽管本说明书一些实施例中用于确认其范围广度的数值域和参数为近似值,在具体实施例中,此类数值的设定在可行范围内尽可能精确。
针对本说明书引用的每个专利、专利申请、专利申请公开物和其他材料,如文章、书籍、说明书、出版物、文档等,特此将其全部内容并入本说明书作为参考。与本说明书内容不一致或产生冲突的申请历史文件除外,对本说明书权利要求最广范围有限制的文件(当前或之后附加于本说明书中的)也除外。需要说明的是,如果本说明书附属材料中的描述、定义、和/或术语的使用与本说明书所述内容有不一致或冲突的地方,以本说明书的描述、定义和/或术语的使用为准。
最后,应当理解的是,本说明书中所述实施例仅用以说明本说明书实施例的原则。其他的变形也可能属于本说明书的范围。因此,作为示例而非限制,本说明书实施例的替代配置可视为与本说明书的教导一致。相应地,本说明书的实施例不仅限于本说明书明确介绍和描述的实施例。

Claims (24)

  1. A method for providing a user with a real-scene image, comprising:
    acquiring at least one candidate real scenic spot based on a target location;
    judging, based on the target location, a movement direction of the user, and the at least one candidate real scenic spot, whether a preset condition is satisfied;
    in response to the preset condition being satisfied, determining a target real scenic spot from the candidate real scenic spots for which the preset condition is satisfied;
    determining, based on the target real scenic spot, a real-scene image corresponding to the target location; and displaying the real-scene image on a navigation interface associated with the user.
  2. The method of claim 1, wherein the judging, based on the target location, the movement direction of the user, and the at least one candidate real scenic spot, whether a preset condition is satisfied comprises:
    determining a first direction based on the candidate real scenic spot and the target location;
    judging, according to an angle between the first direction and the movement direction of the user, whether the preset condition is satisfied.
  3. The method of claim 2, wherein a starting point of the first direction is the candidate real scenic spot, and the preset condition is that the angle between the first direction and the movement direction of the user is less than or equal to an angle threshold, the angle threshold being equal to or less than 90°.
  4. The method of claim 1, wherein the preset condition comprises:
    the candidate real scenic spot is located on a target route segment, the target route segment being determined based on a projection point of the target location on the route where the user is located, the current position of the user, and the movement direction.
  5. The method of claim 1, wherein the judging, based on the target location, the movement direction of the user, and the at least one candidate real scenic spot, whether a preset condition is satisfied comprises:
    judging, in order of the distances between the at least one candidate real scenic spot and the target location, and based on the movement direction, the target location, and the candidate real scenic spot, whether the preset requirement is satisfied;
    stopping the judgment when the judgment result is that the requirement is satisfied; otherwise making the judgment for another candidate real scenic spot.
  6. The method of claim 1, wherein the displaying the real-scene image on the navigation interface associated with the user comprises:
    determining a display direction and/or included angle of the real-scene image based on the target location and the target real scenic spot;
    displaying the real-scene image on the navigation interface based on the display direction and/or included angle.
  7. The method of claim 1, wherein the displaying the real-scene image on the navigation interface associated with the user comprises:
    determining a reduction or enlargement parameter of the real-scene image based on the current position of the user and the target location;
    displaying the real-scene image on the navigation interface based on the reduction or enlargement parameter.
  8. The method of claim 1, wherein the acquiring at least one candidate real scenic spot based on a target location comprises:
    acquiring, based on the target location, at least one candidate real scenic spot to be corrected;
    correcting the at least one candidate real scenic spot to be corrected using a correction algorithm to obtain the at least one candidate real scenic spot.
  9. The method of claim 1, further comprising:
    judging, based on the current position of the user and the target location, whether a trigger condition is satisfied; and determining the target real scenic spot only when the trigger condition is satisfied.
  10. The method of claim 1, further comprising:
    determining a projection point of the target location on the route where the target real scenic spot is located;
    determining a positional relationship between the user and the target location according to the movement direction and the projection point;
    reminding the user of the positional relationship.
  11. The method of claim 1, further comprising:
    calculating a distance between the current position of the user and the target real scenic spot to determine a movement progress of the user;
    reminding the user of the movement progress.
  12. A system for providing a user with a real-scene image, comprising:
    an acquisition module configured to acquire at least one candidate real scenic spot based on a target location;
    a judgment module configured to judge, based on the target location, a movement direction of the user, and the at least one candidate real scenic spot, whether a preset condition is satisfied;
    a determination module configured to determine, in response to the preset condition being satisfied, a target real scenic spot from the candidate real scenic spots for which the preset condition is satisfied;
    a display module configured to determine, based on the target real scenic spot, a real-scene image corresponding to the target location, and to display the real-scene image on a navigation interface associated with the user.
  13. The system of claim 12, wherein the judgment module is further configured to:
    determine a first direction based on the candidate real scenic spot and the target location;
    judge, according to an angle between the first direction and the movement direction of the user, whether the preset condition is satisfied.
  14. The system of claim 13, wherein a starting point of the first direction is the candidate real scenic spot, and the preset condition is that the angle between the first direction and the movement direction of the user is less than or equal to an angle threshold, the angle threshold being equal to or less than 90°.
  15. The system of claim 12, wherein the preset condition comprises:
    the candidate real scenic spot is located on a target route segment, the target route segment being determined based on a projection point of the target location on the route where the user is located, the current position of the user, and the movement direction.
  16. The system of claim 12, wherein the judgment module is further configured to:
    judge, in order of the distances between the at least one candidate real scenic spot and the target location, and based on the movement direction, the target location, and the candidate real scenic spot, whether the preset requirement is satisfied;
    stop the judgment when the judgment result is that the requirement is satisfied; otherwise make the judgment for another candidate real scenic spot.
  17. The system of claim 12, wherein the display module is further configured to:
    determine a display direction and/or included angle of the real-scene image based on the target location and the target real scenic spot;
    display the real-scene image on the navigation interface based on the display direction and/or included angle.
  18. The system of claim 12, wherein the display module is further configured to:
    determine a reduction or enlargement parameter of the real-scene image based on the current position of the user and the target location;
    display the real-scene image on the navigation interface based on the reduction or enlargement parameter.
  19. The system of claim 12, wherein the acquisition module is further configured to:
    acquire, based on the target location, at least one candidate real scenic spot to be corrected;
    correct the at least one candidate real scenic spot to be corrected using a correction algorithm to obtain the at least one candidate real scenic spot.
  20. The system of claim 12, wherein the judgment module is further configured to:
    judge, based on the current position of the user and the target location, whether a trigger condition is satisfied; and determine the target real scenic spot only when the trigger condition is satisfied.
  21. The system of claim 12, further comprising a reminder module configured to:
    determine a projection point of the target location on the route where the target real scenic spot is located;
    determine a positional relationship between the user and the target location according to the movement direction and the projection point;
    remind the user of the positional relationship.
  22. The system of claim 12, further comprising a reminder module configured to:
    calculate a distance between the current position of the user and the target real scenic spot to determine a movement progress of the user;
    remind the user of the movement progress.
  23. An apparatus for providing a user with a real-scene image, the apparatus comprising a processor and a memory, the memory being configured to store instructions, wherein the processor is configured to execute the instructions to implement the operations corresponding to the method of displaying a real-scene image for a user according to any one of claims 1 to 11.
  24. A computer-readable storage medium storing computer instructions, wherein the computer instructions, when executed by a processor, implement the operations corresponding to the method of displaying a real-scene image for a user according to any one of claims 1 to 11.