WO2021253996A1 - A method and system for providing a user with real-scene images - Google Patents
A method and system for providing a user with real-scene images
- Publication number
- WO2021253996A1 (application PCT/CN2021/090487, CN2021090487W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- real
- candidate
- image
- target
- scenic spot
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3407—Route searching; Route guidance specially adapted for specific applications
- G01C21/3438—Rendez-vous, i.e. searching a destination where several users can meet, and the routes to this destination for these users; Ride sharing, i.e. searching a route such that at least two users can share a vehicle for at least part of the route
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3667—Display of a road map
- G01C21/367—Details, e.g. road map scale, orientation, zooming, illumination, level of detail, scrolling of road map or positioning of current position marker
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/40—Correcting position, velocity or attitude
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
Definitions
- the present invention relates to the field of computer technology, and in particular to a method and system for providing users with real-scene images.
- With the continuous changes of urban roads, more and more users need to use navigation systems. Generally, a user can follow the navigation route planned by the navigation system to reach the desired target location. However, many factors affect how accurately the user can find the target location, such as inaccurate positioning by the navigation system or the user's unfamiliarity with the target location. If the user can be provided with a real-scene image of the navigation destination or transit point, the navigation system can be optimized to better help the user reach the target location.
- the embodiments of this specification propose a method and system for providing a user with real-scene images, which can accurately guide the user to the target location.
- An aspect of the embodiments of this specification provides a method for providing a user with real-scene images, including: determining, based on a candidate location, at least one first real-scene image corresponding to the candidate location, and displaying the at least one first real-scene image on a navigation interface related to the user; in response to a confirmation operation by the user on one or more of the at least one first real-scene image, determining the candidate location corresponding to the confirmed first real-scene image as the target location; determining a target real scenic spot based at least on the target location; and determining, based at least on the target real scenic spot, a second real-scene image corresponding to the target location, and displaying the second real-scene image on the navigation interface.
- One aspect of the embodiments of this specification provides a system for providing a user with real-scene images, including: a first display module, configured to determine, based on a candidate location, at least one first real-scene image corresponding to the candidate location, and to display the at least one first real-scene image on a navigation interface related to the user; a first determining module, configured to, in response to a confirmation operation by the user on one or more of the at least one first real-scene image, determine the candidate location corresponding to the confirmed first real-scene image as the target location; a second determining module, configured to determine a target real scenic spot based at least on the target location; and a second display module, configured to determine, based at least on the target real scenic spot, a second real-scene image corresponding to the target location, and to display the second real-scene image on the navigation interface.
- the device includes a processor and a memory.
- the memory is used to store instructions; when the instructions are executed, the operations corresponding to the method for displaying real-scene images for the user as described in any of the preceding items are performed.
- One aspect of the embodiments of this specification provides a computer-readable storage medium that stores computer instructions. After a computer reads the computer instructions in the storage medium, the computer can perform the operations corresponding to the method for displaying real-scene images for a user as described in any of the preceding items.
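- Purely for orientation, the claimed four-step flow can be sketched as follows. Every callable passed in (find_first_images, wait_for_confirmation, nearest_real_spot, image_for_spot, show) is a hypothetical placeholder introduced for illustration, not an API defined by this application.

```python
from typing import Callable, Sequence, Tuple

LatLng = Tuple[float, float]

def provide_real_scene_images(
    candidate_location: LatLng,
    find_first_images: Callable[[LatLng], Sequence[dict]],
    wait_for_confirmation: Callable[[Sequence[dict]], dict],
    nearest_real_spot: Callable[[LatLng], LatLng],
    image_for_spot: Callable[[LatLng, LatLng], dict],
    show: Callable[[object], None],
) -> dict:
    """Orchestrates the four claimed steps; every callable is an injected placeholder."""
    # Step 1: determine and display the first real-scene image(s) for the candidate location.
    first_images = find_first_images(candidate_location)
    show(first_images)
    # Step 2: the candidate location of the image the user confirms becomes the target location.
    confirmed = wait_for_confirmation(first_images)
    target_location = confirmed["candidate_location"]
    # Step 3: determine the target real scenic spot based at least on the target location.
    target_spot = nearest_real_spot(target_location)
    # Step 4: determine and display the second real-scene image for the target location.
    second_image = image_for_spot(target_spot, target_location)
    show(second_image)
    return second_image
```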
- Fig. 1 is a schematic diagram of an application scenario of a real-view image providing system according to some embodiments of this specification
- Fig. 2 is a schematic diagram of an exemplary computing device according to some embodiments of this specification.
- Fig. 3 is a schematic diagram of exemplary hardware and/or software of a mobile device according to some embodiments of the present specification
- Figure 4 is a block diagram of a system for providing users with real-world images according to some embodiments of this specification
- FIG. 5 is a flowchart of a method for providing a user with a real-world image according to some embodiments of this specification
- Fig. 6a is an exemplary schematic diagram of displaying the first real image on the navigation interface according to some embodiments of this specification
- Fig. 6b is another exemplary diagram of displaying the first real image on the navigation interface according to some embodiments of this specification
- FIG. 7 is a flowchart of displaying each of at least one first real-view image on a navigation interface according to some embodiments of the present specification
- FIG. 8 is an exemplary schematic diagram of the display direction and angle of the first real image according to some embodiments of the present specification.
- Fig. 9 is a flowchart of a method for displaying a second real-view image on a navigation interface based on a display direction and/or angle according to some embodiments of the present specification
- FIG. 10 is an exemplary schematic diagram of the display direction and angle of the second real image according to some embodiments of the present specification.
- FIG. 11 is another flowchart of a method for displaying a second real-view image on a navigation interface based on a reduction or zoom-in parameter according to some embodiments of the present specification
- Fig. 12a is an exemplary schematic diagram of displaying a second real-scene image on the navigation interface based on the display direction and/or angle according to some embodiments of the present specification; Fig. 12b is another exemplary schematic diagram of displaying the second real-scene image on the navigation interface based on the display direction and/or angle;
- FIG. 13 is another flowchart of a method for providing a user with a real-world image according to some embodiments of this specification
- FIG. 14 is a schematic diagram of determining the coordinates of a target real scenic spot according to some embodiments of this specification.
- Fig. 15a is an exemplary schematic diagram of a navigation interface according to some embodiments of this specification
- Fig. 15b is another exemplary schematic diagram of a navigation interface according to some embodiments of this specification
- Fig. 16 is another flowchart of a method for providing a user with a real-world image according to some embodiments of the present specification.
- the term "system" used in this specification is a way of distinguishing different components, elements, parts, portions, or assemblies at different levels; however, if other words can achieve the same purpose, the words can be replaced by other expressions.
- Complicated road conditions near the target location may cause the user to overshoot the point and deviate from the route, which may result in the user driving around the target location for a long time without being able to find it accurately.
- In an online car-hailing scenario, passengers need to go to the boarding point via the terminal navigation interface and wait for the car there. If the road conditions near the target location are complicated, the passenger may not be able to identify the pick-up point accurately and thus cannot meet the car, which in turn leads to low task processing efficiency and severely affects the user experience.
- Fig. 1 is a schematic diagram of an application scenario of a real-view image providing system according to some embodiments of this specification.
- the real map providing system 100 can be applied to a map service system, a navigation system, a transportation system, a traffic service system, and the like.
- the real-view image providing system 100 can be applied to an online service platform that provides Internet services.
- the real-world image providing system 100 can be applied to online car-hailing services, such as taxi calls, express calls, private car calls, minibus calls, carpooling, bus services, driver hire, and pick-up services.
- the real-world image providing system 100 may also be applied to driving services, express delivery, takeaway, and the like.
- the real image providing system 100 may be an online service platform, including a server 110, a network 120, a terminal 130, and a database 140.
- the server 110 may include a processing device 112.
- the server 110 may be used to process information and/or data related to providing real-world images for users.
- the server 110 may be an independent server or a server group.
- the server group may be centralized or distributed (for example, the server 110 may be a distributed system).
- the server 110 may be local or remote.
- the server 110 may access information and/or data stored in the terminal 130 and the database 140 via the network 120.
- the server 110 can be directly connected to the terminal 130 and the database 140 to access the information and/or data stored therein.
- the server 110 may be executed on a cloud platform.
- the cloud platform may include one or any combination of private cloud, public cloud, hybrid cloud, community cloud, decentralized cloud, internal cloud, etc.
- the server 110 may include a processing device 112.
- the processing device 112 may process data and/or information related to providing a user with a real-world image to perform one or more functions described in this specification. For example, the processing device 112 may determine at least one first real image corresponding to the candidate position based on the candidate position. For another example, the processing device 112 may, in response to a user's confirmation operation on one or more of the at least one first real image, determine the confirmed candidate position corresponding to the first real image as the target position. For another example, the processing device 112 may determine the target real scenic spot based on the target location.
- the processing device 112 may determine the second real-world image corresponding to the target location based on the target real-world scenic spot, and display the second real-world image on the navigation interface.
- the processing device 112 may include one or more sub-processing devices (for example, a single-core processing device or a multi-core processing device).
- the processing device 112 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction processor (ASIP), a graphics processing unit (GPU), a physical processing unit (PPU), a digital signal processor (DSP), a field programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, etc., or any combination thereof.
- the network 120 may facilitate the exchange of data and/or information. The data or information may include real-scene images stored in the database 140 (such as the first real-scene image and the second real-scene image), the target location or target real scenic spot determined by the server 110, etc.
- one or more components of the real-scene image providing system 100 (for example, the server 110, the terminal 130, and the database 140) may exchange data and/or information with other components via the network 120.
- the network 120 may be any type of wired or wireless network.
- the network 120 may include a cable network, a wired network, a fiber-optic network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, a near-field communication (NFC) network, etc., or any combination thereof.
- the network 120 may include one or more network entry and exit points.
- the network 120 may include wired or wireless network access points, such as base stations and/or Internet exchange points 120-1, 120-2, ...; through these access points, one or more components of the real-scene image providing system 100 can be connected to the network 120 to exchange data and/or information.
- the user of the terminal 130 may be a service provider.
- the service provider may be an online ride-hailing driver, a food delivery person, a courier, and so on.
- the user of the terminal 130 may also be a service user.
- the service user may include a map service user, a navigation service user, a transportation service user, and so on.
- the service user may be a passenger of an online ride-hailing vehicle.
- the terminal 130 may include one or any combination of a mobile device 130-1, a tablet computer 130-2, a notebook computer 130-3, a vehicle built-in device (not shown), etc.
- the mobile device 130-1 may include a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, etc., or any combination thereof.
- the wearable device may include a smart bracelet, smart footwear, smart glasses, smart helmets, smart watches, smart clothes, smart backpacks, smart accessories, etc., or any combination thereof.
- the smart mobile device may include a smart phone, a personal digital assistant (PDA), a game device, a navigation device, a POS device, etc., or any combination thereof.
- the virtual reality device and/or augmented reality device may include a virtual reality helmet, virtual reality glasses, virtual reality goggles, an augmented reality helmet, augmented reality glasses, augmented reality goggles, etc., or any combination thereof.
- the built-in device of the motor vehicle may include a car navigator, a car locator, a driving recorder, etc., or any combination thereof.
- the terminal 130 may include a device with a positioning function to determine the location of the user and/or the terminal 130.
- the terminal 130 may include a device with an interface display to display real-scene images for the user of the terminal.
- the terminal 130 may include a device having an input function for the user to input a target location.
- the database 140 may store data and/or instructions. In some embodiments, the database 140 may store information obtained from the terminal 130. In some embodiments, the database 140 may store information and/or instructions executed or used by the server 110 to perform the exemplary methods described in this specification. In some embodiments, the database 140 may store real scenic spots (that is, location information of the real scenic spots, for example, latitude and longitude coordinates), real-scene images corresponding to the real scenic spots, display angles or directions of the real-scene images, correction algorithms, etc. In some embodiments, the database 140 may include mass memory, removable memory, volatile read-write memory (for example, random access memory (RAM)), read-only memory (ROM), etc., or any combination thereof. In some embodiments, the database 140 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, public cloud, hybrid cloud, community cloud, distributed cloud, internal cloud, etc., or any combination thereof.
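- As a concrete (and entirely hypothetical) illustration of what such a stored real scenic spot record might look like, consider the following sketch; the field names are assumptions introduced for illustration and do not come from the application.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RealScenicSpot:
    """Hypothetical record for one real scenic spot as it might be stored in database 140."""
    spot_id: str
    lat: float                                   # latitude of the collection point
    lng: float                                   # longitude of the collection point
    panorama_uri: str                            # where the 360-degree real-scene (panoramic) image lives
    default_heading_deg: Optional[float] = None  # preferred display direction, degrees clockwise from north
    default_pitch_deg: Optional[float] = None    # preferred display angle (elevation)
```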
- the database 140 may be connected to the network 120 to communicate with one or more components of the system 100 (for example, the server 110, the terminal 130, etc.).
- One or more components of the system 100 can access data or instructions stored in the database 140 via the network 120.
- the server 110 may obtain a real scenic spot or a real-world image corresponding to the real scenic spot from the database 140 and perform corresponding processing.
- the database 140 may directly connect or communicate with one or more components (eg, the server 110 and the terminal 130) in the system 100.
- the database 140 may be part of the server 110.
- Fig. 2 is a schematic diagram of an exemplary computing device according to some embodiments of the present specification.
- the server 110 and/or the terminal 130 may be implemented on the computing device 200.
- the functions of the processing device 112 disclosed in this specification may be implemented and executed on the computing device 200.
- the computing device 200 may include a bus 210, a processor 220, a read-only memory 230, a random access memory 240, a communication port 250, an input/output interface 260, and a hard disk 270.
- the processor 220 can execute calculation instructions (program codes) and execute the functions of the real-world image providing system 100 described in this specification.
- the calculation instructions may include programs, objects, components, data structures, procedures, modules, and functions (the functions refer to the specific functions described in this specification).
- the processor 220 may process image or text data obtained from any other components of the real-view image providing system 100.
- the processor 220 may include a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application-specific integrated circuit (ASIC), an application-specific instruction set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physical processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device, any circuit or processor that performs one or more functions, etc., or any combination thereof.
- For convenience of description, the computing device 200 in FIG. 2 only depicts one processor, but it should be noted that the computing device 200 in this specification may also include multiple processors.
- the memory of the computing device 200 may store data/information acquired from any other components of the reality map providing system 100.
- exemplary ROMs may include mask ROM (MROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), compact disc ROM (CD-ROM), digital versatile disc ROM, etc.
- Exemplary RAM may include dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR SDRAM), static RAM (SRAM), thyristor RAM (T-RAM), zero-capacitor RAM (Z-RAM), and the like.
- the input/output interface 260 may be used to input or output signals, data or information. In some embodiments, the input/output interface 260 may enable the user to communicate with the real-view image providing system 100. In some embodiments, the input/output interface 260 may include an input device and an output device. Exemplary input devices may include a keyboard, a mouse, a touch screen, a microphone, etc., or any combination thereof. Exemplary output devices may include display devices, speakers, printers, projectors, etc., or any combination thereof. Exemplary display devices may include liquid crystal displays (LCD), light emitting diode (LED)-based displays, flat panel displays, curved displays, television equipment, cathode ray tubes (CRT), etc., or any combination thereof.
- the communication port 250 can be connected to a network for data communication.
- the connection may be a wired connection, a wireless connection, or a combination of both.
- Wired connections can include cables, optical cables, or telephone lines, etc., or any combination thereof.
- the wireless connection may include Bluetooth, Wi-Fi, WiMax, WLAN, ZigBee, mobile networks (for example, 3G, 4G, or 5G, etc.), etc., or any combination thereof.
- the communication port 250 may be a standardized port, such as RS232, RS485, and so on.
- the communication port 250 may be a specially designed port.
- Fig. 3 is a schematic diagram of exemplary hardware and/or software of a mobile device according to some embodiments of the present specification.
- the mobile device 300 may include a communication unit 310, a display unit 320, a graphics processing unit (GPU) 330, a central processing unit (CPU) 340, an input/output unit 350, a memory 360, a storage unit 370, and the like.
- the operating system 361 may be, for example, iOS, Android, Windows Phone, etc.
- the application program 362 may include a browser or an application program for receiving text, image, audio, or other related information from the real-view image providing system 100.
- a computing device or a mobile device can be used as a hardware platform for one or more components described in this specification.
- the hardware components, operating systems, and programming languages of these computers or mobile devices are conventional in nature, and those skilled in the art, after becoming familiar with these technologies, can adapt them to the real-scene image providing system described in this specification.
- a computer with user interface elements can be implemented as a personal computer (PC) or another type of workstation or terminal device; if properly programmed, the computer can also act as a server.
- Fig. 4 is a block diagram of a system for providing users with real-world images according to some embodiments of the present specification.
- a system (such as the processing device 112) that provides a user with a real-world image may include a first display module 410, a first determination module 420, a second determination module 430, and a second display module 440.
- the first display module 410 may be configured to determine at least one first real scene image corresponding to the candidate position based on the candidate position, and display the at least one first real scene image on a navigation interface related to the user.
- the first display module 410 is further configured to: determine at least one first candidate real scenic spot based on the candidate location; and obtain the at least one first real-scene image based on the at least one first candidate real scenic spot.
- the first display module 410 is further configured to: for each of the at least one first real-scene image, determine the display direction and/or angle of the first real-scene image based on the first candidate real scenic spot corresponding to the first real-scene image and the candidate location; and display the first real-scene image on the navigation interface based on the display direction and/or angle.
- the first display module 410 is further configured to: perform preprocessing on the first real-scene image before displaying it on the navigation interface; wherein the preprocessing includes one or a combination of: thumbnailing (zooming out), adjusting resolution, adjusting brightness, and adjusting saturation.
- the first determining module 420 may be configured to respond to a user's confirmation operation on one or more of the at least one first real image, and determine the candidate position corresponding to the confirmed first real image as a target position.
- the second determining module 430 may be configured to determine a target real scenic spot based at least on the target location.
- the second determining module 430 may be further configured to: obtain at least one second candidate real scenic spot based on the target location; determine, based on the target location and the second candidate real scenic spot, whether a preset condition is satisfied; and, in response to the preset condition being satisfied, determine the target real scenic spot from the second candidate real scenic spot for which the preset condition is satisfied.
- the second determining module 430 may also be used to determine whether the distance between the target location and the second candidate real scenic spot is less than a preset threshold.
- the second determining module 430 may also be used to: obtain at least one second candidate real scenic spot based on the target location; and determine the target real scenic spot based on the location and movement direction of the predetermined vehicle and the target location; wherein the predetermined vehicle is a vehicle that goes to the target location to meet the user.
- the second determining module 430 may be further configured to: obtain at least one second candidate real scenic spot to be corrected based on the target position; use a correction algorithm to correct the at least one second candidate real scenic spot to be corrected To obtain the at least one second candidate real scenic spot.
- the second display module 440 may be configured to determine a second real-world image corresponding to the target location based at least on the target real-world spot, and display the second real-world image on the navigation interface.
- the second display module 440 may also be used to: determine the display direction and/or angle of the second real-scene image based on the target real scenic spot and the target location; and display the second real-scene image on the navigation interface based on the display direction and/or angle.
- the second display module 440 may also be used to: determine a first direction and/or a first angle based on the target real scenic spot and the target location; and use the first direction and/or the first angle, or a second direction and/or a second angle obtained by rotating the first direction and/or the first angle by 180°, as the direction and/or angle for displaying the second real-scene image on the navigation interface.
- the second display module 440 may be further configured to: in response to a display direction and/or angle switching instruction, display the second real-scene image on the navigation interface according to the switched display direction and/or angle.
- the second display module 440 may also be used to: determine a zoom-out or zoom-in parameter of the second real-scene image based on the user's current location and the target location; and display the second real-scene image on the navigation interface based on the zoom-out or zoom-in parameter.
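- As an illustration of the two display parameters just described, a minimal sketch follows. The 50 m / 500 m breakpoints and the linear interpolation are assumptions; the application does not specify how the zoom-out or zoom-in parameter is derived from the distance between the user's current location and the target location.

```python
def zoom_parameter(distance_m, near_m=50.0, far_m=500.0,
                   max_zoom=2.0, min_zoom=0.5):
    """Hypothetical mapping from the user-to-target distance to a zoom factor:
    zoom in when the user is close, zoom out when far away."""
    d = max(near_m, min(far_m, distance_m))
    t = (d - near_m) / (far_m - near_m)           # 0 (near) .. 1 (far)
    return max_zoom + t * (min_zoom - max_zoom)   # linear interpolation

def flipped_heading(heading_deg):
    """Second display direction obtained by rotating the first one by 180 degrees."""
    return (heading_deg + 180.0) % 360.0
```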
- system and its modules shown in FIG. 4 can be implemented in various ways.
- the system and its modules may be implemented by hardware, software, or a combination of software and hardware.
- the hardware part can be implemented using dedicated logic;
- the software part can be stored in a memory and executed by an appropriate instruction execution system, such as a microprocessor or dedicated design hardware.
- such methods and systems may also be implemented using computer-executable instructions and/or processor control codes, provided, for example, on a carrier medium such as a disk, CD or DVD-ROM, on a programmable memory such as a read-only memory (firmware), or on a data carrier such as an optical or electronic signal carrier.
- the system and its modules in this specification can be implemented not only by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (for example, firmware).
- the above description of the system and its modules for providing users with real-world images is only for convenience of description, and does not limit this specification to the scope of the examples mentioned. It can be understood that for those skilled in the art, after understanding the principle of the system, it is possible to arbitrarily combine various modules, or form a subsystem to connect with other modules without departing from this principle.
- the first display module 410, the first determination module 420, the second determination module 430, and the second display module 440 disclosed in FIG. 4 may be different modules in one system, or a single module may implement the functions of two or more of the above modules.
- each module in a system that provides users with real-scene images may share one storage module, or each module may have its own storage module. Such variations are all within the protection scope of this specification.
- Fig. 5 is a flowchart of a method for providing a user with a real-world image according to some embodiments of the present specification. As shown in FIG. 5, the process 500 includes the following steps. In some embodiments, the process 500 may be executed by a processing device (for example, the processing device 112).
- Step 510 Based on the candidate position, determine at least one first real scene image corresponding to the candidate position, and display the at least one first real scene image on a navigation interface related to the user. In some embodiments, this step 510 may be performed by the first display module 410.
- the candidate location refers to a candidate for the target location that the user needs to go to.
- the user (i.e., the passenger) can be any user who uses maps or navigation, for example, a passenger in the shared-mobility field.
- the user can input the candidate position in the user terminal manually or by voice input, or select the candidate position from the recommendation list, or drag the positioning icon to determine the candidate position. This embodiment does not limit the acquisition of candidate positions.
- the first display module 410 may directly read the candidate positions from the memory.
- the candidate location may include location information.
- the location information may include, but is not limited to, name information and coordinate information.
- the coordinate information may include latitude and longitude coordinate information, for example, GNSS coordinates or GPS (Global Positioning System, Global Positioning System) coordinates.
- although the description refers to processing based on the candidate location (for example, determining a distance or a positional relationship, or acquiring an image), the processing is actually performed based on the location information of the candidate location.
- similarly, certain specific nouns in this specification can be understood as including the information related to that noun, for example, the real scenic spot, the first candidate real scenic spot, the second candidate real scenic spot, the target real scenic spot, the target location, the current location of the user, the real-scene image, and so on; operations involving these nouns are actually performed on the information related to them, and this will not be described in detail again later.
- the first real scene image corresponding to the candidate location refers to the image that can represent the surrounding environment of the candidate location.
- the image content in the first real image may be completely or partially the same as the surrounding environment of the candidate location.
- the first display module 410 may obtain the first real scene image based on the candidate location and historical data.
- the first display module 410 may obtain historical data from a storage device (for example, the database 140), and use the image displayed for the candidate position in the historical data as the first real scene image.
- that is, the first real-scene image may be determined based on images previously displayed to other users.
- the first display module 410 may obtain the first real scene image based on real sights that are related to the candidate location or whose relationship with the candidate location meets the requirements.
- the actual scenic spot refers to an actual shooting point or actual collection point for shooting a real environment (for example, a city, a business district, or a street, etc.).
- the actual point of view is a point on the road.
- a real scenic spot is determined every certain distance (for example, 5 meters), and real scene shooting is performed.
- An image taken based on a real scenic spot or an image obtained after processing the taken image is called a real scene image.
- images are collected from the real spot in various directions, that is, the real scene can be a 360° panoramic image, which can also be called a panoramic image.
- a real-scene image (or panoramic image) is taken by shooting a full circle horizontally and vertically with the real scenic spot as the center.
- the real scene image (or panoramic image) can be obtained in advance after continuous collection and processing on the road network by a panoramic collection vehicle.
- the real scene map (or panoramic picture) uses images collected by a panoramic camera at the corresponding real scenic spot.
- the real sights and their corresponding real maps can be obtained from a predetermined map service, for example, by calling a predetermined map service through an API interface.
- the real sights and their corresponding real pictures can also be directly obtained from the storage device (for example, the database 140).
- Embedded in the terminal or server are the coordinates of multiple real sight spots and their corresponding panoramic images, that is, the real scene images can also be directly read from the terminal or server. There are other ways to obtain the real scenic spot and its corresponding real map, which is not limited in this embodiment.
- the first display module 410 may determine at least one first candidate real scenic spot based on the candidate location; and acquire at least one first real-world image based on the at least one first candidate real scenic spot.
- the first candidate real scenic spot refers to a real scenic spot related to the candidate location.
- the first candidate real scenic spot refers to a real scenic spot whose relationship with the candidate location meets a preset requirement.
- the preset requirements may include, but are not limited to, one or more of the following: the distance between the first candidate real scenic spot and the candidate location is less than a threshold; the distance is the smallest; there is no obstruction (for example, a building or a construction site) between the first candidate real scenic spot and the candidate location. It is understandable that, based on the preset requirements, there may be one or more first candidate real scenic spots. In some cases, the foregoing preset requirements ensure that the content related to the candidate location shown in the first real-scene image is sufficient for the user to determine, according to the first real-scene image, whether the candidate location is the target location.
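- A minimal sketch of the distance-based preset requirement follows, assuming WGS-84 coordinates and a hypothetical 50 m threshold; the obstruction check mentioned above is not modeled here.

```python
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lng1, lat2, lng2):
    """Great-circle distance in meters between two latitude/longitude coordinates."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def first_candidate_spots(candidate, spots, max_dist_m=50.0):
    """Keep the real scenic spots within max_dist_m of the candidate location,
    ordered from nearest to farthest; `candidate` and `spots` are (lat, lng) tuples."""
    nearby = [(haversine_m(*candidate, *s), s) for s in spots]
    return [s for d, s in sorted(nearby) if d <= max_dist_m]
```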
- the real scene image is obtained based on the shooting of the real scenic spot, that is, there is a corresponding real scene image of the real spot, for example, the real scene image corresponds to the location information of the real spot.
- a real map corresponding to at least one first candidate real spot can be obtained, and at least one first real map corresponding to the candidate location can be determined based on the real map of the at least one first candidate real spot .
- the real scene image of the first candidate real scenic spot is processed, and the real scene image obtained after the processing is used as the first real scene image.
- the processing includes, but is not limited to, image processing methods such as zooming in, zooming out, cropping, adjusting resolution, adjusting saturation, or adjusting brightness.
- the first display module 410 may use, as the first real-scene image, a partial image at a specific viewing angle of the real-scene image of the first candidate real scenic spot (or of that real-scene image after processing), where the image under the specific viewing angle contains the candidate location.
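- One way to obtain such a viewing angle is to aim the view from the first candidate real scenic spot towards the candidate location; the sketch below computes that display direction as the great-circle initial bearing. The formula choice is an assumption for illustration; the application does not prescribe a particular method.

```python
import math

def display_heading(spot_lat, spot_lng, loc_lat, loc_lng):
    """Initial bearing (degrees clockwise from north) from the candidate real
    scenic spot towards the candidate location, usable as a display direction."""
    phi1, phi2 = math.radians(spot_lat), math.radians(loc_lat)
    dlmb = math.radians(loc_lng - spot_lng)
    x = math.sin(dlmb) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0
```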
- the first display module 410 may display one or more of the determined at least one first real image on a navigation interface related to the user. For example, if there are multiple first candidate real scenic spots, one is randomly selected or the real map of the first real scenic spot with the smallest distance from the candidate location is selected, and displayed based on the real map. For another example, display based on the real images of all the first candidate real scenic spots.
- the first real-scene image displayed on the navigation interface may show part or all of the surroundings of the first candidate real scenic spot.
- the first display module 410 can obtain the real-scene thumbnail corresponding to the candidate location; that is, the real-scene thumbnail corresponding to the candidate location is taken as the first real-scene image corresponding to the candidate location and displayed on the navigation interface. Here, the real-scene thumbnail is used to display a thumbnail of the street scene around the candidate location. For example, a predetermined map service is called to obtain the coordinates of the real scenic spot closest to the candidate location, and the real-scene thumbnail is determined according to the coordinates of that real scenic spot; the real-scene thumbnail is a thumbnail of a street-scene image collected with the real scenic spot as the collection point.
- the map service is called through the API interface to obtain the coordinates of the real scenic spot and the corresponding panoramic image returned by the map service according to the candidate location, and image processing is performed on the panoramic image to determine the corresponding real-scene thumbnail.
- multiple real-view point coordinates and their corresponding panoramic images may also be stored in the memory of the terminal or server, so as to determine the corresponding real-view thumbnail image according to the first candidate real-view point coordinates corresponding to the candidate position.
- Step 520 In response to the user's confirmation operation on one or more of the at least one first real-scene image, the candidate location corresponding to the confirmed first real-scene image is determined as the target location. In some embodiments, this step 520 may be performed by the first determining module 420.
- the target location is the location the user wants to go to.
- the boarding point determined by the user can be determined as the target location.
- passengers place an order through the passenger terminal, they need to determine the starting point and ending point of the order.
- the starting point of the order is often not the current location of the passenger terminal, but a location determined by the user to facilitate boarding. For example, a user places an order in an office building, and determines the pick-up point as the intersection closest to the office building.
- the confirmation operation may be an operation initiated by the user when the user believes that the displayed first real-world image matches or partially matches the desired location (ie, the target location). Understandably, user operations generate system instructions.
- the confirmation operation may be an operation that the user can perform through the user terminal, including but not limited to: click confirmation, voice confirmation, or trigger confirmation. Among them, click confirmation can be achieved by the user clicking the confirmation button corresponding to the first real-scene image displayed on the interface (if multiple first real-scene images are displayed, a confirmation button is set for each accordingly), or by directly clicking the first real-scene image, etc.
- the trigger confirmation operation may be related to the application scenario; taking shared mobility as an example, the trigger confirmation operation may include the passenger initiating a ride order or entering the destination of the ride.
- the confirmation operation may also include other types, which are not limited in this embodiment. For example, the user completes a specified action (eg, nodding or shaking his head, etc.) and so on.
- the first determining module 420 may re-acquire a new candidate location or re-acquire a new first real image based on the replacement trigger condition.
- Replacement trigger conditions include: user-initiated operations to replace candidate positions, operations to replace the first real-world image, or other trigger conditions (for example, the user did not perform a confirmation operation within a preset time period, etc.).
- reacquiring a new candidate location or a new first real-scene image is similar to obtaining the candidate location and the first real-scene image described in step 510. For example, when the passenger is not satisfied with the currently selected candidate location, the candidate location can be replaced by dragging the candidate-location indicator on the navigation interface or by selecting another candidate location from the candidate-location list.
- the terminal navigation interface of the passenger includes the current position 61 of the passenger terminal, the candidate position 62 currently determined by the user, and the real scene thumbnail 63 obtained according to the candidate position 62.
- the coordinates of the real scenic spot closest to the candidate location 62 are determined, the panoramic image corresponding to those coordinates is obtained, and the panoramic image is processed to obtain the real-scene thumbnail 63, which is rendered and displayed on the passenger's terminal navigation interface.
- Passengers can decide, according to the real-scene thumbnail 63, whether to determine the candidate location 62 as the target location. If the passenger is not satisfied with the candidate location 62, they can send a location-change instruction by dragging the candidate-location indicator on the navigation interface or by selecting another candidate location from the candidate-location list.
- the passenger drags the candidate location indicator to the candidate location 64, obtains the coordinates of the candidate location 64 and the coordinates of the real scenic spot closest to the candidate location 64, and obtains the panoramic image corresponding to the coordinates of the real scenic spot.
- the panoramic image is processed to obtain the real scene thumbnail 65, and the real scene thumbnail 65 is rendered and displayed on the passenger's terminal navigation interface. If a determination instruction is received at this time, the current candidate position 64 is determined as the target position.
- the passenger can determine the target location according to the real scene map (or real scene thumbnail), thereby making the target location easy to reach.
- because only the real-scene thumbnail is displayed to the user, it remains convenient for the user to view other information on the map.
- after the passenger selects a candidate boarding point, the real-scene thumbnail corresponding to the candidate boarding point is determined and displayed on the navigation interface of the passenger terminal, so that the passenger can decide whether to determine the candidate boarding point as the target boarding point.
- Step 530 Determine a target actual scenic spot based at least on the target location. In some embodiments, this step 530 may be performed by the second determining module 430.
- the second determining module 430 may determine a target real spot whose relationship with the target position satisfies a preset condition based on the target position.
- the second determining module 430 may read the actual scenic spot from the storage device, or call a map service to obtain the actual scenic spot, and determine whether the relationship between the read or acquired actual scenic spot and the target location meets a preset condition.
- the second determining module 430 obtains at least one second candidate real scenic spot based on the target location; determines, based on the target location and the second candidate real scenic spot, whether a preset condition is satisfied; and, in response to the preset condition being satisfied, determines the target real scenic spot from the second candidate real scenic spot for which the preset condition is satisfied.
- the second candidate real scenic spot refers to a real scenic spot related to the target location.
- the second candidate real point of view may be a real point of view around the target location.
- the second candidate real scenic spot may be a real scenic spot within a determined area with a target location as the center and a set distance (for example, 5m, 10m, etc.) as a radius.
- the second candidate real scenic spot may be a real scenic spot around the target location, and the corresponding real-world image is praised by the user.
- the second candidate real scenic spot may be a real scenic spot ranked in Top n (n can be a positive integer such as 3, 5, 10, etc.) by a distance from the target location, where the ranking is in the order of distance from shortest to farthest.
- the second candidate real scenic spot may be a real scenic spot around the target location that has a certain degree of conformity with the user portrait.
- the user portrait is a description of the user, including gender, age, occupation, or preferences.
- for example, among the real scenic spots around the target location, one that is closer to a shopping mall may be taken as the second candidate real scenic spot.
- the second determining module 430 may read the second candidate real scenic spot from the storage device, or call a map service to obtain the second candidate real scenic spot.
- the preset condition may be that the distance from the target location is less than a preset threshold (e.g., 2 m, 3 m, etc.). In some embodiments, the preset condition may be that the distance from the target location is the smallest among all the collected real scenic spots or among the at least one second candidate real scenic spot. For example, all real scenic spots or the at least one second candidate real scenic spot are sorted by their distance to the target location from near to far, and the top-ranked (nearest) one is used as the target real scenic spot.
- in this way, the environment shown in the second real-scene image displayed in step 540 is closer to that of the target location, so that the user can confirm whether the target location has been reached. Moreover, if the user needs to go to the target real scenic spot to observe whether the target location has been reached, the above-mentioned distance-related preset conditions can shorten the user's confirmation time as much as possible.
- when determining whether the preset condition is satisfied, in addition to the target location and the second candidate real scenic spot (or real scenic spot), user-related information can also be considered.
- the user-related information includes but is not limited to: the user's current location, the user's movement direction, and/or the user's portrait, etc. For example, it may be determined whether the angle between the user's movement direction and the direction formed by taking the target location as the starting point and the real scenic spot (or the second candidate real scenic spot) as the end point is less than a preset angle threshold (for example, 20°, 50°, 75°, 90°, etc.); if it is less than the threshold, the preset condition is satisfied, otherwise it is not satisfied.
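- A sketch of the angle check just described, under the assumption that both directions are expressed as compass headings in degrees; the `bearing` callable (for example, the display_heading sketch shown earlier) and the 90° default threshold are placeholders. Per the text, the same kind of check can be applied with the predetermined vehicle's movement direction and with the start/end points of the direction swapped.

```python
def angle_between(heading_a_deg, heading_b_deg):
    """Smallest angle (0-180 degrees) between two compass headings."""
    diff = abs(heading_a_deg - heading_b_deg) % 360.0
    return 360.0 - diff if diff > 180.0 else diff

def satisfies_direction_condition(target, spot, movement_heading_deg,
                                  bearing, threshold_deg=90.0):
    """Hypothetical check: the direction from the target location to the candidate
    real scenic spot should not deviate from the movement direction by more than
    the threshold. `bearing(lat1, lng1, lat2, lng2)` returns the compass bearing
    from the first point to the second; `target` and `spot` are (lat, lng) tuples."""
    spot_heading = bearing(target[0], target[1], spot[0], spot[1])
    return angle_between(spot_heading, movement_heading_deg) <= threshold_deg
```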
- in this way, after the target real scenic spot is determined, the real-scene image of the target real scenic spot (or the image obtained after processing it) is used as the real-scene image corresponding to the target location and displayed to the user, so that while moving the user does not need to turn back to observe whether the current actual environment is close to the displayed real-scene image; that is, it helps the user find the target location or determine whether the current route has deviated, while ensuring the user's safety.
- the target real scenic spot may be updated in real time or at predetermined intervals (for example, every 1 min) based on the user's updated movement direction.
- when determining the target real scenic spot, the information of the predetermined vehicle may also be considered.
- the information of the predetermined vehicle includes, but is not limited to: the current location of the predetermined vehicle, the driving direction of the predetermined vehicle, and so on.
- the target real scenic spot may be determined from the at least one second candidate real scenic spot based on the predetermined vehicle's movement direction, the second candidate real scenic spot, and the target location. Specifically, whether the preset condition is satisfied is determined based on the angle between the predetermined vehicle's movement direction and the direction determined by the second candidate real scenic spot and the target location, and the target real scenic spot is determined from the second candidate real scenic spot for which the preset condition is satisfied. For example, it is judged whether the angle between the predetermined vehicle's movement direction and the direction determined with the second candidate real scenic spot as the starting point and the target location as the end point is less than or equal to a threshold angle; if so, the preset condition is satisfied.
- the angle threshold may be less than or equal to 90°.
- the second candidate real scenic spot with the smallest distance from the target location may be used as the target real scenic spot.
- the predetermined vehicle may be a vehicle driven by a driver in a shared trip, and the real-scene images displayed to the driver and to the user may be the same.
- for the driver, the safe viewing angle is roughly the 0°-180° range in front of the vehicle, and the displayed real-scene image should be as close as possible to the real environment of the target location; therefore, when determining the target real scenic spot, the movement direction of the vehicle can be considered at the same time.
- after the target real scenic spot is determined in this way, the second real-scene image determined based on the target real scenic spot is displayed to both the driver of the predetermined vehicle and the user.
- the driver of the predetermined vehicle can then find the target location based on the displayed real-scene image while observing the actual environment ahead. That is, it is not only convenient for the user to determine, based on the displayed real-scene image, whether the target location has been reached, but the safe driving of the driver of the predetermined vehicle is also ensured.
- the relevant information of the vehicle traveling road that meets the conditions around the target location may also be considered, for example, the prescribed vehicle traveling direction, road location, etc.
- a vehicle-driving road that satisfies the conditions refers to a road for which the cost of the user walking to that road is the smallest or less than a predetermined value.
- the walking cost can be measured by the walking distance, whether it is necessary to cross the road (i.e., cross a zebra crossing), and so on.
- based on such information, whether the preset condition is satisfied is determined, and the target real scenic spot is determined from the second candidate real scenic spot for which the preset condition is satisfied.
- the judgment method or the preset condition is similar to the above based on the predetermined vehicle movement direction, and will not be repeated here.
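- As a rough illustration of the walking-cost idea above, the sketch below scores candidate vehicle-driving roads by walking distance plus a fixed penalty per road crossing; the penalty value and the tuple layout are assumptions introduced for illustration only.

```python
def walking_cost(walk_distance_m, crossings, crossing_penalty_m=80.0):
    """Hypothetical walking cost used to pick a qualifying vehicle-driving road:
    the walking distance plus a fixed penalty for each road (zebra) crossing
    the user would have to make."""
    return walk_distance_m + crossings * crossing_penalty_m

def best_road(roads):
    """roads: iterable of (road_id, walk_distance_m, crossings);
    returns the road with the smallest walking cost."""
    return min(roads, key=lambda r: walking_cost(r[1], r[2]))
```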
- the location information (e.g., coordinate information) obtained by positioning technology (e.g., GPS or GNSS positioning systems) may not be accurately matched to the road network (i.e., map matching); the reported location may then not lie on the road network and may instead appear, for example, in a house or pond beside the road.
- in some embodiments, a predetermined map service is called through an API interface to obtain the real scenic spot coordinates returned by the predetermined map service according to the target location. However, the coordinates returned by the map service may sometimes be inaccurate. In some cases, inaccurate coordinates of the real scenic spot may make the subsequently determined real-scene image inaccurate, which may further lead to delivering wrong information to the user and affect the user experience.
- the real scenic spots, the first candidate real scenic spots, the second candidate real scenic spots, and the target real scenic spot are real scenic spots obtained after correction.
- after a real scenic spot is obtained or read from a storage device, a map service, or the like, it is corrected, and the subsequent steps are performed on the corrected result, for example, determining the first candidate real scenic spots, the second candidate real scenic spots, and/or the target real scenic spot.
- the second determining module 430 may obtain, based on the target location, at least one second candidate real scenic spot to be corrected, and use a correction algorithm to correct the at least one second candidate real scenic spot to be corrected so as to obtain the at least one second candidate real scenic spot.
- a second candidate real scenic spot to be corrected refers to a real scenic spot determined based on the target location from among the real scenic spots obtained through positioning technology. For example, a predetermined map service is called through the API interface, and the real scenic spot coordinates returned by the predetermined map service for the target location are obtained.
- the correction algorithm may refer to an algorithm that associates the real scenic spot to be corrected (for example, the second candidate real scenic spot to be corrected) to the road network.
- the correction algorithm can be used to associate the GNSS or GPS coordinates returned by the map service to the road network of the map, that is, to convert the coordinate sequence sampled by the GNSS or GPS to the road network coordinate sequence to correct the coordinates returned by the map service.
- the correction algorithm may be an algorithm that associates the real scenic spot to be corrected with the nearest route.
- for example, the real scenic spot to be corrected is projected directly onto the nearest route, and the projection point is used as the corrected real scenic spot (for example, as the second candidate real scenic spot).
- the route is a component of the road network in the map, and the route can be regarded as the road in the map.
- the correction algorithm may be a Hidden Markov Model (HMM), the ST-Matching algorithm, the IVVM (An Interactive-Voting Based Map Matching Algorithm) algorithm, or the like.
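The simplest of these corrections, projecting the to-be-corrected spot onto the nearest road segment, could look like the sketch below (planar coordinates assumed); an HMM, ST-Matching, or IVVM matcher would replace this single-point projection with a sequence-aware one.

```python
def project_onto_segment(p, a, b):
    """Project point p onto segment a-b; return (projected_point, squared_distance)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        qx, qy = ax, ay
    else:
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
        qx, qy = ax + t * dx, ay + t * dy
    return (qx, qy), (px - qx) ** 2 + (py - qy) ** 2

def correct_spot(spot, road_segments):
    """Snap a real-scene spot to the closest point on the road network."""
    best_point, best_d2 = None, float("inf")
    for a, b in road_segments:
        q, d2 = project_onto_segment(spot, a, b)
        if d2 < best_d2:
            best_point, best_d2 = q, d2
    return best_point

segments = [((0, 0), (10, 0)), ((10, 0), (10, 10))]
print(correct_spot((3, 2), segments))  # (3.0, 0.0): snapped onto the first road
```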
- Step 540 Determine a second real-scene image corresponding to the target location based at least on the target real scenic spot, and display the second real-scene image on the navigation interface. In some embodiments, this step 540 may be performed by the second display module 440.
- the second display module 440 may obtain the real-scene image of the target real scenic spot, and determine the second real-scene image corresponding to the target location based on the real-scene image of the target real scenic spot.
- a predetermined map service is called according to an API interface, and a real-world image (or panoramic image) returned by the predetermined map service according to the coordinates of the target real scenic spot is obtained.
- the returned real scene image (or panoramic image) may be an image or a panoramic image collected in various directions from the target real scenic spot as the collection point.
- the real scene image corresponding to the target real scenic spot is directly read from the storage device.
- the second display module 440 may directly use the real-scene image of the target real scenic spot as the real-scene image corresponding to the target location. Alternatively, after acquiring the real-scene image of the target real scenic spot, it may process that image and use the processed image as the second real-scene image corresponding to the target location.
- the processing includes, but is not limited to, image processing operations such as enlarging, shrinking, cropping, adjusting resolution, adjusting saturation, or adjusting brightness.
- the second display module 440 may use, as the second real-scene image, the part of the real-scene image of the target real scenic spot (or of its processed version) that falls within a specific viewing angle.
- the specific viewing angle may be determined based on the predetermined vehicle movement direction, the target location, and the target real scenic spot. For example, the angle between the first vector and the second vector is taken as the specific viewing angle, where the first vector is determined by the candidate real scenic spot and the target location, and the second vector is determined by the predetermined vehicle movement direction. In some embodiments, the specific viewing angle may be determined based only on the target location and the target real scenic spot.
- the second real-scene image displayed on the navigation interface may be all or part of the scenery at the target real scenic spot; that is, the displayed second real-scene image may be part or all of the real-scene image of the target real scenic spot.
- before it is displayed, the second real-scene image may be preprocessed, for example by viewing-angle adjustment, zooming in or out, adjusting resolution, and so on.
- for details of the viewing-angle adjustment, please refer to Fig. 9 and Fig. 11 of this specification and the related descriptions.
- while the second real-scene image is being shown to the user, an operation instruction from the user may also be received, and the displayed second real-scene image may be adjusted accordingly, for example an instruction to replace the real-scene image, to adjust the displayed viewing angle, to adjust the viewing distance, or to zoom in or out, so that the corresponding real-scene image is provided according to the user's needs. Showing the second real-scene image to the user through the navigation interface helps guide the user to the target location, or helps the user confirm whether the target location has been reached. A sketch of cutting a view window out of the panorama at a given heading follows.
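To make the "part of the real-scene image within a specific viewing angle" idea concrete, here is a hedged sketch that cuts a horizontal window out of an equirectangular 360° panorama; the column-to-heading layout and the field-of-view default are assumptions, not details taken from the embodiment.

```python
import numpy as np

def crop_panorama(pano, heading_deg, fov_deg=90):
    """Cut a horizontal window of `fov_deg` degrees, centred on `heading_deg`,
    out of an equirectangular panorama whose columns are assumed to span
    0..360 degrees starting from the initial display direction at column 0."""
    h, w = pano.shape[:2]
    cols = int(round(w * fov_deg / 360.0))
    centre = int(round(w * (heading_deg % 360.0) / 360.0))
    idx = [(centre - cols // 2 + i) % w for i in range(cols)]  # wrap across the 0° seam
    return pano[:, idx]

pano = np.arange(4 * 360).reshape(4, 360)      # toy 4x360 "panorama", one column per degree
view = crop_panorama(pano, heading_deg=355, fov_deg=20)
print(view.shape)                               # (4, 20), wrapping across the seam
```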
- Fig. 7 is a flowchart of displaying each of at least one first real-view image on a navigation interface according to some embodiments of the present specification. As shown in FIG. 7, the process 700 includes the following steps. In some embodiments, the process 700 may be executed by the first display module 410 of the processing device (for example, the processing device 112).
- Step 710 Determine a display direction and/or angle of the first real-world image based on the first candidate real-world spot and candidate location corresponding to the first real-world image.
- the real image display contains two angles: horizontal angle of view and pitch angle.
- the horizontal viewing angle may be the angle at which the real image is displayed in the established two-dimensional plane based on the horizontal ground.
- the true north direction is 0° and the true south direction is 180°.
- the horizontal viewing angle can also be referred to as the horizontal display direction.
- the pitch angle refers to the angle with the horizontal ground.
- the real-scene image of a real scenic spot may be a 360° panoramic image. If the 360° panoramic image of the first candidate real scenic spot is determined as the first real-scene image corresponding to the candidate location, the first real-scene image can be displayed with a specific display direction and/or angle, which makes it convenient for the user to observe the surroundings against the displayed image and decide whether to take the candidate location as the target location.
- the horizontal display direction of the first real-scene image may be the direction formed by taking the first candidate real scenic spot corresponding to the first real-scene image as the starting point and the candidate location as the connecting point.
- the horizontal viewing angle of the first real-scene image may be the angle between that direction (from the first candidate real scenic spot to the candidate location) and the initial direction.
- the initial direction refers to the default or set initial direction (for example, true north direction) for the map service to display the real image.
- the initial direction can be set according to specific application scenarios, which is not limited in this embodiment.
- the pitch angle may be a default value, such as 0°, etc., or determined based on user selection.
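A small sketch of computing the horizontal display direction / viewing angle described in step 710, assuming planar coordinates with +y pointing true north and angles measured clockwise from the initial (north) direction; the names and conventions are illustrative.

```python
import math

def horizontal_view_angle(spot, candidate_pos, initial_bearing_deg=0.0):
    """Bearing of the direction spot -> candidate location, measured clockwise
    from the initial display direction (true north by default)."""
    dx = candidate_pos[0] - spot[0]          # east component
    dy = candidate_pos[1] - spot[1]          # north component
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0   # 0° = north, clockwise
    return (bearing - initial_bearing_deg) % 360.0

# B1 at the origin, candidate location C1 to the north-east: rotate ~45° clockwise from north.
print(round(horizontal_view_angle((0, 0), (10, 10)), 1))   # 45.0
```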
- Step 720 based on the display direction and/or angle, display the first real scene image on the navigation interface related to the user.
- the first display module 410 may display the first real scene image based on the determined horizontal display direction. For example, the first real image is displayed by rotating the first real image from the initial direction to the determined horizontal display direction.
- the first display module 410 may display the first real scene image based on the determined horizontal angle. For example, the real scene image is rotated from the initial direction by the determined horizontal viewing angle to display the first real scene image.
- before the first real-scene image is displayed, it may be preprocessed.
- the preprocessing may include one or a combination of: shrinking, cropping, adjusting resolution, adjusting brightness, and adjusting saturation.
- thumbnailing may refer to reducing the first real-scene image, and the first real-scene image after the reduction process can be referred to as a real-scene thumbnail.
- after the first real-scene image is displayed on the navigation interface, the user can operate on it so that a corresponding real-scene image is provided according to the user's needs, which helps the user determine the target location. For example, the image can be rotated to another direction or angle, or zoomed in or out; as another example, the user can obtain the full real-scene image by clicking on the real-scene thumbnail.
- as shown in Fig. 8, C1 is a candidate location, B1 and B2 are first candidate real scenic spots, and a is the initial direction for displaying the real-scene image. If the real-scene image corresponding to B1 is displayed as the first real-scene image, the direction b1 determined by B1 and C1 is used as the horizontal display direction, or the angle θ1 between b1 and a is used as the horizontal viewing angle; when displayed on the navigation interface, the image is rotated from a to b1, or rotated clockwise by θ1 from a.
- if the real-scene image corresponding to B2 is displayed as the first real-scene image, the direction b2 determined by B2 and C1 is used as the horizontal display direction, or the angle θ2 between b2 and a is used as the horizontal viewing angle; when displayed, the image is rotated from a to b2, or rotated clockwise by θ2 from a.
- Fig. 9 is a flowchart of a method for displaying a second real-view image on a navigation interface based on a display direction and/or angle according to some embodiments of the present specification. As shown in FIG. 9, the process 900 includes the following steps. In some embodiments, the process 900 may be executed by the second display module 440 of the processing device (for example, the processing device 112).
- Step 910 Determine the display direction and/or angle of the second real scene image based on the target real sight spot and the target position.
- the real-scene image of a real scenic spot can be a 360° panoramic image. When the 360° panoramic image of the target real scenic spot is determined as the second real-scene image corresponding to the target location, the second real-scene image is displayed with a display direction and/or angle, which makes it convenient for the user to observe the surroundings against the displayed image, guiding the user to the target location or helping the user confirm whether the target location has been reached.
- the horizontal display direction for displaying the second real scene image may be a first direction determined based on the target real sight spot and the target location.
- for example, the first direction is the direction formed by taking the target real scenic spot as the starting point and the target location as the connecting point; as another example, the first direction is the direction formed by taking the target location as the starting point and the target real scenic spot as the connecting point.
- the display direction of the second real-scene image may also be a second direction obtained by rotating the first direction by 180°.
- the horizontal angle of the second real scene image may be a first included angle determined based on the target real sight spot and the target position.
- for example, the first included angle is the angle between the direction formed by taking the target real scenic spot as the starting point and the target location as the connecting point, and the initial direction in which the real-scene image is displayed.
- as another example, the first included angle is the angle between the direction formed by taking the target location as the starting point and the target real scenic spot as the connecting point, and the initial direction in which the real-scene image is displayed.
- the horizontal viewing angle of the second real image may be a second included angle determined by rotating the first included angle by 180°.
- the pitch angle may be a default value, for example, 0°, etc., or determined based on user selection.
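The first/second direction options of step 910 can be sketched as below under the same planar, north-up convention used earlier; the helper name is hypothetical.

```python
import math

def second_image_directions(target_spot, target_pos):
    """First direction: target real-scene spot -> target location, clockwise from north;
    second direction: the first rotated by 180 degrees."""
    dx = target_pos[0] - target_spot[0]
    dy = target_pos[1] - target_spot[1]
    first = math.degrees(math.atan2(dx, dy)) % 360.0
    second = (first + 180.0) % 360.0
    return first, second

print(second_image_directions((0, 0), (0, -5)))   # (180.0, 0.0): the spot is north of the target
```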
- Step 920 based on the display direction and/or angle, display a second real scene image on the navigation interface.
- the second display module 440 may display the second real image based on the determined horizontal display direction. For example, rotating the second real image from the initial direction to the determined horizontal display direction to display the second real image.
- the second display module 440 may display the second real scene image based on the determined horizontal angle. For example, the real scene image is rotated from the initial direction by the determined horizontal viewing angle to display the second real scene image.
- before the second real-scene image is displayed, it may be preprocessed.
- the preprocessing may include one or a combination of zooming in, cropping, adjusting resolution, adjusting brightness, and adjusting saturation. It is understandable that displaying the zoomed-in real image helps the user to compare the displayed image with the current actual environment.
- the second display module 440 may also respond to the display direction and/or angle switching instruction, and display the second real scene image on the navigation interface according to the switched display angle and/or direction.
- the switching instruction may be initiated by various switching-related user operations, including click switching, move switching, rotation switching, manual input switching, or voice switching.
- click switching may be the user clicking any point in the second real-scene image displayed on the navigation interface; by clicking, the navigation interface switches to the direction or angle of the clicked point. A sketch of such a click-to-heading mapping is given below.
- move switching may be the user moving the second real-scene image or the navigation interface itself.
- rotation switching may be the user rotating or dragging the second real-scene image displayed on the navigation interface so as to display the image at the rotated direction or angle.
- manual input switching or voice switching may be manually entering or speaking the desired display direction and/or angle.
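One possible way a click switch could be translated into a new display heading, assuming the displayed window spans a fixed field of view centred on the current heading; the mapping and its parameters are illustrative assumptions.

```python
def heading_after_click(current_heading_deg, click_x, view_width_px, fov_deg=90):
    """Map a horizontal click position inside the displayed window to a new heading:
    clicking the left edge turns half a field of view to the left, the right edge to the right."""
    offset = (click_x / view_width_px - 0.5) * fov_deg
    return (current_heading_deg + offset) % 360.0

print(heading_after_click(30.0, click_x=600, view_width_px=800, fov_deg=90))  # 52.5
```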
- as shown in Fig. 10, A1 is the target real scenic spot, L1 is the target location, and a is the initial direction in which the real-scene image is displayed.
- the horizontal display direction is the direction r1 formed by taking the target real scenic spot A1 as the starting point and the target location L1 as the connecting point, or the horizontal viewing angle can be the angle α between a and r1; when displayed on the navigation interface, the real-scene image is rotated from a to r1, or rotated by α from a.
- the horizontal display direction can also be the direction r2 obtained by rotating r1 by 180°, and the horizontal viewing angle can also be (180°+α), or the angle η obtained after rotating r1 by 180°; when displayed, the real-scene image is rotated from a to r2, or rotated clockwise by (180°+α) or counterclockwise by η from a.
- Fig. 11 is another flowchart of a method for displaying a second real-view image on a navigation interface based on a reduction or enlargement parameter according to some embodiments of the present specification.
- the process 1100 includes the following steps.
- the process 1100 may be executed by the second display module 440 of the processing device (for example, the processing device 112).
- Step 1110 based on the user's current location and the target location, determine the reduction or enlargement parameter of the second real-scene image.
- the user's current location can be obtained through positioning technology.
- the current location of the user may be obtained through a user terminal or a positioning device.
- the user terminal may be a passenger mobile terminal or other vehicle-mounted equipment.
- the current location of the user may also be obtained by calling the corresponding map service according to the API interface, which is not limited in this embodiment.
- the second display module 440 may determine the reduction or enlargement parameter of the second real-scene image based on the distance between the user's current location and the target location.
- for example, the reduction parameter is proportional to the distance; as another example, the enlargement parameter is inversely proportional to the distance.
- the reduction or enlargement parameter is a reduction parameter or enlargement parameter relative to the size of the original image, and the parameter may be a multiple, a ratio, and the like.
- the reduction or enlargement parameter at a specific distance may be determined by a scaling algorithm.
- the second real-scene image displayed on the page is scaled according to the user's needs at different stages of navigation. When the user is far from the target location, the need to view routes, road conditions, and other information on the navigation page is usually greater, so the displayed second real-scene image can be reduced, or shown at a smaller magnification, provided it remains clearly visible. The closer the user is to the target location, the greater the need to use the real-scene image, and the displayed image can be enlarged. One possible distance-to-scale mapping is sketched below.
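A monotone distance-to-scale mapping consistent with this description is sketched here; the near/far breakpoints and the scale bounds are placeholder values, not parameters from the embodiment.

```python
def display_scale(distance_m, near_m=50.0, far_m=500.0, min_scale=0.5, max_scale=2.0):
    """Scale factor for the second real-scene image: enlarge when the user is close,
    shrink (but keep legible) when the user is far; linear in between."""
    if distance_m <= near_m:
        return max_scale
    if distance_m >= far_m:
        return min_scale
    t = (distance_m - near_m) / (far_m - near_m)
    return max_scale + t * (min_scale - max_scale)

for d in (20, 200, 800):
    print(d, round(display_scale(d), 2))   # 2.0, 1.5, 0.5
```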
- Step 1120 based on the reduction or enlargement parameter, display the second real-scene image on the navigation interface.
- after the reduction or enlargement parameter is determined, the second real-scene image may be displayed on the navigation interface accordingly.
- for example, the second display module 440 may scale the second real-scene image in real time during display and then show the scaled image on the navigation interface, or show the scaled image at a certain horizontal viewing angle and pitch angle.
- as another example, the second display module 440 may store, in the storage device, second real-scene images corresponding to the reduction or enlargement parameters at different distances, and during display directly read and show the second real-scene image corresponding to the current distance.
- the size of the second real-scene image can also be adjusted according to the user's operation instructions, so that the size of the displayed image matches the user's needs and the user experience is ensured.
- in one example, the target location is 121 and the user's current location is 122; when the user's current location is far from the target location, the displayed second real-scene image is 123.
- when the user's current location is 124 and is closer to the target location, the displayed second real-scene image is 125. It can be seen that image 125 is displayed larger than image 123 and presents more information.
- the display module may also determine a corresponding image preprocessing method according to the reduction or enlargement parameter to ensure that the image displayed on the navigation interface is clear. For example, if the magnification is greater than a threshold (for example, 1x), the image is sharpened so that its outlines remain clear.
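A hedged sketch of such conditional sharpening, using a simple 3x3 kernel via SciPy on a grayscale array; the kernel, the handling of the 1x threshold, and the use of SciPy are illustrative choices rather than the embodiment's method.

```python
import numpy as np
from scipy.ndimage import convolve

def maybe_sharpen(image, scale, threshold=1.0):
    """Apply a simple 3x3 sharpening kernel when the image is enlarged past the threshold."""
    if scale <= threshold:
        return image
    kernel = np.array([[0, -1, 0],
                       [-1, 5, -1],
                       [0, -1, 0]], dtype=float)
    return np.clip(convolve(image.astype(float), kernel), 0, 255).astype(np.uint8)

img = (np.random.rand(8, 8) * 255).astype(np.uint8)   # toy grayscale image
print(maybe_sharpen(img, scale=1.5).shape)             # (8, 8)
```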
- Fig. 13 is another flowchart of a method for providing a user with a real-world image according to some embodiments of the present specification. As shown in FIG. 13, the process 1300 includes the following steps. In some embodiments, the process 1300 may be executed by a processing device (for example, the processing device 112).
- Step 1310 Determine the target location.
- this embodiment determines the boarding point determined by the user as the target location.
- this step may be performed by the first determining module 420.
- Step 1320 In response to an operation on the real-scene thumbnail on the navigation interface, the predetermined vehicle movement direction and the coordinates of the second candidate real scenic spots around the target location are acquired. In some embodiments, this step may be performed by the second determining module 430.
- Step 1330 Determine whether the predetermined condition is satisfied based on the second candidate real scenic spot coordinates and the predetermined vehicle movement direction, and if it is satisfied, determine those second candidate real scenic spot coordinates as the target real scenic spot coordinates corresponding to the target location. In some embodiments, this step may be performed by the second determining module 430.
- the predetermined vehicle may be the vehicle of the online car-hailing driver who has accepted the order.
- the predetermined vehicle movement direction can be obtained from the online car-hailing platform or server, by calling a map service through the API interface, or from a terminal of the predetermined vehicle (for example, the driver terminal or other on-board equipment); how the movement direction is obtained is not limited in this embodiment.
- in response to the operation on the real-scene thumbnail, the predetermined vehicle movement direction and the second candidate real scenic spot coordinates around the target location can be acquired at the same time; alternatively, the movement direction can be acquired first and then the coordinates, or the coordinates first and then the movement direction, which is not limited in this embodiment.
- the coordinates of the second candidate real scenic spots around the target location may be acquired one by one until the predetermined condition is satisfied. It is easy to understand that the closer a real scenic spot's coordinates are to the target location, the easier it is to identify the target location from the real-scene image corresponding to those coordinates.
- therefore, the second candidate real scenic spot coordinates can be obtained sequentially from near to far, until second candidate real scenic spot coordinates that satisfy the predetermined condition are found.
- the second candidate real scenic spot coordinates are corrected coordinates; see step 530 for details of the correction.
- the predetermined condition is that the angle between the first vector and the second vector is less than or equal to the angle threshold.
- the first vector is determined by the coordinates of the second candidate real sight spot and the target position
- the second vector is determined by the predetermined vehicle movement direction.
- the vector starting point of the first vector is the second candidate real spot coordinate
- the vector starting point of the second vector is the current position of the predetermined vehicle, for example, the angle threshold is 90°.
- this embodiment does not limit the vector starting point of the first vector and the second vector, and the angle threshold may be determined according to the vector starting point or the vector end point of the first vector and the second vector.
- the direction of movement of the terminal may be the direction of movement of the terminal on each section of the navigation route.
- for example, the predetermined vehicle movement direction e and the coordinates of the second candidate real scenic spot A1 closest to the target location L1 are acquired.
- there are second candidate real scenic spots A1-A3 around the target location L1 (for example, within the first threshold range centered on the target location L1); in order of increasing distance from the target location L1 they are the second candidate real scenic spot A1, the second candidate real scenic spot A2, and the second candidate real scenic spot A3.
- the first vector a1 is determined from the coordinates of the second candidate real scenic spot A1 and the target location L1, and the second vector b is determined from the movement direction e; the included angle between the first vector a1 and the second vector b is calculated and found to be greater than 90°, meaning that when the predetermined vehicle reaches the second candidate real scenic spot A1, the driver would need to look diagonally backwards to see the target location L1 and the surrounding street view, which would affect safety.
- the second candidate real scenic spot A1 therefore does not satisfy the predetermined condition, so the coordinates of the next closest second candidate real scenic spot A2 are obtained; the first vector a2 is determined from the coordinates of A2 and the target location L1, the included angle between the first vector a2 and the second vector b is calculated and found to be less than 90°, so when the vehicle reaches the second candidate real scenic spot A2 the driver can see the target location L1 and the surrounding street view by looking ahead, and the coordinates of the second candidate real scenic spot A2 can be determined as the target real scenic spot coordinates.
- in this way, the street view image displayed on the navigation interface of the driver terminal and on the navigation interface of the passenger terminal can be determined from the same panoramic image, which ensures that the driver and the passenger can accurately reach the target location without deviations caused by differences between the street views shown on their navigation maps.
- Step 1340 Determine the panoramic image corresponding to the target location according to the coordinates of the target real scenic spot. In some embodiments, this step may be performed by the second display module 440.
- the real-scene thumbnail displayed on the navigation interface simply shows part of the information around the target location so that the passenger can confirm the target location.
- after the target location has been confirmed with the help of the surrounding street view, the real-scene image on the navigation interface can be operated so that the navigation interface displays a panoramic image, which can accurately guide the passenger to the target location.
- Step 1350 display the panoramic image on the navigation interface.
- this step may be performed by the second display module 440.
- the navigation interface may be a navigation interface of a passenger mobile terminal.
- in response to a display angle switching instruction, the panoramic image is displayed on the navigation interface according to the updated display angle.
- the passenger can rotate or move the panoramic image on the navigation interface to send an angle switching instruction and display the panoramic image at various display angles. In this way, the passenger can not only observe the street view around the target location from the driver's perspective, but also observe the street view in all directions around the target location by rotating and moving the panorama, so that the target location can be identified accurately even in a complex or unfamiliar environment.
- as shown in Fig. 15a, the navigation interface includes the user's current location 151, a target location 152, a target real scenic spot 153, a panoramic image 154 at the initial display angle, a predetermined vehicle position 155, and a predetermined vehicle navigation route 156.
- the predetermined vehicle movement direction and the second candidate real scenic spot coordinates corresponding to the target location are acquired, the coordinates of the target real scenic spot 153 are determined from the at least one second candidate real scenic spot coordinate according to the predetermined vehicle movement direction and the target location 152, and the panoramic image corresponding to the target real scenic spot 153 is obtained; the display angle is determined from the preset initial direction and the second vector corresponding to the predetermined vehicle movement direction, and the image of the panorama at that display angle is displayed on the navigation interface.
- Passengers can rotate and move the panoramic image on the navigation interface to send an angle switching instruction to display the panoramic image under various display angles. As shown in FIG. 15b, the panorama 154 in the navigation interface is rotated or moved to display the panorama 157 in other display angles.
- the navigation interface shown in the embodiments of this specification is only illustrative, and the information on the navigation interface can be set according to the specific application scenario, which is not limited in this embodiment.
- Fig. 16 is another flowchart of a method for providing a user with a real-world image according to some embodiments of the present specification. As shown in FIG. 16, the flowchart 1600 includes the following steps:
- Step S1 Determine candidate positions.
- Step S2 Obtain a real scene thumbnail corresponding to the candidate position.
- Step S3 Display the actual scene thumbnail on the navigation interface.
- Step S4 Judge whether the current candidate location is determined as the target location; if not, execute step S1, that is, re-determine the candidate location; if yes, execute step S5.
- Step S5 In response to an operation on the real-scene thumbnail corresponding to the target location, obtain the predetermined vehicle movement direction, and call a predetermined map service through the API interface to obtain the at least one returned coordinate.
- optionally, the predetermined map service is invoked to obtain the coordinates of the real scenic spots within a circle centered on the target location with the first threshold as the radius.
- the first threshold may be 50m. It should be understood that the first threshold may be determined according to actual application scenarios, which is not limited in this embodiment.
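If the real scenic spot coordinates are latitude/longitude pairs, the "within a radius of the first threshold" query of step S5 could be sketched as below, with the map-service call replaced by an in-memory list and the coordinates chosen arbitrarily for illustration.

```python
import math

def haversine_m(p1, p2):
    """Great-circle distance in metres between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(a))

def spots_within(target, spots, radius_m=50.0):
    """Return real-scene spots within radius_m of the target, nearest first."""
    hits = []
    for s in spots:
        d = haversine_m(target, s)
        if d <= radius_m:
            hits.append((d, s))
    return [s for _, s in sorted(hits)]

target = (31.2304, 121.4737)
spots = [(31.2305, 121.4737), (31.2330, 121.4740)]
print(spots_within(target, spots))   # only the first spot (about 11 m away) is kept
```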
- Step S6 Determine the coordinate closest to the target position among the returned at least one coordinate.
- optionally, the at least one coordinate returned by the predetermined map service is sorted by distance from the target location from near to far to obtain a coordinate sequence, and the coordinate closest to the target location is taken from that sequence.
- Step S7 Correct the coordinate according to the correction model or algorithm to obtain the corresponding second candidate real scenic spot coordinates.
- Step S8 Determine a first vector according to the coordinates of the second candidate real scenic spot and the target location, and a second vector according to the predetermined vehicle movement direction.
- Step S9 Calculate the angle between the first vector and the second vector.
- the size of the included angle is determined by calculating the cosine of the included angle between the first vector and the second vector.
- Step S10 Determine whether the angle between the first vector and the second vector is less than or equal to the angle threshold. If it is, perform step S12; if the included angle is greater than the angle threshold, execute step S11 and then repeat steps S7-S10. Optionally, the angle threshold is 90°.
- Step S11 Acquire the next closest coordinate to the target location. Optionally, obtain the next closest coordinate from the foregoing coordinate sequence.
- this embodiment is described by taking as an example the case where all the real scenic spot coordinates within the first threshold range centered on the target location are obtained from the predetermined map service at once.
- alternatively, the predetermined map service may be called to obtain one real scenic spot coordinate at a time; when that coordinate does not satisfy the predetermined condition, the service is called again to obtain the next closest real scenic spot coordinate to the target location, until real scenic spot coordinates that satisfy the predetermined condition are obtained.
- this embodiment corrects the real scenic spot coordinates returned by the predetermined map service in turn according to the correction model or algorithm, that is, the coordinates of the real scenic spot closest to the target location are corrected first, and only after those coordinates fail to satisfy the condition are the coordinates of the next real scenic spot corrected.
- Step S12 Determine the second candidate real scenic spot coordinates as the target real scenic spot coordinates.
- Step S13 Determine the panoramic image corresponding to the target location according to the coordinates of the target real scenic spot.
- Step S14 Determine the display angle according to the angle between the initial direction in which the real-scene image (or panoramic image) is displayed and the second vector.
- Step S15 Display the panoramic image on the navigation interface according to the display angle.
- Step S16 In response to a display angle switching instruction, display the panoramic image on the navigation interface according to the updated display angle.
- Step S17 Upon receiving a reset instruction, execute step S15 again, that is, display the panoramic image on the navigation interface according to the display angle used before the update.
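Steps S6-S12 can be summarized in a compact loop like the following sketch, which walks the returned coordinates from near to far, corrects each one, and stops at the first that satisfies the angle condition; the correction callback and all names are hypothetical stand-ins for the embodiment's actual components.

```python
import math

def included_angle_deg(v1, v2):
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm)))) if norm else 0.0

def select_target_spot(raw_coords, target, vehicle_dir, correct, angle_threshold=90.0):
    """Steps S6-S12 in miniature: order candidates from near to far, correct each one,
    and return the first whose spot->target direction is within the angle threshold
    of the vehicle movement direction."""
    ordered = sorted(raw_coords, key=lambda c: math.dist(c, target))   # steps S6/S11
    for coord in ordered:
        spot = correct(coord)                                          # step S7: correction
        first_vec = (target[0] - spot[0], target[1] - spot[1])         # step S8: first vector
        if included_angle_deg(first_vec, vehicle_dir) <= angle_threshold:  # steps S9-S10
            return spot                                                # step S12
    return None                                                        # no candidate satisfied the condition

identity = lambda c: c   # placeholder correction callback
print(select_target_spot([(5, 1), (0, 0)], target=(3, 0), vehicle_dir=(1, 0), correct=identity))  # (0, 0)
```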
- this embodiment does not limit the terminal that executes the method.
- the method steps of the foregoing embodiments can be embedded in a passenger mobile terminal, so that the passenger mobile terminal executes them.
- the method steps of the foregoing embodiments can also be stored in a corresponding server, so that they are executed by the processor of the server, and the obtained real-scene thumbnails and/or panoramas are sent to the passenger mobile terminal for display or broadcast.
- the embodiment of this specification also provides a device for providing a user with a real-world image.
- the device includes a processor and a memory.
- the memory is used to store instructions, and the processor is used to execute the instructions to implement the operations corresponding to the method of displaying a real-scene image for the user as described in any of the preceding embodiments.
- the embodiment of this specification also provides a computer-readable storage medium.
- the storage medium stores computer instructions, and when the computer reads the computer instructions in the storage medium, the computer implements the operations corresponding to the method of displaying real scene images for the user as described in any of the preceding items.
- a computer storage medium may contain a propagated data signal containing a computer program code, for example on a baseband or as part of a carrier wave.
- the propagated signal may have multiple manifestations, including electromagnetic forms, optical forms, etc., or suitable combinations.
- the computer storage medium may be any computer-readable medium other than a computer-readable storage medium, and may be connected to an instruction execution system, apparatus, or device to enable communication, propagation, or transmission of the program for use.
- the program code located on the computer storage medium can be transmitted through any suitable medium, including radio, cable, fiber optic cable, RF, or similar medium, or any combination of the above medium.
- the computer program code required for the operation of each part of this specification can be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python, conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages.
- the program code can run entirely on the user's computer, or as an independent software package on the user's computer, or partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing equipment.
- the remote computer can be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or connected to an external computer (for example, via the Internet), or used in a cloud computing environment, or used as a service such as software as a service (SaaS).
- in some places, numbers are used to describe quantities of components and attributes. It should be understood that such numbers used in the description of the embodiments are modified in some examples by the terms "about", "approximately", or "substantially". Unless otherwise stated, "about", "approximately", or "substantially" indicates that a variation of ±20% is allowed in the stated number.
- correspondingly, the numerical parameters used in the description and claims are approximations that can change depending on the desired characteristics of individual embodiments. In some embodiments, a numerical parameter should take into account the specified number of significant digits and use a general rounding method. Although the numerical ranges and parameters used to confirm the breadth of the ranges in some embodiments of this specification are approximations, in specific embodiments such numerical values are set as precisely as practicable.
Landscapes
- Engineering & Computer Science (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Automation & Control Theory (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- User Interface Of Digital Computer (AREA)
- Navigation (AREA)
Abstract
A method for providing a user with a real-scene image, the method including: based on a candidate location, determining at least one first real-scene image corresponding to the candidate location, and displaying the at least one first real-scene image on a navigation interface related to the user (510); in response to the user's confirmation operation on one or more of the at least one first real-scene image, determining the candidate location corresponding to the confirmed first real-scene image as the target location (520); determining a target real-scene spot at least based on the target location (530); and determining a second real-scene image corresponding to the target location at least based on the target real-scene spot, and displaying the second real-scene image on the navigation interface (540). By displaying the real-scene image of the target location on the navigation interface, the embodiments can improve the accuracy with which the user identifies the target location.
Description
交叉引用
本说明书要求2020年06月17日提交的中国申请号202010555449.X的优先权,全部内容通过引用并入本文。
本发明涉及计算机技术领域,特别涉及一种为用户提供实景图的方法及系统。
随着城市道路的不断变化,越来越多的用户需要使用导航系统,通常,用户可以按照导航系统规划好的导航路线到达想要前往的目标位置。但是,很多因素会影响用户寻找目标位置的准确性,例如导航系统的定位不准确或者用户对目标位置不熟悉等。如果能够为用户提供导航目的地或中转地的实景图,则可以优化导航系统,更好的帮助用户到达目标位置。
为此,本说明书实施例提出一种为用户提供实景图的方法及系统,可以准确引导用户到达目标位置。
发明内容
本说明书实施例的一个方面提供一种为用户提供实景图的方法,包括:基于候选位置,确定与所述候选位置对应的至少一个第一实景图,以及在与用户相关的导航界面上显示所述至少一个第一实景图;响应用户对所述至少一个第一实景图中一个或多个的确认操作,将得到确认的所述第一实景图对应的所述候选位置确定为目标位置;至少基于所述目标位置,确定目标实景点;至少基于所述目标实景点,确定所述目标位置对应的第二实景图,以及在所述导航界面上显示所述第二实景图。
本说明书实施例的一个方面提供一种为用户提供实景图的系统,包 括:第一显示模块,用于基于候选位置,确定与所述候选位置对应的至少一个第一实景图,以及在与用户相关的导航界面上显示所述至少一个第一实景图;第一确定模块,用于响应用户对所述至少一个第一实景图中一个或多个的确认操作,将得到确认的所述第一实景图对应的所述候选位置确定为目标位置;第二确定模块,用于至少基于所述目标位置,确定目标实景点;第二显示模块,至少基于所述目标实景点,确定所述目标位置对应的第二实景图,以及在所述导航界面上显示所述第二实景图。
本说明书实施例的一个方面提供一种为用户提供实景图的装置,所述装置包括处理器以及存储器,所述存储器用于存储指令,所述处理器用于执行所述指令,实现如前任一项所述的为用户显示实景图的方法对应的操作。
本说明书实施例的一个方面提供一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,实现如前任一项所述的为用户显示实景图的方法对应的操作。
本说明书将以示例性实施例的方式进一步描述,这些示例性实施例将通过附图进行详细描述。这些实施例并非限制性的,在这些实施例中,相同的编号表示相同的结构,其中:
图1是根据本说明书的一些实施例所示的实景图提供系统的应用场景示意图;
图2是根据本说明书一些实施例所示的一种示例性计算设备的示意图;
图3是根据本说明书一些实施例所示的移动设备的示例性硬件和/或软件的示意图;
图4是根据本说明书的一些实施例所示的为用户提供实景图的系统 的模块图;
图5是根据本说明书一些实施例所示的为用户提供实景图的方法的流程图;
图6a是根据本说明书一些实施例所示的在导航界面显示第一实景图的示例性示意图;图6b是根据本说明书一些实施例所示的在导航界面显示第一实景图的另一示例性示意图;
图7是根据本说明书一些实施例所示的在导航界面上显示至少一个第一实景图中每一个的流程图;
图8是根据本说明书一些实施例所示的第一实景图的显示方向和角度的示例性示意图;
图9是根据本说明书一些实施例所示的基于显示方向和/或角度在导航界面上显示第二实景图的方法的流程图;
图10是根据本说明书一些实施例所示的第二实景图的显示方向和角度的示例性示意图;
图11是根据本说明书一些实施例所示的基于缩略或放大参数在导航界面上显示第二实景图的方法的另一流程图;
图12a是根据本说明书一些实施例所示的基于显示方向和/或角度在导航界面上显示第二实景图的示例性示意图;图12b是根据本说明书一些实施例所示的基于显示方向和/或角度在导航界面上显示第二实景图的另一示例性示意图;
图13是根据本说明书一些实施例所示的为用户提供实景图的方法的另一流程图;
图14是根据本说明书一些实施例所示的确定目标实景点坐标的示意图;
图15a是根据本说明书一些实施例所示的导航界面的示例性示意图; 图15b是根据本说明书一些实施例所示的导航界面的另一示例性示意图;
图16是根据本说明书一些实施例所示的为用户提供实景图的方法的另一流程图。
为了更清楚地说明本说明书实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单的介绍。显而易见地,下面描述中的附图仅仅是本说明书的一些示例或实施例,对于本领域的普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图将本说明书应用于其它类似情景。除非从语言环境中显而易见或另做说明,图中相同标号代表相同结构或操作。
应当理解,本说明书中所使用的“系统”、“装置”、“单元”和/或“模组”是用于区分不同级别的不同组件、元件、部件、部分或装配的一种方法。然而,如果其他词语可实现相同的目的,则可通过其他表达来替换所述词语。
如本说明书和权利要求书中所示,除非上下文明确提示例外情形,“一”、“一个”、“一种”和/或“该”等词并非特指单数,也可包括复数。一般说来,术语“包括”与“包含”仅提示包括已明确标识的步骤和元素,而这些步骤和元素不构成一个排它性的罗列,方法或者设备也可能包含其它的步骤或元素。
本说明书中使用了流程图用来说明根据本说明书的实施例的系统所执行的操作。应当理解的是,前面或后面操作不一定按照顺序来精确地执行。相反,可以按照倒序或同时处理各个步骤。同时,也可以将其他操作添加到这些过程中,或从这些过程移除某一步或数步操作。
目前,用户通常根据导航路线的引导来寻找目标位置,但是若仅根据预先规划的导航路线行驶,在GNSS(Global Navigation Satellite System,全球导航卫星系统)信号弱、用户对目标位置不熟悉、目标位置附近路况 复杂等情况下,可能会导致过点偏航,从而导致用户长时间在目标位置周围行驶却无法准确找到目标位置。例如,在网约车应用领域,乘客需要通过终端导航界面到达上车点等候网约车,若存在目标位置附近路况复杂等情况使得乘客无法准确识别上车点,从而导致无法与网约车碰面,进而导致任务处理效率较低,严重影响用户体验。
图1是根据本说明书的一些实施例所示的实景图提供系统的应用场景示意图。
实景图提供系统100可以应用于地图服务系统、导航系统、运输系统、交通服务系统等。例如,实景图提供系统100可以应用于提供互联网服务的线上服务平台。在一些实施例中,实景图提供系统100可以应用于网约车服务,例如出租车呼叫、快车呼叫、专车呼叫、小巴呼叫、拼车、公交服务、司机雇佣和接送服务等。在一些实施例中,实景图提供系统100还可以应用于代驾服务、快递、外卖等。
实景图提供系统100可以是一个线上服务平台,包括服务器110、网络120、终端130以及数据库140。该服务器110可以包含处理设备112。
服务器110可以用于处理与为用户提供实景图相关的信息和/或数据。服务器110可以是独立的服务器或者服务器组。该服务器组可以是集中式的或者分布式的(如:服务器110可以是分布系统)。在一些实施例中,服务器110可以是本地的或者远程的。例如,服务器110可通过网络120访问存储于终端130、数据库140中的信息和/或资料。在一些实施例中,服务器110可直接与终端130、数据库140连接以访问存储于其中的信息和/或资料。在一些实施例中,服务器110可在云平台上执行。例如,该云平台可包括私有云、公共云、混合云、社区云、分散式云、内部云等中的一种或其任意组合。
在一些实施例中,服务器110可以包括处理设备112。处理设备112 可以处理与为用户提供实景图相关的数据和/或信息以执行一个或多个本说明书中描述的功能。例如,处理设备112可以基于候选位置,确定与候选位置对应的至少一个第一实景图。又例如,处理设备112可以响应用户对至少一个第一实景图中一个或多个的确认操作,将得到确认的第一实景图对应的候选位置确定为目标位置。又例如,处理设备112可以基于目标位置,确定目标实景点。又例如,处理设备112可以基于目标实景点,确定目标位置对应的第二实景图,以及在导航界面上显示第二实景图。在一些实施例中,处理设备112可包括一个或多个子处理设备(例如,单芯处理设备或多核多芯处理设备)。仅仅作为示例,处理设备112可包括中央处理器(CPU)、专用集成电路(ASIC)、专用指令处理器(ASIP)、图形处理器(GPU)、物理处理器(PPU)、数字信号处理器(DSP)、现场可编程门阵列(FPGA)、可编辑逻辑电路(PLD)、控制器、微控制器单元、精简指令集电脑(RISC)、微处理器等或其任意组合。
网络120可促进数据和/或信息的交换,数据或信息可以包括数据库140中包括的实景图(例如第一实景图和第二实景图)、服务器110中确定的目标位置、目标实景点等。在一些实施例中,实景图提供系统100中的一个或多个组件(例如,服务器110、终端130、数据库140)可通过网络120发送数据和/或信息至实景图提供系统100中的其他组件。在一些实施例中,网络120可是任意类型的有线或无线网络。例如,网络120可包括缆线网络、有线网络、光纤网络、电信网络、内部网络、网际网络、区域网络(LAN)、广域网络(WAN)、无线区域网络(WLAN)、都会区域网络(MAN)、公共电话交换网络(PSTN)、蓝牙网络、ZigBee网络、近场通讯(NFC)网络等或其任意组合。在一些实施例中,网络120可包括一个或多个网络进出点。例如,网络120可包括有线或无线网络进出点,如基站和/或网际网络交换点120-1、120-2、…,通过这些进出点,实景图提供 系统100的一个或多个组件可连接到网络120上以交换数据和/或信息。
在一些实施例中,终端130的用户可以是服务提供者。例如,服务提供者可以是网约车司机、外卖送餐员、快递员等等。在一些实施例中,终端130的用户也可以是服务使用者,例如,服务使用者可以包括地图服务使用者、导航服务使用者、运输服务使用者等。示例的,服务使用者可以是网约车的乘客。在一些实施例中,终端130可包括移动装置130-1、平板电脑130-2、笔记本电脑130-3、机动车内建装置(未示出)等中的一种或其任意组合。在一些实施例中,移动装置130-1可包括可穿戴装置、智能行动装置、虚拟实境装置、增强实境装置等或其任意组合。在一些实施例中,可穿戴装置可包括智能手环、智能鞋袜、智能眼镜、智能头盔、智能手表、智能衣物、智能背包、智能配饰等或其任意组合。在一些实施例中,智能行动装置可包括智能电话、个人数字助理(PDA)、游戏装置、导航装置、POS装置等或其任意组合。在一些实施例中,虚拟实境装置和/或增强实境装置可包括虚拟实境头盔、虚拟实境眼镜、虚拟实境眼罩、增强实境头盔、增强实境眼镜、增强实境眼罩等或以上任意组合。在一些实施例中,机动车内建装置可以包括车载导航仪、车载定位仪、行车记录仪等或其任意组合。在一些实施例中,终端130可包括具有定位功能的装置,以确定用户和/或终端130的位置。在一些实施例中,终端130可包括具有界面显示的装置,以为终端的用户显示实景图。在一些实施例中,终端130可包括具有输入功能的装置,以用户输入目标位置。
数据库140可存储资料和/或指令。在一些实施例中,数据库140可存储从终端130获取的资料。在一些实施例中,数据库140可存储供服务器110执行或使用的信息和/或指令,以执行本说明书中描述的示例性方法。在一些实施例中,数据库140可以存储实景点(即实景点的位置信息(如,经纬度坐标))、实景点对应的实景图或实景图显示角度或方向或矫正算 法等。在一些实施例中,数据库140可包括大容量存储器、可移动存储器、挥发性读写存储器(例如,随机存取存储器RAM)、只读存储器(ROM)等或以上任意组合。在一些实施例中,数据库140可在云平台上实现。例如,该云平台可包括私有云、公共云、混合云、社区云、分散式云、内部云等或以上任意组合。
在一些实施例中,数据库140可与网络120连接以与系统100的一个或多个组件(例如,服务器110、终端130等)通讯。系统100的一个或多个组件可通过网络120访问存储于数据库140中的资料或指令。例如,服务器110可以从数据库140中是实景点或实景点对应的实景图等并进行相应处理。在一些实施例中,数据库140可直接与系统100中的一个或多个组件(如,服务器110、终端130)连接或通讯。在一些实施例中,数据库140可以是服务器110的一部分。
图2是根据本说明书一些实施例所示的一种示例性计算设备的示意图。
在一些实施例中,服务器110和/或终端130可以在计算设备200上实现。例如,处理设备112可以在计算设备200上实施并执行本说明书所公开的处理设备112的功能。如图2所示,计算设备200可以包括总线210、处理器220、只读存储器230、随机存储器240、通信端口250、输入/输出接口260和硬盘270。
处理器220可以执行计算指令(程序代码)并执行本说明书描述的实景图提供系统100的功能。所述计算指令可以包括程序、对象、组件、数据结构、过程、模块和功能(所述功能指本说明书中描述的特定功能)。例如,处理器220可以处理从实景图提供系统100的其他任何组件获取的图像或文本数据。在一些实施例中,处理器220可以包括微控制器、微处理器、精简指令集计算机(RISC)、专用集成电路(ASIC)、应用特定指 令集处理器(ASIP)、中央处理器(CPU)、图形处理单元(GPU)、物理处理单元(PPU)、微控制器单元、数字信号处理器(DSP)、现场可编程门阵列(FPGA)、高级RISC机(ARM)、可编程逻辑器件以及能够执行一个或多个功能的任何电路和处理器等,或其任意组合。仅为了说明,图2中的计算设备200只描述了一个处理器,但需要注意的是,本说明书中的计算设备200还可以包括多个处理器。
计算设备200的存储器(例如,只读存储器(ROM)230、随机存储器(RAM)240、硬盘270等)可以存储从实景图提供系统100的任何其他组件获取的数据/信息。示例性的ROM可以包括掩模ROM(MROM)、可编程ROM(PROM)、可擦除可编程ROM(PEROM)、电可擦除可编程ROM(EEPROM)、光盘ROM(CD-ROM)和数字通用盘ROM等。示例性的RAM可以包括动态RAM(DRAM)、双倍速率同步动态RAM(DDR SDRAM)、静态RAM(SRAM)、晶闸管RAM(T-RAM)和零电容(Z-RAM)等。
输入/输出接口260可以用于输入或输出信号、数据或信息。在一些实施例中,输入/输出接口260可以使用户与实景图提供系统100进行联系。在一些实施例中,输入/输出接口260可以包括输入装置和输出装置。示例性输入装置可以包括键盘、鼠标、触摸屏和麦克风等,或其任意组合。示例性输出装置可以包括显示设备、扬声器、打印机、投影仪等或其任意组合。示例性显示装置可以包括液晶显示器(LCD)、基于发光二极管(LED)的显示器、平板显示器、曲面显示器、电视设备、阴极射线管(CRT)等或其任意组合。通信端口250可以连接到网络以便数据通信。所述连接可以是有线连接、无线连接或两者的组合。有线连接可以包括电缆、光缆或电话线等或其任意组合。无线连接可以包括蓝牙、Wi-Fi、WiMax、WLAN、ZigBee、移动网络(例如,3G、4G或5G等)等或其任意组合。在一些实施例中, 通信端口250可以是标准化端口,如RS232、RS485等。在一些实施例中,通信端口250可以是专门设计的端口。
图3是根据本说明书一些实施例所示的移动设备的示例性硬件和/或软件的示意图。
如图3所示,移动设备300可以包括通信单元310、显示单元320、图形处理器(GPU)330、中央处理器(CPU)340、输入/输出单元350、内存360、存储单元370等。在一些实施例中,操作系统361(例如,iOS、Android、Windows Phone等)和应用程序362可以从存储单元370加载到内存360中,以便由CPU 340执行。应用程序362可以包括浏览器或用于从实景图提供系统100接收文字、图像、音频或其他相关信息的应用程序。
为了实现在本说明书中描述的各种模块、单元及其功能,计算设备或移动设备可以用作本说明书所描述的一个或多个组件的硬件平台。这些计算机或移动设备的硬件元件、操作系统和编程语言本质上是常规的,并且本领域技术人员熟悉这些技术后可将这些技术适应于本说明书所描述的实景图提供系统。具有用户界面元件的计算机可以用于实现个人计算机(PC)或其他类型的工作站或终端设备,如果适当地编程,计算机也可以充当服务器。
图4是根据本说明书的一些实施例所示的为用户提供实景图的系统的模块图。如图4所示,为用户提供实景图的系统(如处理设备112)可以包括第一显示模块410、第一确定模块420、第二确定模块430以及第二显示模块440。
第一显示模块410可以用于基于候选位置,确定与所述候选位置对应的至少一个第一实景图,以及在与用户相关的导航界面上显示所述至少一个第一实景图。
在一些实施例中,第一显示模块410还用于:基于所述候选位置, 确定至少一个第一候选实景点;基于所述至少一个第一候选实景点,获取所述至少一个第一实景图。
在一些实施例中,第一显示模块410还用于:对于所述至少一个第一实景图中的每一个,基于所述第一实景图对应的第一候选实景点和所述候选位置,确定所述第一实景图的显示方向和/或角度;基于所述显示方向和/或角度,在所述导航界面上显示所述第一实景图。
在一些实施例中,第一显示模块410还用于:在所述导航界面上显示所述第一实景图之前,对所述第一实景图进行预处理;其中,所述预处理包括:缩略、调整分辨率、调整亮度和调整饱和度中的一种或多种的组合。
第一确定模块420可以用于响应用户对所述至少一个第一实景图中一个或多个的确认操作,将得到确认的所述第一实景图对应的所述候选位置确定为目标位置。
第二确定模块430可以用于至少基于所述目标位置,确定目标实景点。
在一些实施例中,第二确定模块430还可以用于:基于所述目标位置,获取至少一个第二候选实景点;基于所述目标位置和所述第二候选实景点,判断预设条件是否被满足;响应于所述预设条件被满足,从所述预设条件被满足对应的第二候选实景点中确定所述目标实景点。
在一些实施例中,第二确定模块430还可以用于:判断所述目标位置与所述第二候选实景点之间的距离是否小于预设阈值。
在一些实施例中,第二确定模块430还可以用于:基于所述目标位置,获取至少一个第二候选实景点;基于预定车辆的定位和运动方向,以及所述目标位置,确定所述目标实景点;其中,所述预定车辆为前往所述目标位置与所述用户碰面的车辆。
在一些实施例中,第二确定模块430还可以用于:基于所述目标位置,获取至少一个待矫正第二候选实景点;使用矫正算法对所述至少一个待矫正第二候选实景点进行矫正,获取所述至少一个第二候选实景点。
第二显示模块440可以用于至少基于所述目标实景点,确定所述目标位置对应的第二实景图,以及在所述导航界面上显示所述第二实景图。
在一些实施例中,第二显示模块440还可以用于:基于所述目标实景点和所述目标位置,确定所述第二实景图的显示方向和/或角度;基于所述显示方向和/或角度,在所述导航界面上显示所述第二实景图。
在一些实施例中,第二显示模块440还可以用于:基于所述目标实景点和所述目标位置,确定第一方向和/或或第一角度;将所述第一方向和/或所述第一角度,或基于所述第一方向和/或所述第一角度旋转180°得到的所述第二方向和/或所述第二角度,作为在所述导航界面上显示所述第二实景图的方向和/或角度。
在一些实施例中,第二显示模块440还可以用于:响应于显示方向和/或角度的切换指令,根据切换后的显示角度和/或方向在所述导航界面显示所述第二实景图。
在一些实施例中,第二显示模块440还可以用于:基于用户当前位置和所述目标位置,确定所述第二实景图的缩略或放大参数;基于所述缩略或放大参数,在所述导航界面上显示所述第二实景图。
应当理解,图4所示的系统及其模块可以利用各种方式来实现。例如,在一些实施例中,系统及其模块可以通过硬件、软件或者软件和硬件的结合来实现。其中,硬件部分可以利用专用逻辑来实现;软件部分则可以存储在存储器中,由适当的指令执行系统,例如微处理器或者专用设计硬件来执行。本领域技术人员可以理解上述的方法和系统可以使用计算机可执行指令和/或包含在处理器控制代码中来实现,例如在诸如磁盘、CD或 DVD-ROM的载体介质、诸如只读存储器(固件)的可编程的存储器或者诸如光学或电子信号载体的数据载体上提供了这样的代码。本说明书的系统及其模块不仅可以有诸如超大规模集成电路或门阵列、诸如逻辑芯片、晶体管等的半导体、或者诸如现场可编程门阵列、可编程逻辑设备等的可编程硬件设备的硬件电路实现,也可以用例如由各种类型的处理器所执行的软件实现,还可以由上述硬件电路和软件的结合(例如,固件)来实现。
需要注意的是,以上对于为用户提供实景图的系统及其模块的描述,仅为描述方便,并不能把本说明书限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该系统的原理后,可能在不背离这一原理的情况下,对各个模块进行任意组合,或者构成子系统与其他模块连接。例如,图4中披露的第一显示模块410、第一确定模块420、第二确定模块430以及第二显示模块440可以是一个系统中的不同模块,也可以是一个模块实现上述的两个模块的功能。又例如,为用户提供实景图的系统中各个模块可以共用一个存储模块,各个模块也可以分别具有各自的存储模块。诸如此类的变形,均在本说明书的保护范围之内。
图5是根据本说明书一些实施例所示的为用户提供实景图的方法的流程图。如图5所示,流程500包括下述步骤。在一些实施例中,流程500可以由处理设备(例如,处理设备112)执行。
步骤510,基于候选位置,确定与候选位置对应的至少一个第一实景图,以及在与用户相关的导航界面上显示至少一个第一实景图。在一些实施例中,该步骤510可以由第一显示模块410执行。
候选位置是指用户需要前往的目标位置的备选。以共享出行为例,用户(即,乘客)在确定上车点时,可能会查看多个候选位置以确定目标位置(即,目标上车点)。用户可以是任意使用地图或导航的用户。例如,共享出行领域中的乘客等。在一些实施例中,用户可以在用户终端手动输入 或语音输入等方式输入候选位置,也可以从推荐列表中选择候选位置,也可以拖动定位图标确定候选位置。本实施例不对获取候选位置做出限制。例如,第一显示模块410可以直接从存储器中读取候选位置。
在一些实施例中,候选位置可以包括位置信息。在一些实施例中,位置信息可以包括但不限于名称信息和坐标信息。其中,坐标信息可以包括经纬度坐标信息,例如,GNSS坐标或GPS(Global Positioning System,全球定位系统)坐标等。本说明书中的一些实施例中基于候选位置进行处理(例如,确定距离、位置关系或获取图像等),实际可以基于候选位置的位置信息进行处理。在本说明书中,某些具体的名词可以包括于此名词相关的信息。例如,实景点、第一候选实景点、第二候选实景点、目标实景点或目标位置、用户当前位置、实景图等。相应的,在一些实施例中,涉及具体名词的操作,实际是针对与其相关的信息进行操作,后续不再赘述。
候选位置对应的第一实景图是指可以代表候选位置周围环境的图。其中,第一实景图中的图像内容可以完全或部分与候选位置周围环境相同。
在一些实施例中,第一显示模块410可以基于候选位置和历史数据获取第一实景图。例如,第一显示模块410可以从存储设备(例如,数据库140)中获取历史数据,将历史数据中针对候选位置显示的图像作为第一实景图。示例的,将历史数据中,针对候选位置,基于显示给其他用户的图像确定第一实景图。
在一些实施例中,第一显示模块410可以基于与候选位置相关的或与候选位置之间关系满足要求的实景点,获取第一实景图。
实景点是指拍摄现实环境(例如,城市、商圈或街道等环境)的实际拍摄点或实际采集点。在一些实施例中,实景点为道路上的点。例如,在某条道路上,每隔一定距离(例如5米)确定一个实景点,并进行实景拍摄。将基于实景点拍摄的图像或者对拍摄的图象进行处理后得到图像称为实景 图。一般情况下,基于实景点拍摄实景图时,是以实景点向各个方向上采集图像,即,实景图可以是360°全景图像,其也可称为全景图。例如,实景图(或全景图)以实景点为中心由轴向和纵向绕其一周进行拍摄。又例如,实景图(或全景图)可以预先通过全景采集车在路网上不断采集处理后获得。又例如,实景图(或全景图)采用全景摄像机在对应的实景点采集的图像等。
实景点及其对应的实景图可以从预定地图服务中获取,例如,通过API接口调用预定地图服务获取等。实景点及其对应的实景图还可以直接从存储设备(例如,数据库140)中获取。在终端或服务器中嵌入多个实景点坐标与其对应的全景图,即实景图还可以从终端或服务器中直接读取。获取实景点及其对应的实景图还可以通过其他方式,本实施例不做限制。
在一些实施例中,第一显示模块410可以基于候选位置,确定至少一个第一候选实景点;基于至少一个第一候选实景点,获取至少一个第一实景图。
第一候选实景点是指与候选位置相关的实景点。例如,第一候选实景点是指与候选位置之间关系满足预设要求的实景点。在一些实施例中,预设要求可以包括但不限于:第一候选实景点与候选位置之间的距离小于阈值、距离最小和第一候选实景点与候选位置之间不存在遮挡物(例如,建筑或施工场地等)等中的一种或多种。可以理解的,基于预设要求,第一候选实景点可以是一个,也可以是多个。在一些情况下,通过上述预设要求,可以使得显示的第一实景图中候选位置的相关内容足够多,便于用户根据第一实景图确定候选位置是否为目标位置。
如前所述,实景图是基于实景点拍摄得到,即,实景点存在对应的实景图,例如,实景图与实景点的位置信息对应。确定了至少一个第一候选实景点的基础上,可以获取对应至少一个第一候选实景点的实景图,并基 于至少一个第一候选实景点的实景图确定候选位置对应的至少一个第一实景图。例如,直接将第一候选实景点的实景图作为第一实景图。又例如,对第一候选实景点的实景图进行处理,将处理后得到的实景图作为第一实景图。其中,处理包括但不限于:放大、缩小、裁剪、调节分辨率、调节饱和度或调节亮度等图像处理手段。
在一些实施例中,第一显示模块410可以将第一候选实景点的实景图或第一候选实景点的实景图被处理后得到的实景图在特定视角角度下的部分图像作为第一实景图,该特定视角角度下的图像包含候选位置。
在一些实施例中,第一显示模块410可以将确定的至少一个第一实景图中的一个或多个在与用户相关的导航界面上显示。例如,若确定的第一候选实景点有多个,则随机选择一个或者与候选位置距离最小的第一实景点的实景图,并基于该实景图进行显示。又例如,基于所有的第一候选实景点的实景图进行显示等。在一些实施例中,在导航界面上展示第一实景图可以是第一候选实景点周围的部分或全部景象,即,展示的是第一实景图可以是第一候选实景点的实景图的部分或全部内容。
第一显示模块410可以获取候选位置对应的实景缩略图,可以理解的,将候选位置对应的实景缩略图作为候选位置对应的第一实景图,以及在导航界面上显示实景缩略图。其中,实景缩略图用于展示候选位置周围街景的缩略图。例如,通过调用预定地图服务获取与该候选位置距离最近的实景点的坐标,根据该实景点的坐标确定实景缩略图。其中,该实景缩略图为以该实景点为采集点采集获取的街景图像的缩略图。
在一些实施例中,通过API接口调用地图服务,获取地图服务根据候选位置返回的实景点坐标及其对应的全景图,对该全景图进行图像处理确定对应的实景缩略图。
在一些实施例中,也可以在终端或服务器的存储器中存储多个实景 点坐标与其对应的全景图,以根据候选位置对应的第一候选实景点坐标确定对应的实景缩略图。
关于在与用户相关的导航界面上显示第一实景图的更多细节参见图7及其相关描述。
步骤520,响应用户对至少一个第一实景图中一个或多个的确认操作,将得到确认的第一实景图对应的候选位置确定为目标位置。在一些实施例中,该步骤520可以由第一确定模块420执行。
如前所述,目标位置为用户希望前往的位置。以共享出行为例,可以将用户确定的上车点确定为目标位置。乘客通过乘客终端下单时,需要确定订单的起点和终点。在实际场景中,订单起点往往不是乘客终端的当前位置,而是由用户确定的便于上车的位置。例如,用户在写字楼内下单,将上车点确定为距离写字楼最近的路口等。
确认操作可以是用户认为显示的第一实景图与其希望前往的位置(即,目标位置)相符或部分相符而发起的操作。可以理解的,用户的操作生成系统的指令。在一些实施例中,确认操作可以是用户能够通过用户终端实现的操作,包括但不限于:点击确认、语音确认或触发确认等。其中,点击确认可以是用户点击界面上展示的第一实景图对应的确认按钮(若显示的第一实景图有多个,则确认按钮也对应设置),或者是直接点击第一实景图等实现。触发确认操作可以与应用场景相关,以共享出行为例,则触发确认操作可以包括乘客发起乘车订单操作,或输入乘车前往的目的地等。确认操作还可以包括其他类型,本实施例不做限制。例如,用户完成指定动作(如,点头或摇头等)等。
在一些实施例中,第一确定模块420可以基于更换触发条件,重新获取新的候选位置,或重新获取新的第一实景图。更换触发条件包括:用户发起的更换候选位置的操作、更换第一实景图的操作或其他触发条件(例 如,用户在预设时长内未进行确认操作等)。关于重新获取新的候选位置或新的第一实景图,与步骤510中描述的获取候选位置和第一实景图类似。示例的,在乘客对当前选择的候选位置不满意时,可以通过拖动导航界面的候选位置指示标志或者候选位置列表中的其他候选位置来进行候选位置的更换。
示例的,如图6a所示,以网约车应用场景为例,乘客的终端导航界面包括乘客终端的当前位置61、用户当前确定的候选位置62以及根据候选位置62获取的实景缩略图63。其中,当用户选择候选位置62时,确定距离候选位置62最近的实景点坐标,并获取该实景点坐标对应的全景图,对全景图进行处理获得实景缩略图63,在乘客的终端导航界面渲染显示实景缩略图63。乘客可以根据实景缩略图63确定是否将该候选位置62确定为目标位置,假设乘客不满候选位置62,可以通过拖动导航界面的候选位置指示标志或者候选位置列表中的其他候选位置发送位置更换指令。
如图6b所示,假设乘客将候选位置指示标志拖动至候选位置64处,获取候选位置64的坐标和距离候选位置64最近的实景点坐标,并获取该实景点坐标对应的全景图,对全景图进行处理获得实景缩略图65,在乘客的终端导航界面渲染显示实景缩略图65,此时若接收到确定指令,将当前的候选位置64确定为目标位置。
本实施例通过在导航界面上显示候选位置对应的实景图(或实景缩略图),使得乘客可以根据实景图(或实景缩略图)确定目标位置,由此可以使得该目标位置便于到达。同时,在一些情况下,若为用户展示的是实景缩略图,可以方便用户查看地图上其他信息。以共享出行为例,在乘客选择候选上车点后,确定该候选上车点对应的实景缩略图,并将该实景缩略图显示在乘客终端的导航界面上,以便乘客确定是否将该候选上车点确定为目标上车点。
步骤530,至少基于所述目标位置,确定目标实景点。在一些实施例中,该步骤530可以由第二确定模块430执行。
在一些实施例中,第二确定模块430可以基于目标位置,确定与目标位置之间关系满足预设条件的目标实景点。
在一些实施例中,第二确定模块430可以从存储设备中读取实景点,或调用地图服务获取实景点,判断读取或获取的实景点是否与目标位置之间的关系满足预设条件。
在一些实施例中,第二确定模块430基于目标位置,获取至少一个第二候选实景点;基于目标位置和第二候选实景点,判断预设条件是否被满足;响应于预设条件被满足,从预设条件被满足对应的第二候选实景点中确定目标实景点。
第二候选实景点是指与目标位置相关的实景点。在一些实施例中,第二候选实景点可以是目标位置周围的实景点。例如,第二候选实景点可以是以目标位置为中心,设定距离(例如,5m、10m等)为半径,确定的区域范围内的实景点。在一些实施例中,第二候选实景点可以是目标位置周围,且对应的实景图被用户反馈好评的实景点等。第二候选实景点可以是与目标位置的距离排名在Top n(n可以是3、5、10等正整数)的实景点,其中,排名是按距离由近到远的顺序。在一些实施例中,第二候选实景点可以是目标位置周围与用户画像有一定符合程度的实景点。其中,用户画像是对用户的描述,包括,用性别、年龄、职业或喜好等。例如,用户爱好购物,其对商场敏感,可以将目标位置周围的实景点中,与商场距离较近的实景点作为第二候选实景点。在一些实施例中,第二确定模块430可以从存储设备中读取第二候选实景点,或调用地图服务获取第二候选实景点。
在一些实施例中,预设条件可以是与目标位置之间的距离小于预设 阈值(例如,2m、3m等)。在一些实施例中,预设条件可以是所有采集的实景点中或至少一个第二候选实景点中,与目标位置之间距离最小。例如,对所有实景点或至少一个第二候选实景点和目标位置之间的距离进行由近到远排序,将top1作为目标实景点。通过上述与距离相关预设条件,可以尽量确定与目标位置的周围环境接近的目标实景点,使得用户看到的目标位置周围的实际环境与显示的第二实景图(关于第二实景图的显示见步骤540)更接近,便于用户确认是否达到目标位置。而且,若用户需要通过前往目标实景点观察确认是否达到目标位置时,上述与距离相关的预设条件,可以尽量缩短用户确认的时间。
在一些实施例中,在判断预设条件是否被满足时,除了考虑目标位置和第二候选实景点(或实景点)相关以外,还可以考虑用户相关的信息。其中,用户相关的信息包括但不限于:用户当前位置、用户运动方向和/或用户画像等。例如,将以目标位置为起点,实景点或第二候选实景点为终点构成的方向,与用户运动方向的夹角是否小于预设角度阈值(例如,20°、50°、75°、90°等),小于则预设条件被满足,反之则不被满足。可以理解的,以目标位置为起点,实景点或第二候选实景点为终点构成的方向,与用户运动方向的夹角是否小于预设角度阈值考虑。通过该方式(即,基于用户运动方向相关的夹角)确定目标位置,并将该目标实景点的实景图或对目标实景点的实景图处理之后得到的实景图作为目标位置对应的实景图,显示给用户,使得用户在运动过程中,无需向后转身观察当前实际环境是否与显示的实景图接近,即,在确保用户的安全的情况下,帮助用户寻找目标位置,或确定当前路线是否有偏航等。
在一些实施例中,随着用户的运动,可以是实时或每隔预定时间(例如,1min等),基于更新后的用户运动方向,更新目标实景点。
在一些实施例中,在确定目标实景点时,还可以考虑预定车辆的信 息。预定车辆的信息包括但不限于:预定车辆的当前位置、预定车辆的行驶方向等。
在一些实施例中,可以基于预定车辆的运动方向、第二候选实景点、以及目标位置,从至少一个第二候选实景点中确定目标实景点。具体的,基于第二候选实景点和目标位置确定的方向,与预定车辆运动方向的夹角,判断预设条件是否被满足,并从满足预设条件对应的第二候选实景点中确定目标实景点。例如,判断以第二候选实景点为起点目标位置为终点确定的方向与预定车辆运动方向的夹角是否小于或等于阈值角度,是则预设条件被满足。可选的,角度阈值可以是小于或等于90°。
在一些实施例中,可以基于预定车辆的当前位置、第二候选实景点、以及目标位置,判断预设条件是否被满足,并从满足预设条件对应的第二候选实景点中确定目标实景点。例如,判断第二候选实景点是否位于预定车辆的当前位置的前方,且位于目标位置的后方,是则预设条件被满足。
在一些实施例中,当满足预设条件对应的第二候选实景点为多个时,可以将与目标位置距离最小的第二候选实景点作为目标实景点。
可以理解的,为了更方便预定车辆和用户碰面(例如,预定车辆可以是共享出行中接单司机行驶的车辆),双方显示的实景图相同。车辆在行驶过程中,安全视角可以是前方0°-180°,为了保证车辆司机安全驾驶,同时,展示的实景图尽量与目标位置的真实环境接近,在确定目标实景点时,可以同时考虑车辆的相关信息和目标位置。
可以理解的,基于该方式(即,基于与预定车辆运动方向确定的夹角)确定目标实景点,并将基于目标实景点确定的第二实景图分别显示给预定车辆的司机和用户,可以在尽量为用户提供与目标位置接近的实景图的同时,使得预定车辆的司机通过向前方观测实际环境,便能基于显示的实景图发现目标位置。即,在不仅便于用户基于显示的实景图确定是否达 到目标位置的同时,也保证了预定车辆的司机的安全驾驶。
在一些实施例中,在确定目标实景点时,还可以考虑目标位置周围满足条件的车辆行驶道路的相关信息,例如,规定的车辆行驶方向、道路位置等。满足条件的车辆行驶道路是指用户步行至行驶道路的步行成本最小或小于预定值的车辆行驶道路,步行成本可以通过步行距离、是否需要跨路(即,过斑马线)等衡量。具体的,基于规定的车辆行驶方向、目标位置和第二候选实景点,判断预设条件是否被满足,并从满足预设条件对应的第二候选实景点中确定目标实景点。其中,判断方式或预设条件与上述基于预定车辆运动方向类似,不再赘述。
考虑到基于定位技术(例如,GPS或GNSS等定位系统)获取的位置信息(例如,坐标信息)进行地图匹配(将位置信息匹配到路网上)时可能会有误差,应该在路网上的位置可能不显示在路网上(例如,路旁边的房屋里或池塘里等),或者,通过API接口调用预定地图服务,获取该预定地图服务根据目标位置返回的实景点坐标,由于地图服务返回的坐标有时候可能不准确。在一些情况下,实景点坐标不准确的,可能会导致在后续确定实景图时不准确,进一步可能会导致向用户传递错误的信息,影响用户体验。
在一些实施例中,实景点、第一候选实景点、第二候选实景点和目标实景点是经过矫正后得到的实景点。在一些实施例中,从存储设备或地图服务等获取或读取实景点之后,对其进行矫正,矫正完之后再进行后续的步骤,例如,确定第一候选实景点、第二候选实景点和/或目标实景点等。
在一些实施例中,第二确定模块430可以基于目标位置,获取至少一个待矫正第二候选实景点;使用矫正算法对至少一个待矫正第二候选实景点进行矫正,获取至少一个第二候选实景点。待矫正第二候选实景点是指基于定位技术获取的实景点中,基于目标位置确定的实景点。例如,通 过API接口调用预定的地图服务,并获取预定的地图服务根据目标位置返回的实景点坐标。
矫正算法可以是指将待矫正的实景点(例如,待矫正第二候选实景点)关联到路网上的算法。例如,采用矫正算法可以将地图服务返回的GNSS或GPS坐标关联到地图的路网上,也即将GNSS或GPS下采样的坐标序列转换为路网坐标序列以对地图服务返回的坐标进行矫正。在一些实施例中,矫正算法可以是将待矫正的实景点关联到最近的路线上的算法,例如,将待矫正的实景点直接投影到最近的路线上,将投影点作为矫正后的实景点(例如,第二候选实景点)。路线是地图中的路网的组成部分,路线可以视为地图中的道路。在一些实施例中,矫正算法可以是隐马尔科夫模型(Hidden Markov Model,HMM)、ST-Matching算法或IVVM((An Interactive-Voting Based Map Matching Algorithm)算法等。
步骤540,至少基于目标实景点,确定目标位置对应的第二实景图,以及在导航界面上显示第二实景图。在一些实施例中,该步骤540可以由第二显示模块440执行。
如前所述,实景点存在对应的实景图。在一些实施例中,目标实景点被确定的基础上,第二显示模块440可以获取目标实景点的实景图,并基于目标实景点的实景图确定目标位置对应的第二实景图。例如,根据API接口调用预定地图服务,获取该预定地图服务根据目标实景点坐标返回的实景图(或全景图)。返回的实景图(或全景图)可以是以目标实景点为采集点向各个方向采集的图像或全景图像。又例如,直接从存储设备中读取目标实景点对应的实景图。
在一些实施例中,第二显示模块440可以直接将目标实景点的实景图作为目标位置对应的实景图。在一些实施例中,第二显示模块440在获取了目标实景点的实景图之后,可以对该实景图进行处理,并将处理后的 实景图作为目标位置对应的第二实景图。其中,处理包括但不限于:放大、缩小、裁剪、调节分辨率、调节饱和度或调节亮度等图像处理手段。
在一些实施例中,第二显示模块440可以将目标实景点的实景图或目标实景点的实景图被处理后得到的实景图在特定视角角度下的部分图像作为第二实景图。在一些实施例中,特定视觉视角可以基于预定车辆运动方向、目标位置和目标实景点确定。例如,将第一向量和第二向量的夹角确定为特定视觉角度,其中,第一向量由候选实景点和目标位置确定,第二向量由预定车辆运动方向确定。在一些实施例中,特定视觉视角可以基于目标位置和目标实景点确定。
在一些实施例中,在导航界面上显示的第二实景图可以是目标实景点的全部景象或部分景象,即,展示的是第二实景图可以是目标实景点的实景图的部分或全部内容。在展示第二实景图之前,可以对第二实景图进行预处理,例如视角调整、缩放或放大、调整分辨率等。关于视角调整的具体内容可以参见本说明书图9和图11及其相关内容。
在一些实施例中,向用户展示第二实景图的过程中还可以接收用户的操作指令,并根据用户的操作指令对展示给用户的第二实景图进行调整,例如,更换实景图的指令、实景图显示视角的调整指令、视距的调整指令、缩小放大指令等,从而根据用户需求为其提供相应的实景图。将第二实景图通过导航界面展示给用户,有助于引导用户前往目标位置,或者,帮助用户确认是否已经到达目标位置等。
图7是根据本说明书一些实施例所示的在导航界面上显示至少一个第一实景图中每一个的流程图。如图7所示,流程700包括下述步骤。在一些实施例中,流程700可以由处理设备(例如,处理设备112)的第一显示模块410执行。
步骤710,基于第一实景图对应的第一候选实景点和候选位置,确定第一实景图的显示方向和/或角度。
实景图显示包含两个角度:水平视角和俯仰角。其中,水平视角可以是在以水平地面为基础建立的二维平面中,实景图显示的角度。例如,在建立的二维坐标系中,正北方向为0°,正南方向为180°。水平视角也可称为水平显示方向。俯仰角是指与水平地面的夹角。
实景点的实景图可以为360°全景图像,若将第一候选实景点的360°全景图像确定为候选位置对应的第一实景图,可以通过显示方向和/或角度显示第一实景图,便于用户结合显示的实景图进行观察,确定是否将候选位置作为目标位置。
在一些实施例中,第一实景图的水平显示方向可以是以第一实景图对应的第一候选实景点为起点,以候选位置为连接点形成的方向。
在一些实施例中,第一实景图的水平视角可以是以第一实景图对应的第一候选实景点为起点、候选位置为连接点形成的方向,与初始方向的夹角。初始方向是指地图服务对实景图进行展示时默认或设定的初始方向(例如,正北方向),初始方向可以根据具体应用场景设置,本实施例对此不做限制。
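作为水平视角计算方式的一个简化示意,下述Python片段假设位置以(东向, 北向)的平面坐标表示、初始方向为正北(0°),函数名为示例性假设:

```python
import math

def horizontal_view_angle(spot, position, init_bearing=0.0):
    """计算以实景点为起点、候选位置为连接点的方向相对初始方向的水平视角。

    坐标按(东, 北)的平面坐标表示(示例性假设);
    返回自初始方向顺时针旋转的角度,范围[0, 360)。
    init_bearing: 初始方向相对正北的角度,默认正北为0°。
    """
    de = position[0] - spot[0]   # 东向分量
    dn = position[1] - spot[1]   # 北向分量
    bearing = math.degrees(math.atan2(de, dn)) % 360.0   # 相对正北的方位角
    return (bearing - init_bearing) % 360.0

# 用法示例:候选位置位于实景点正东方向,则水平视角为90°
print(horizontal_view_angle((0, 0), (5, 0)))   # 90.0
```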
在一些实施例中,俯仰角可以是默认值,例如0°等,或者基于用户选择确定。
步骤720,基于显示方向和/或角度,在与用户相关的导航界面上显示第一实景图。
在一些实施例中,第一显示模块410可以基于确定的水平显示方向,显示第一实景图。例如,将第一实景图从初始方向旋转至该确定的水平显示方向来显示第一实景图。
在一些实施例中,第一显示模块410可以基于确定的水平视角,显示第一实景图。例如,将实景图由初始方向旋转该确定的水平视角来显示第一实景图。
在一些实施例中,在显示第一实景图之前,可以对第一实景图进行预处理。在一些实施例中,预处理可以包括:缩略、裁剪、调整分辨率、调整亮度和调整饱和度中的一种或多种的组合。
在一些实施例中,缩略可以是指对第一实景图进行缩小,经过缩略处理后的第一实景图可以称为实景缩略图。
在一些实施例中,在导航界面上展示第一实景图之后,用户可以对展示的第一实景图进行操作,从而根据用户需求为其提供相应的实景图,有助于方便用户确定目标位置。例如,旋转至其他方向或角度。又例如,对图像进行放大或缩小等,示例的,用户可以通过点击实景缩略图获得实景图。
如图8所示,C1为候选位置,B1和B2为第一候选实景点,a为显示实景图的初始方向。若将B1对应的实景图作为第一实景图进行显示,则将B1和C1确定的方向b1作为水平显示方向,或者将b1与a的夹角θ1作为水平视角。在导航界面上显示时,将实景图从a旋转到b1,或者从a顺时针旋转θ1进行显示。若将B2对应的实景图作为第一实景图进行显示,则将B2和C1确定的方向b2作为水平显示方向,或者将b2与a的夹角θ2作为水平视角。在导航界面上显示时,将实景图从a旋转到b2,或者从a顺时针旋转θ2进行显示。
图9是根据本说明书一些实施例所示的基于显示方向和/或角度在导航界面上显示第二实景图的方法的流程图。如图9所示,流程900包括下述步骤。在一些实施例中,流程900可以由处理设备(例如,处理设备112)的第二显示模块440执行。
步骤910,基于目标实景点和目标位置,确定第二实景图的显示方向和/或角度。
实景点的实景图可以为360°全景图像,将目标实景点的360°全景图像确定为目标位置对应的第二实景图,基于显示方向和/或角度显示第二实景图,便于用户结合显示的实景图进行观察,引导用户前往目标位置或确认是否到达目标位置。
在一些实施例中,显示第二实景图的水平显示方向可以是基于目标实景点和目标位置确定的第一方向。例如,第一方向是以目标实景点为起点,以目标位置为连接点形成的方向。又例如,第一方向是以目标位置为起点,以目标实景点为连接点形成的方向。在一些实施例中,第二实景图的显示方向可以是将第一方向旋转180°确定的第二方向。
在一些实施例中,第二实景图的水平视角可以是基于目标实景点和目标位置确定的第一夹角。例如,第一夹角为以目标实景点为起点、目标位置为连接点形成的方向,与实景图展示的初始方向的夹角。又例如,第一夹角为以目标位置为起点、目标实景点为连接点形成的方向,与实景图展示的初始方向的夹角。在一些实施例中,第二实景图的水平视角可以是第一夹角旋转180°确定的第二夹角。
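作为第一夹角旋转180°得到第二夹角的简化示意(将角度归一化到[0°, 360°)为示例性处理方式):

```python
def opposite_angle(first_angle_deg):
    """将第一夹角旋转180°得到第二夹角,并归一化到[0, 360)。"""
    return (first_angle_deg + 180.0) % 360.0

# 用法示例
print(opposite_angle(30.0))    # 210.0
print(opposite_angle(250.0))   # 70.0
```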
在一些实施例中,俯仰角可以是默认值,例如,0°等,或者基于用户选择确定。
步骤920,基于显示方向和/或角度,在导航界面上显示第二实景图。
在一些实施例中,第二显示模块440可以基于确定的水平显示方向,显示第二实景图。例如,将第二实景图从初始方向旋转至该确定的水平显示方向来显示第二实景图。
在一些实施例中,第二显示模块440可以基于确定的水平视角,显示第二实景图。例如,将实景图由初始方向旋转该确定的水平视角来显示第二实景图。
在一些实施例中,在显示第二实景图之前,可以对第二实景图进行预处理。在一些实施例中,预处理可以包括:放大、裁剪、调整分辨率、调整亮度和调整饱和度中的一种或多种的组合。可以理解的,显示放大后的实景图有助于用户将显示的图像与当前实际所处环境进行对比。
在一些实施例中,第二显示模块440还可以响应于显示方向和/或角度的切换指令,根据切换后的显示角度和/或方向在导航界面显示第二实景图。在一些实施例中,切换指令可以由多种与切换相关的用户操作发起,与切换相关的用户操作包括:点击切换、移动切换、旋转切换、手动输入切换或语音切换等。例如,点击切换可以是用户点击导航界面上展示的第二实景图中的任意一点,通过点击切换,导航界面可以切换至点击的点所在的方向或角度。移动切换可以是用户移动导航界面上展示的第二实景图或导航界面。旋转切换可以是用户旋转移动导航界面上展示的第二实景图,以展示旋转方向或旋转角度的第二实景图。手动输入切换或语音切换可以是手动输入或语音输入显示方向和/或角度。
如图10所示,A1为目标实景点,L1为目标位置,a为实景图展示的初始方向。水平显示方向是以目标实景点A1为起点,以目标位置L1为连接点形成的方向r1,或水平视角可以是a与r1的夹角α,在导航界面上显示时,将实景图从a旋转到r1,或者从a旋转α进行显示。水平显示方向也可以是r1旋转180°得到的r2,水平视角也可以是r1旋转180°后得到的(180°+α)或η,在导航界面上显示时,将实景图从a旋转到r2,或者从a顺时针旋转(180°+α)或逆时针旋转η进行显示。
图11是根据本说明书一些实施例所示的基于缩略或放大参数在导航界面上显示第二实景图的方法的另一流程图。如图11所示,流程1100包括下述步骤。在一些实施例中,流程1100可以由处理设备(例如,处理设备112)的第二显示模块440执行。
步骤1110,基于用户当前位置和目标位置,确定第二实景图的缩略或放大参数。
用户当前位置可以通过定位技术获取。在一些实施例中,用户当前位置可以通过用户终端或定位装置获取。用户终端可以为乘客移动终端,也可以为其他车载设备,用户当前位置也可以根据API接口调用对应的地图服务获得,本实施例不做限制。
在一些实施例中,第二显示模块440可以基于用户当前位置和目标位置的距离,确定第二实景图的缩略或放大参数。例如,缩略的参数大小与距离大小成正比。又例如,放大的参数大小与距离大小成反比。可以理解的,缩略或放大参数是相对于原图大小而言的缩小参数或放大参数,参数可以是倍数、比例等。在一些实施例中,可以基于缩放算法,确定特定距离下的缩略或放大参数。
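作为基于距离确定缩略或放大参数的一个简化示意,下述Python片段中的距离区间与倍数上下限均为示例性假设,仅用于说明"距离越近、放大倍数越大"的关系:

```python
import math

def zoom_factor(user_pos, target_pos, near=50.0, far=1000.0,
                min_scale=0.5, max_scale=2.0):
    """基于用户当前位置与目标位置的距离,确定第二实景图的缩略或放大参数(相对原图的倍数)。

    距离越远,倍数越小(偏向缩略);距离越近,倍数越大(偏向放大)。
    near、far、min_scale、max_scale 均为示例性假设参数。
    """
    d = math.hypot(user_pos[0] - target_pos[0], user_pos[1] - target_pos[1])
    d = max(near, min(far, d))                   # 将距离截断到[near, far]区间
    ratio = (far - d) / (far - near)             # 距离越近,ratio越接近1
    return min_scale + ratio * (max_scale - min_scale)

# 用法示例
print(zoom_factor((0, 0), (60, 0)))     # 距离较近,约1.98,接近放大上限
print(zoom_factor((0, 0), (900, 0)))    # 距离较远,约0.66,接近缩略下限
```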
考虑到用户在不同阶段对导航的不同需求,可以对在导航界面上显示的第二实景图进行缩放。在一些情况下,用户距离目标位置越远,用户在导航页面上查看路线、路况等信息的需求可能越大,此时显示的第二实景图可以在保证能够清楚显示的前提下进行缩略处理,或者以较小的放大倍数显示。用户距离目标位置越近,用户使用实景图的需求越大,可以将展示的实景图进行放大。
步骤1120,基于缩略或放大参数,在导航界面上显示第二实景图。
在一些实施例中,在确定缩略或放大参数之后,可以在导航界面对第二实景图进行显示。例如,第二显示模块440可以在显示的过程中,实时对第二实景图进行缩放,然后将缩放之后的第二实景图在导航界面上显示,或者,将缩放之后的第二实景图以一定的水平视角和俯仰角在导航界面上显示。又例如,第二显示模块440可以在存储设备中存储不同距离下对应缩略或放大参数的第二实景图,在显示的过程中,根据距离直接读取对应的第二实景图进行显示。
在一些实施例中,第二实景图的大小还可以根据用户的操作指令进行调整,从而根据用户需求来确定向用户展示的实景图的大小,确保用户的使用体验。
示例的,如图12a和图12b所示,目标位置为121,当用户当前位置为122时,用户当前位置距离目标位置较远,展示的第二实景图是123。当用户当前位置为124时,用户当前位置距离目标位置较近,展示的第二实景图是125。可以看出125比123图像更大,遮挡的信息更多。
在一些实施例中,显示模块还可以根据缩略或放大参数确定相应的图像预处理手段,以保证在导航界面上显示的图像清晰。例如,若放大倍数大于阈值(例如,1倍),对图像进行锐化处理,使图像的轮廓清晰。
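作为"放大倍数大于阈值时进行锐化"的一个简化示意,下述Python片段假设使用Pillow图像库,其中的锐化滤波器与阈值取值均为示例性选择,并非本说明书限定的预处理手段:

```python
from PIL import Image, ImageFilter   # 示例性假设使用Pillow图像库

def scale_image(img, scale, sharpen_threshold=1.0):
    """按缩放倍数调整实景图大小;当放大倍数超过阈值时进行锐化,使轮廓更清晰。"""
    w, h = img.size
    resized = img.resize((int(w * scale), int(h * scale)))
    if scale > sharpen_threshold:                     # 放大倍数大于阈值(例如1倍)时锐化
        resized = resized.filter(ImageFilter.SHARPEN)
    return resized

# 用法示例:用纯色图像代替实际的第二实景图
demo = Image.new("RGB", (400, 300), (128, 128, 128))
print(scale_image(demo, 2.0).size)    # (800, 600)
```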
图13是根据本说明书一些实施例所示的为用户提供实景图的方法的另一流程图。如图13所示,流程1300包括下述步骤。在一些实施例中,流程1300可以由处理设备(例如,处理设备112)执行。
步骤1310,确定目标位置。可选的,本实施例将用户确定的上车点确定为目标位置。在一些实施例中,该步骤可以由第一确定模块420执行。
步骤1320,响应于导航界面上的实景缩略图被操作,获取预定车辆运动方向和目标位置周围的第二候选实景点坐标。在一些实施例中,该步骤可以由第二确定模块430执行。
步骤1330,基于第二候选实景点坐标和预定车辆运动方向判断预定条件是否被满足,响应于满足,将第二候选实景点坐标确定为目标位置对应的目标实景点坐标。在一些实施例中,该步骤可以由第二确定模块430执行。
以网约车场景为例,预定车辆可以为处理任务的网约车司机的车辆。可以从网约车平台或服务器获取预定车辆运动方向,也可以根据API接口调用地图服务以获取预定车辆运动方向,也可以从预定车辆的终端(例如,司机终端或其他车载设备等)获取预定车辆运动方向,本实施例对此不做限制。应理解的,响应于导航界面上的实景缩略图被操作,可以同时获取预定车辆运动方向和目标位置周围的第二候选实景点坐标,也可以先获取预定车辆运动方向,再获取目标位置周围的第二候选实景点坐标,或者先获取目标位置周围的第二候选实景点坐标,再获取预定车辆运动方向,本实施例对此不做限制。
在一些实施例中,根据第二候选实景点与目标位置之间的距离依次获取目标位置周围的第二候选实景点坐标,直至预定条件被满足。容易理解,实景点坐标距离目标位置越近,越便于根据实景点坐标对应的实景图确定目标位置。本实施例可以根据第二候选实景点坐标与目标位置之间的距离从近到远依次获取第二候选实景点坐标,直至找到满足预定条件的第二候选实景点坐标。
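作为"按距离由近到远依次检查候选实景点,直至预定条件被满足"这一过程的简化示意,下述Python片段中坐标表示与函数名均为示例性假设,预定条件以判断函数的形式传入:

```python
import math

def find_target_spot(candidates, target, meets_condition):
    """按与目标位置的距离由近到远依次检查候选实景点,返回首个满足预定条件的点。

    candidates:      [(x, y), ...] 第二候选实景点坐标
    target:          (x, y)        目标位置
    meets_condition: 判断预定条件是否被满足的函数,例如基于预定车辆运动方向夹角的判断
    """
    ordered = sorted(candidates,
                     key=lambda p: math.hypot(p[0] - target[0], p[1] - target[1]))
    for spot in ordered:
        if meets_condition(spot, target):
            return spot          # 距目标位置最近且满足条件的实景点
    return None

# 用法示例:条件恒为真时返回距离最近的候选点
print(find_target_spot([(5, 5), (1, 1)], (0, 0), lambda s, t: True))   # (1, 1)
```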
在一些实施例中,第二候选实景点坐标是经过矫正后的坐标。关于矫正具体参见步骤530。
在一些实施例中,预定条件为第一向量和第二向量的夹角小于或等于角度阈值。其中,第一向量由第二候选实景点坐标和目标位置确定,第二向量由预定车辆运动方向确定。在一些实施例中,第一向量的向量起点为第二候选实景点坐标,第二向量的向量起点为预定车辆当前位置,角度阈值例如为90°。本实施例使得当预定车辆运动至目标实景点时,司机向前方观测便能够发现目标位置及其周围的街景,这提高了车辆行驶的安全性。应理解,本实施例并不对第一向量和第二向量的向量起点进行限制,角度阈值可以根据第一向量和第二向量的向量起点或向量终点进行确定。例如,预定车辆运动方向可以为预定车辆在导航路线中的各路段上的运动方向。
示例的,如图14所示,获取预定车辆运动方向e以及与目标位置L1最近的第二候选实景点A1的坐标。假设目标位置L1周围(例如以目标位置L1为中心的第一阈值范围内)存在第二候选实景点A1-A3,其中,第二候选实景点A1-A3与目标位置L1之间的距离由近及远依次为第二候选实景点A1、第二候选实景点A2、第二候选实景点A3。
根据第二候选实景点A1的坐标和目标位置L1确定第一向量a1,根据预定车辆运动方向e确定第二向量b,计算第一向量a1和第二向量b的夹角α,夹角α大于90°,当预定车辆运动至第二候选实景点A1时,司机需要向斜后方观测才能发现目标位置L1及其周围的街景,这对安全性会带来影响。为了保证司机驾驶的安全性和司乘之间的街景同步性,第二候选实景点A1不满足预定条件,之后获取与目标位置L1较近的第二候选实景点A2的坐标,根据第二候选实景点A2的坐标和目标位置L1确定第一向量a2,计算第一向量a2和第二向量b的夹角θ,夹角θ小于90°,由此,当预定车辆运动至第二候选实景点A2时,司机向前方观测便能够发现目标位置L1及其周围的街景,可以将第二候选实景点A2的坐标确定为目标实景点坐标。
通过预定车辆的司机对应的视觉角度来确定目标位置对应的全景图,可以使得司机终端的导航界面和乘客终端的导航界面上显示的街景图由同一全景图确定,可以确保司机和乘客能够准确到达目标位置,不会由于导航地图上的街景图差异而造成偏差。
步骤1340,根据目标实景点坐标确定目标位置对应的全景图。在一些实施例中,该步骤可以由第二显示模块440执行。
在一些实施例中,在目标位置确认过程中,通过在导航界面上显示的实景缩略图简单地展示目标位置周围的部分信息以便于乘客确认目标位置,在目标位置确定后,若乘客需要基于目标位置周围的街景来确定目标位置,则可以对导航界面上的实景图进行操作,以使得导航界面展示全景图,从而可以准确引导乘客到达目标位置。
步骤1350,在导航界面显示全景图。在一些实施例中,该步骤可以由第二显示模块440执行。
可选的,导航界面可以为乘客移动终端的导航界面。
在一些实施例中,响应于显示角度切换指令,根据更新后的显示角度在导航界面显示全景图。可选的,乘客可以在导航界面上旋转移动该全景图发送角度切换指令,以展示各个显示角度下的全景图。由此,乘客不仅可以以司机的角度观测目标位置周围的街景,还可以通过旋转、移动全景图来观测目标位置周围各个方向的街景,从而可以使得乘客能够在较为复杂的环境或者不熟悉的环境中准确识别目标位置。
示例的,如图15a和15b所示,在乘客确定目标位置并点击目标位置对应的实景缩略图后,在乘客移动终端的导航界面显示对应的全景图。如图15a所示,导航界面包括用户当前位置151、目标位置152、目标实景点153、初始显示角度下的全景图154、预定车辆位置155以及预定车辆的导航路线156。响应于目标位置对应的实景缩略图被操作,获取预定车辆运动方向以及目标位置对应的第二候选实景点坐标,根据预定车辆运动方向和目标位置152从至少一个第二候选实景点坐标中确定目标实景点153的坐标,并获取目标实景点153对应的全景图,根据预先设置的初始方向、以及预定车辆运动方向对应的第二向量确定显示角度,将全景图在该显示角度下的图像显示在导航界面中。
乘客可以在导航界面上旋转移动该全景图发送角度切换指令,以展示各个显示角度下的全景图。如图15b所示,对导航界面中的全景图154进行旋转或移动操作,以显示其他显示角度下的全景图157。
应理解,本发明实施例中的导航界面仅仅是示意性的,导航界面的信息可以根据具体应用场景进行设置,本实施例并不对此进行限制。
图16是根据本说明书一些实施例所示的为用户提供实景图的方法的另一流程图。如图16所示,流程1600包括以下步骤:
步骤S1,确定候选位置。
步骤S2,获取候选位置对应的实景缩略图。
步骤S3,在导航界面显示该实景缩略图。
步骤S4,是否将当前候选位置确定为目标位置,若不是,执行步骤S1,也即重新确定候选位置,若是,执行步骤S5。可选的,响应于接收到的位置确定指令,将当前的候选位置确定为目标位置。响应于接收到的位置更换指令,重新确定候选位置。
步骤S5,响应于目标位置对应的实景缩略图被操作,获取预定车辆运动方向,并根据API接口调用预定的地图服务,获取返回的至少一个坐标。在一些实施例中,调用预定地图服务获取以目标位置为中心,以第一阈值为半径的范围内的实景点坐标。例如,第一阈值可以为50m。应理解,第一阈值可以根据实际应用场景确定,本实施例并不对此进行限制。
步骤S6,确定返回的至少一个坐标中距离目标位置最近的坐标。可选的,对预定的地图服务返回的至少一个坐标按照距离目标位置的距离由近及远进行排序,获取坐标序列,从坐标序列中确定距离目标位置最近的坐标。
步骤S7,根据矫正模型或算法对该坐标进行矫正,获取对应的第二候选实景点坐标。
步骤S8,根据第二候选实景点坐标和目标位置确定第一向量,根据预定车辆运动方向确定第二向量。
步骤S9,计算第一向量和第二向量的夹角。可选的,通过计算第一向量和第二向量的夹角余弦值确定夹角大小。
步骤S10,判断第一向量和第二向量的夹角是否小于或等于角度阈值,若第一向量和第二向量的夹角小于或等于角度阈值,执行步骤S12,若第一向量和第二向量的夹角大于角度阈值,执行步骤S11以及步骤S7-步骤S10。可选的,角度阈值为90°。
步骤S11,获取下一个距离目标位置较近的坐标。可选的,从上述坐标序列中获取下一个距离目标位置较近的坐标。
应理解,本实施例以从预定的地图服务中获取以目标位置为中心、第一阈值范围内的所有实景点坐标为例进行描述,在其他可选的实现方式中,可以在调用预定地图服务时获取一个实景点坐标,在该实景点坐标不满足预定条件时,再次调用预定地图服务获取下一个距离目标位置较近的实景点坐标,直至获取满足预定条件的实景点坐标。同时,本实施例根据矫正模型或算法对预定地图服务返回的实景点坐标依次进行矫正,也即先对距离目标位置最近的实景点坐标进行矫正,在距离目标位置最近的实景点坐标不满足条件后,再对下一个实景点坐标进行矫正。在其他可选的实现方式中,也可以根据矫正模型对预定地图服务返回的所有实景点坐标同时进行矫正,再迭代执行步骤S7-步骤S9,本实施例并不对获取目标实景点坐标的步骤迭代过程进行限制。
步骤S12,将该第二候选实景点坐标确定为目标实景点坐标。
步骤S13,根据目标实景点坐标确定目标位置对应的全景图。
步骤S14,根据实景图(或全景图)展示的初始方向和第二向量的夹角确定显示角度。
步骤S15,根据显示角度在导航界面显示全景图。
步骤S16,响应于显示角度切换指令,根据更新后的显示角度在导航界面显示全景图。
步骤S17,接收重置指令,执行步骤S15,也即在接收重置指令后,根据更新前的显示角度在导航界面显示全景图。
应理解,本实施例并不对执行方法的终端进行限制,本发明实施例的上述各实施方式的方法步骤可以嵌入乘客移动终端,以通过乘客移动终端执行上述各实施方式的方法步骤来实现本发明实施例的信息交互过程。本发明实施例的上述各实施方式的方法步骤也可以存储至对应的服务器中,以通过服务器的处理器执行上述各实施方式的方法步骤,并将获取的实景缩略图和/或全景图发送至乘客移动终端进行显示或播报。
本说明书实施例还提供一种为用户提供实景图的装置,所述装置包括处理器以及存储器,所述存储器用于存储指令,所述处理器用于执行所述指令,实现如前任一项所述的为用户显示实景图的方法对应的操作。
本说明书实施例还提供一种计算机可读存储介质。所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机实现如前任一项所述的为用户显示实景图的方法对应的操作。
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述详细披露仅仅作为示例,而并不构成对本说明书的限定。虽然此处并没有明确说明,本领域技术人员可能会对本说明书进行各种修改、改进和修正。该类修改、改进和修正在本说明书中被建议,所以该类修改、改进、修正仍属于本说明书示范实施例的精神和范围。
同时,本说明书使用了特定词语来描述本说明书的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本说明书至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或“一个替代性实施例”并不一定是指同一实施例。此外,本说明书的一个或多个实施例中的某些特征、结构或特点可以进行适当的组合。
此外,本领域技术人员可以理解,本说明书的各方面可以通过若干具有可专利性的种类或情况进行说明和描述,包括任何新的和有用的工序、机器、产品或物质的组合,或对他们的任何新的和有用的改进。相应地,本说明书的各个方面可以完全由硬件执行、可以完全由软件(包括固件、常驻软件、微码等)执行、也可以由硬件和软件组合执行。以上硬件或软件均可被称为"数据块"、"模块"、"引擎"、"单元"、"组件"或"系统"。此外,本说明书的各方面可能表现为位于一个或多个计算机可读介质中的计算机产品,该产品包括计算机可读程序编码。
计算机存储介质可能包含一个内含有计算机程序编码的传播数据信号,例如在基带上或作为载波的一部分。该传播信号可能有多种表现形式,包括电磁形式、光形式等,或合适的组合形式。计算机存储介质可以是除计算机可读存储介质之外的任何计算机可读介质,该介质可以通过连接至一个指令执行系统、装置或设备以实现通讯、传播或传输供使用的程序。位于计算机存储介质上的程序编码可以通过任何合适的介质进行传播,包括无线电、电缆、光纤电缆、RF、或类似介质,或任何上述介质的组合。
本说明书各部分操作所需的计算机程序编码可以用任意一种或多种程序语言编写,包括面向对象编程语言如Java、Scala、Smalltalk、Eiffel、JADE、Emerald、C++、C#、VB.NET、Python等,常规程序化编程语言如C语言、Visual Basic、Fortran2003、Perl、COBOL2002、PHP、ABAP,动态编程语言如Python、Ruby和Groovy,或其他编程语言等。该程序编码可以完全在用户计算机上运行、或作为独立的软件包在用户计算机上运行、或部分在用户计算机上运行部分在远程计算机运行、或完全在远程计算机或处理设备上运行。在后种情况下,远程计算机可以通过任何网络形式与用户计算机连接,比如局域网(LAN)或广域网(WAN),或连接至外部计算机(例如通过因特网),或在云计算环境中,或作为服务使用如软件即服务(SaaS)。
此外,除非权利要求中明确说明,本说明书所述处理元素和序列的顺序、数字字母的使用、或其他名称的使用,并非用于限定本说明书流程和方法的顺序。尽管上述披露中通过各种示例讨论了一些目前认为有用的发明实施例,但应当理解的是,该类细节仅起到说明的目的,附加的权利要求并不仅限于披露的实施例,相反,权利要求旨在覆盖所有符合本说明书实施例实质和范围的修正和等价组合。例如,虽然以上所描述的系统组件可以通过硬件设备实现,但是也可以只通过软件的解决方案得以实现,如在现有的处理设备或移动设备上安装所描述的系统。
同理,应当注意的是,为了简化本说明书披露的表述,从而帮助对一个或多个发明实施例的理解,前文对本说明书实施例的描述中,有时会将多种特征归并至一个实施例、附图或对其的描述中。但是,这种披露方法并不意味着本说明书对象所需要的特征比权利要求中提及的特征多。实际上,实施例的特征要少于上述披露的单个实施例的全部特征。
一些实施例中使用了描述成分、属性数量的数字,应当理解的是,此类用于实施例描述的数字,在一些示例中使用了修饰词“大约”、“近似”或“大体上”来修饰。除非另外说明,“大约”、“近似”或“大体上”表明所述数字允许有±20%的变化。相应地,在一些实施例中,说明书和权利要求中使用的数值参数均为近似值,该近似值根据个别实施例所需特点可以发生改变。在一些实施例中,数值参数应考虑规定的有效数位并采用一般位数保留的方法。尽管本说明书一些实施例中用于确认其范围广度的数值域和参数为近似值,在具体实施例中,此类数值的设定在可行范围内尽可能精确。
针对本说明书引用的每个专利、专利申请、专利申请公开物和其他材料,如文章、书籍、说明书、出版物、文档等,特此将其全部内容并入本说明书作为参考。与本说明书内容不一致或产生冲突的申请历史文件除外,对本说明书权利要求最广范围有限制的文件(当前或之后附加于本说明书中的)也除外。需要说明的是,如果本说明书附属材料中的描述、定义、和/或术语的使用与本说明书所述内容有不一致或冲突的地方,以本说明书的描述、定义和/或术语的使用为准。
最后,应当理解的是,本说明书中所述实施例仅用以说明本说明书实施例的原则。其他的变形也可能属于本说明书的范围。因此,作为示例而非限制,本说明书实施例的替代配置可视为与本说明书的教导一致。相应地,本说明书的实施例不仅限于本说明书明确介绍和描述的实施例。
Claims (26)
- 一种为用户提供实景图的方法,其特征在于,包括:基于候选位置,确定与所述候选位置对应的至少一个第一实景图,以及在与用户相关的导航界面上显示所述至少一个第一实景图;响应用户对所述至少一个第一实景图中一个或多个的确认操作,将得到确认的所述第一实景图对应的所述候选位置确定为目标位置;至少基于所述目标位置,确定目标实景点;至少基于所述目标实景点,确定所述目标位置对应的第二实景图,以及在所述导航界面上显示所述第二实景图。
- 根据权利要求1所述的方法,其特征在于,所述基于候选位置,确定与所述候选位置对应的至少一个第一实景图,包括:基于所述候选位置,确定至少一个第一候选实景点;基于所述至少一个第一候选实景点,获取所述至少一个第一实景图。
- 根据权利要求2所述的方法,其特征在于,所述在与用户相关的导航界面上显示所述至少一个第一实景图,包括:对于所述至少一个第一实景图中的每一个,基于所述第一实景图对应的第一候选实景点和所述候选位置,确定所述第一实景图的显示方向和/或角度;基于所述显示方向和/或角度,在所述导航界面上显示所述第一实景图。
- 根据权利要求3所述的方法,其特征在于,还包括:在所述导航界面上显示所述第一实景图之前,对所述第一实景图进行预处理;其中,所述预处理包括:缩略、调整分辨率、调整亮度和调整饱和度中的一种或多种的组合。
- 根据权利要求1所述的方法,其特征在于,所述至少基于所述目标位置,确定目标实景点,包括:基于所述目标位置,获取至少一个第二候选实景点;基于所述目标位置和所述第二候选实景点,判断预设条件是否被满足;响应于所述预设条件被满足,从所述预设条件被满足对应的第二候选实景点中确定所述目标实景点。
- 根据权利要求5所述的方法,其特征在于,所述基于所述目标位置和所述第二候选实景点,确定预设条件是否被满足,包括:判断所述目标位置与所述第二候选实景点之间的距离是否小于预设阈值。
- 根据权利要求1所述的方法,其特征在于,所述至少基于所述目标位置,确定目标实景点,包括:基于所述目标位置,获取至少一个第二候选实景点;基于预定车辆的运动方向、所述第二候选实景点、以及所述目标位置,从所述至少一个第二候选实景点中确定所述目标实景点;其中,所述预定车辆为前往所述目标位置与所述用户碰面的车辆。
- 根据权利要求5所述的方法,其特征在于,所述基于所述目标位置,获取至少一个第二候选实景点,包括:基于所述目标位置,获取至少一个待矫正第二候选实景点;使用矫正算法对所述至少一个待矫正第二候选实景点进行矫正,获取所述至少一个第二候选实景点。
- 根据权利要求1所述的方法,其特征在于,所述在所述导航界面上显示所述第二实景图,包括:基于所述目标实景点和所述目标位置,确定所述第二实景图的显示方向和/或角度;基于所述显示方向和/或角度,在所述导航界面上显示所述第二实景图。
- 根据权利要求9所述的方法,其特征在于,所述基于所述目标实景点和所述目标位置,确定所述第二实景图的显示方向和/或角度,包括:基于所述目标实景点和所述目标位置,确定第一方向和/或第一角度;将所述第一方向和/或所述第一角度,或基于所述第一方向和/或所述第一角度旋转180°得到的所述第二方向和/或所述第二角度,作为在所述导航界面上显示所述第二实景图的方向和/或角度。
- 根据权利要求9所述的方法,其特征在于,还包括:响应于显示方向和/或角度的切换指令,根据切换后的显示角度和/或方向在所述导航界面显示所述第二实景图。
- 根据权利要求1所述的方法,所述在所述导航界面上显示所述第二实景图,包括:基于用户当前位置和所述目标位置,确定所述第二实景图的缩略或放大参数;基于所述缩略或放大参数,在所述导航界面上显示所述第二实景图。
- 一种为用户提供实景图的系统,其特征在于,包括:第一显示模块,用于基于候选位置,确定与所述候选位置对应的至少一个第一实景图,以及在与用户相关的导航界面上显示所述至少一个第一实景图;第一确定模块,用于响应用户对所述至少一个第一实景图中一个或多个的确认操作,将得到确认的所述第一实景图对应的所述候选位置确定为目标位置;第二确定模块,用于至少基于所述目标位置,确定目标实景点;第二显示模块,至少基于所述目标实景点,确定所述目标位置对应的第二实景图,以及在所述导航界面上显示所述第二实景图。
- 根据权利要求13所述的系统,其特征在于,所述第一显示模块还用于:基于所述候选位置,确定至少一个第一候选实景点;基于所述至少一个第一候选实景点,获取所述至少一个第一实景图。
- 根据权利要求14所述的系统,其特征在于,所述第一显示模块还用于:对于所述至少一个第一实景图中的每一个,基于所述第一实景图对应的第一候选实景点和所述候选位置,确定所述第一实景图的显示方向和/或角度;基于所述显示方向和/或角度,在所述导航界面上显示所述第一实景图。
- 根据权利要求15所述的系统,其特征在于,所述第一显示模块还用于:在所述导航界面上显示所述第一实景图之前,对所述第一实景图进行预处理;其中,所述预处理包括:缩略、调整分辨率、调整亮度和调整饱和度中的一种或多种的组合。
- 根据权利要求13所述的系统,其特征在于,所述第二确定模块还用于:基于所述目标位置,获取至少一个第二候选实景点;基于所述目标位置和所述第二候选实景点,判断预设条件是否被满足;响应于所述预设条件被满足,从所述预设条件被满足对应的第二候选实景点中确定所述目标实景点。
- 根据权利要求17所述的系统,其特征在于,所述第二确定模块还用于:判断所述目标位置与所述第二候选实景点之间的距离是否小于预设阈值。
- 根据权利要求13所述的系统,其特征在于,所述第二确定模块还用于:基于所述目标位置,获取至少一个第二候选实景点;基于预定车辆的定位和运动方向,以及所述目标位置,确定所述目标实景点;其中,所述预定车辆为前往所述目标位置与所述用户碰面的车辆。
- 根据权利要求17所述的系统,其特征在于,所述第二确定模块还用于:基于所述目标位置,获取至少一个待矫正第二候选实景点;使用矫正算法对所述至少一个待矫正第二候选实景点进行矫正,获取所述至少一个第二候选实景点。
- 根据权利要求13所述的系统,其特征在于,所述第二显示模块还用于:基于所述目标实景点和所述目标位置,确定所述第二实景图的显示方向和/或角度;基于所述显示方向和/或角度,在所述导航界面上显示所述第二实景图。
- 根据权利要求21所述的系统,其特征在于,所述第二显示模块还用于:基于所述目标实景点和所述目标位置,确定第一方向和/或第一角度;将所述第一方向和/或所述第一角度,或基于所述第一方向和/或所述第一角度旋转180°得到的所述第二方向和/或所述第二角度,作为在所述导航界面上显示所述第二实景图的方向和/或角度。
- 根据权利要求21所述的系统,其特征在于,所述第二显示模块还用于:响应于显示方向和/或角度的切换指令,根据切换后的显示角度和/或方向在所述导航界面显示所述第二实景图。
- 根据权利要求13所述的系统,所述第二显示模块还用于:基于用户当前位置和所述目标位置,确定所述第二实景图的缩略或放大参数;基于所述缩略或放大参数,在所述导航界面上显示所述第二实景图。
- 一种为用户提供实景图的装置,所述装置包括处理器以及存储器,所述存储器用于存储指令,其特征在于,所述处理器用于执行所述指令,以实现如权利要求1至12中任一项所述的为用户显示实景图的方法对应的操作。
- 一种计算机可读存储介质,其特征在于,所述存储介质存储计算机指令,所述计算机指令被处理器执行时,实现如权利要求1至12中任一项所述的为用户显示实景图的方法对应的操作。
Applications Claiming Priority (2)
- CN202010555449.XA,申请日2020-06-17:信息交互方法、装置、电子设备和计算机可读存储介质
- CN202010555449.X,优先权日2020-06-17