CN113945220A - Navigation method and device - Google Patents

Navigation method and device

Info

Publication number
CN113945220A
CN113945220A
Authority
CN
China
Prior art keywords
information
navigation
landmark
scene image
terminal device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010680666.1A
Other languages
Chinese (zh)
Inventor
唐帅
曲彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Audi AG
Original Assignee
Audi AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Audi AG
Priority to CN202010680666.1A
Publication of CN113945220A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3446 Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

A navigation method and apparatus are provided. The navigation method comprises the following steps: acquiring position information of an initial position; obtaining street view images within a preset range around the initial position; extracting visual features from the street view images; determining at least one landmark position based on the visual features; transmitting position information of the at least one landmark position to a terminal device; determining an end position from the at least one landmark position in response to a feedback signal of the terminal device; acquiring position information of the end position and a scene image; determining navigation information according to the position information of the initial position and the position information of the end position; and transmitting the navigation information and the scene image of the end position to the terminal device.

Description

Navigation method and device
Technical Field
The present disclosure relates to the field of navigation technology.
Background
Intelligent terminal devices, such as smartphones, tablet computers, wearable devices, and vehicle-mounted intelligent terminals, are widely used in daily life. Besides entertainment, navigation is one of the functions most commonly used on these intelligent terminals. In recent years, intelligent terminal devices have developed rapidly, which benefits navigation on such devices. At the same time, with the development of communication technology, in particular the arrival of 5G mobile networks, users place higher demands on the precision and intelligence of navigation.
Disclosure of Invention
According to an aspect of the present disclosure, a navigation method is provided. The navigation method comprises the following steps: acquiring position information of a start position; acquiring a street view image within a preset range around the start position; extracting visual features from the street view image; determining at least one landmark position based on the visual features; transmitting position information of the at least one landmark position to a terminal device; determining an end position from the at least one landmark position in response to a feedback signal of the terminal device; acquiring position information of the end position and a scene image; determining navigation information according to the position information of the start position and the position information of the end position; and transmitting the navigation information and the scene image of the end position to the terminal device.
According to another aspect of the present disclosure, a navigation device is provided. The navigation device includes: a first acquisition unit configured to acquire position information of a start position; a second acquisition unit configured to acquire a street view image within a preset range around the start position; an extraction unit configured to extract visual features from the street view image; a first determination unit configured to determine at least one landmark position based on the visual features; a first transmission unit configured to transmit position information of the at least one landmark position to a terminal device; a second determination unit configured to determine an end position from the at least one landmark position in response to a feedback signal of the terminal device; a third acquisition unit configured to acquire position information of the end position and a scene image; a third determination unit configured to determine navigation information from the position information of the start position and the position information of the end position; and a second transmission unit configured to transmit the navigation information and the scene image of the end position to the terminal device.
According to another aspect of the present disclosure, a navigation method is provided. The navigation method comprises the following steps: transmitting position information of a start position to a server; receiving position information of at least one landmark position from the server, wherein the at least one landmark position is determined based on visual features extracted from a street view image within a preset range around the start position; sending a feedback signal to the server, the feedback signal being used to determine an end position from the at least one landmark position; receiving navigation information and a scene image of the end position from the server; and outputting the navigation information and the scene image of the end position.
According to another aspect of the present disclosure, a navigation device is provided. The navigation device includes: a third transmitting unit configured to transmit position information of a start position to a server; a first receiving unit configured to receive, from the server, position information of at least one landmark position determined based on visual features extracted from a street view image within a preset range around the start position; a fourth transmitting unit configured to transmit a feedback signal to the server, the feedback signal being used to determine an end position from the at least one landmark position; a second receiving unit configured to receive navigation information and a scene image of the end position from the server; and an output unit configured to output the navigation information and the scene image of the end position.
According to another aspect of the present disclosure, a navigation device is provided. The navigation device includes: a processor, and a memory storing a program, the program comprising instructions that, when executed by the processor, cause the processor to perform any of the above-described navigation methods.
According to another aspect of the present disclosure, a vehicle is provided. The vehicle includes: any one of the above navigation devices.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium storing a program is provided. The program comprises instructions which, when executed by one or more processors, cause the one or more processors to perform any of the above described navigation methods.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and, together with the description, serve to explain exemplary implementations of these embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
Fig. 1 is a flowchart illustrating a navigation method performed by a server according to an exemplary embodiment;
fig. 2 is a flowchart illustrating a navigation method performed by a terminal device according to an exemplary embodiment;
fig. 3 is a flowchart illustrating a navigation method performed by a terminal device according to another exemplary embodiment;
fig. 4 is a block diagram showing a navigation device according to an exemplary embodiment;
fig. 5 is a block diagram showing a navigation device according to another exemplary embodiment; and
FIG. 6 is a schematic view of an application scenario for a motor vehicle according to an exemplary embodiment of the present disclosure.
Detailed Description
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
In a typical navigation method, the user is given only the destination position and navigation information. If the user is not particularly familiar with the surrounding roads and street scenes, it may be difficult to judge, from the position information of the destination alone, whether the destination has actually been reached.
Fig. 1 is a flowchart illustrating a navigation method performed by a server according to an exemplary embodiment. The server may be an online server or a cloud server.
In step S101, the server obtains location information of a start location of a user (i.e., where a terminal device is located) in response to a request of the user, for example, a request input by the user using the terminal device such as a smart phone, a tablet computer, a wearable device, an e-book reader, and a vehicle-mounted communication device. The position information of the starting position can be obtained, for example, on the basis of the current GNSS positioning of the terminal device or an address manually entered by the user.
In step S103, the server acquires a street view image within a predetermined range around the start position, for example within a radius of 50, 100, or 200 meters. The street view image may in particular contain temporary and fixed parking spaces and their position information. The street view image may be pre-stored in the server, in which case it may be an image captured by a professional mapping vehicle. The street view image may also be an image captured by the user's terminal device, as described in more detail below.
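As a purely illustrative aside (not part of the patent text), selecting pre-stored street view images within such a radius around the start position can be done with a simple great-circle distance filter; the image-record structure and function names below are assumptions.

```python
# Hypothetical sketch of step S103: selecting pre-stored street-view images
# within a preset radius (e.g. 100 m) around the start position.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000.0 * asin(sqrt(a))

def street_views_near(images, start_lat, start_lon, radius_m=100.0):
    """images: iterable of dicts like {"lat": ..., "lon": ..., "file": ...}."""
    return [img for img in images
            if haversine_m(start_lat, start_lon, img["lat"], img["lon"]) <= radius_m]
```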
In step S105, the server extracts visual features from the acquired street view image. The visual features include, for example, shape features, color features, texture features, and spatial relationship features.
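The patent does not fix a particular feature-extraction technique; the following minimal sketch only illustrates how color, shape, and texture descriptors of the kind listed above could be computed with standard image-processing primitives (OpenCV/NumPy). Function names and the exact descriptors are assumptions.

```python
# Hypothetical sketch of step S105: extracting simple visual features
# (color, shape/edge, texture) from a street-view image.
import cv2
import numpy as np

def extract_visual_features(image_bgr: np.ndarray) -> dict:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Color feature: normalized hue histogram.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hue_hist = cv2.calcHist([hsv], [0], None, [32], [0, 180]).flatten()
    hue_hist /= hue_hist.sum() + 1e-9

    # Shape feature: edge density from a Canny edge map.
    edges = cv2.Canny(gray, 100, 200)
    edge_density = float(edges.mean()) / 255.0

    # Texture feature: histogram of gradient orientations.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    angles = np.arctan2(gy, gx).ravel()
    orient_hist, _ = np.histogram(angles, bins=16, range=(-np.pi, np.pi))
    orient_hist = orient_hist / (orient_hist.sum() + 1e-9)

    return {"color": hue_hist, "edge_density": edge_density, "texture": orient_hist}
```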
In step S107, the server determines at least one landmark position based on the visual features. The at least one landmark position is, for example, the position of any of the following: an outdoor billboard, a guidepost, or a bus stop sign; a natural landmark, such as a large plant; a building, such as a bank or a hospital; a commercial sign, such as the sign of a shop or shopping mall; a road; or a parking space as mentioned above.
In step S109, the server transmits the position information of the at least one landmark position to the terminal device of the user. The user views his or her own position and the at least one landmark position on the display screen of the terminal device, and can enter a feedback signal on the terminal device by means of a keyboard, a touch screen, voice input, or the like; the feedback signal reflects which of the at least one landmark position the user prefers.
In step S111, the server determines an end position from the at least one landmark position in response to the user's feedback signal; the end position typically corresponds to the position the user has selected as preferred.
In step S113, the server acquires position information of the end position and a scene image. The scene image can likewise be an image captured by a professional mapping vehicle and stored in advance in a server.
In step S115, the server determines navigation information from the position information of the start position and the position information of the end position.
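The patent text does not prescribe a specific route-search algorithm for step S115. As a purely illustrative sketch, and noting that the classification G01C21/3446 mentions algorithms such as Dijkstra and A*, a minimal Dijkstra search over a hypothetical road graph could look as follows; the graph structure and node names are assumptions.

```python
# Minimal Dijkstra sketch for step S115 over a hypothetical road graph.
import heapq

def shortest_route(graph: dict, start: str, end: str) -> list:
    """graph: {node: [(neighbor, distance_m), ...]} -> list of nodes on the route."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == end:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + dist, neighbor, path + [neighbor]))
    return []  # no route found

# Example: route from the start position to the selected landmark end position.
road_graph = {
    "start": [("junction_a", 120.0)],
    "junction_a": [("end", 80.0), ("junction_b", 60.0)],
    "junction_b": [("end", 150.0)],
}
print(shortest_route(road_graph, "start", "end"))  # ['start', 'junction_a', 'end']
```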
In step S117, the server transmits the navigation information and the scene image of the end position to the terminal device of the user.
Fig. 2 is a flowchart illustrating a navigation method performed by a terminal device according to an exemplary embodiment.
In step S201, the terminal device transmits the position information of the start position to the server. After receiving the position information of the start position, the server extracts visual features from the street view image within the preset range around the start position, determines at least one landmark position on that basis, and sends the position information of the at least one landmark position to the terminal device.
In step S203, the terminal device receives the position information of the at least one landmark position from the server. After viewing this position information on the display device of the terminal device, the user inputs a feedback signal reflecting the preferred position among the at least one landmark position.
In step S205, the terminal device transmits a feedback signal to the server. The server determines an end position from the at least one landmark position in response to the feedback signal and determines navigation information based on position information of the start position and the end position. In addition, the server acquires a scene image of the end position and transmits the scene image and the navigation information to the terminal device.
In step S207, the terminal device receives the scene image and the navigation information transmitted by the server, and outputs the scene image and the navigation information through the display device.
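The exchange of steps S201-S207 can be summarized by the payloads passed between the terminal device and the server. The following sketch is only an illustration; the message and field names are assumptions, not part of the patent.

```python
# Hypothetical message payloads exchanged between terminal device and server
# (steps S201-S207); field names are illustrative only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Position:
    lat: float
    lon: float

@dataclass
class NavigationRequest:          # S201: terminal -> server
    start: Position

@dataclass
class LandmarkCandidates:         # S203: server -> terminal
    positions: List[Position]
    scene_images: Optional[List[bytes]] = None   # optional, per some embodiments

@dataclass
class LandmarkFeedback:           # S205: terminal -> server
    selected_index: int           # the user's preferred landmark position

@dataclass
class NavigationResult:           # S207: server -> terminal
    route: List[Position]         # navigation information
    end_scene_image: bytes        # scene image of the end position
```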
According to the above navigation method of the present disclosure, the end position is selected from a plurality of landmark positions. Because landmark positions correspond to distinctive buildings or other conspicuous objects, the end position selected from them also carries a correspondingly conspicuous marker, so that the user can more easily recognize the end point while navigating, especially when already close to it. In addition, owing to the limited accuracy of positioning, when the user is close to the destination, especially within the last few tens of meters, the positional relationship between the user and the destination cannot be displayed very precisely on the navigation map of the terminal device, so that the navigation information of a conventional navigation method may fail to guide the user all the way to the destination. According to the navigation method of the present disclosure, the scene image of the end position is displayed on the user's terminal device, which makes it easier for the navigating user to recognize the destination.
The navigation method according to the present disclosure may be applied to various scenarios. In the simplest case, any navigation that ends at a landmark position can use the navigation method according to the present disclosure. Below, the navigation method is further explained in conjunction with a ride-hailing scenario. However, it will be appreciated by those skilled in the art that the navigation method is not limited to this scenario. For example, the navigation method according to the present disclosure may also be used in drone-based express delivery. In addition, the navigation method can be combined with existing real-time communication software and used to arrange a meeting point between users holding different (vehicle-mounted) terminals.
In a ride-hailing scenario, when a user (passenger) sends a ride request through a ride-hailing app, the app asks the user to enter a boarding position, or determines the user's current position as the boarding position via the positioning function of the terminal device. If the user is at a location unsuitable for boarding, or the assigned vehicle cannot find the user, the driver and the user normally communicate through the ride-hailing app or by mobile communication (e.g., a phone call) and agree on a suitable pick-up spot. With the development of autonomous driving, driverless taxis have emerged. For a driverless taxi without a driver, the passenger cannot negotiate the boarding location with a driver. After the passenger sends a ride request, if the surroundings of the chosen boarding position are relatively complex, the passenger may well be unable to find the requested vehicle, and the riding experience suffers. In addition, the driverless taxi may be unable to reach the boarding position selected by the passenger, for example because stopping is not permitted there. In that case, if the passenger cannot contact the driverless taxi, the passenger can only cancel the order.
In such a scenario involving a driverless taxi, the navigation method according to the present disclosure can avoid the above inconvenience and makes it easier for the passenger and the driverless taxi to meet. When a passenger needs a ride, the passenger sends a ride request containing position information of a start position through a terminal device (such as a smartphone), the start position being the passenger's current position, for example a particular shop on a commercial street. In the navigation method according to the present disclosure, a street view image within a predetermined range around the start position is then acquired. As described above, the street view image may be an image collected by a professional mapping vehicle and pre-stored in an online server or a cloud server. However, if the terminal device has a camera, the street view image may also be captured with the terminal device itself; in that case the passenger starts the camera function of the terminal device and photographs the surrounding street view. Compared with images collected by professional mapping vehicles and pre-stored on a server, a street view image captured by the terminal device has the advantage that its viewing angle essentially corresponds to the user's own viewpoint, which makes it easier for the user to recognize the objects in the image, since some objects look different when seen from different angles. The recognized objects can later serve as markers that guide the passenger to the destination without difficulty.
Next, visual features, in particular the shape features, color features, texture features, and spatial relationship features mentioned above, are extracted from the street view image, and at least one landmark position, for example a landmark building on the commercial street or the vicinity of a bus stop, is determined by comparing these visual features.
In a first approach, the street view image is divided into a plurality of regions, and the visual features of each region are compared with the visual features of the adjacent regions. When the difference found in the comparison exceeds a preset threshold, the region is determined to be a landmark position. The region may stand out, for example, through its color features, such as a sign or building whose color clearly distinguishes it from its surroundings, or through its shape features, such as an unusually shaped building or a sculpture.

Alternatively, the visual features are matched against predetermined models, and the degree to which the visual features of each region match a predetermined model is evaluated. When the matching degree exceeds a preset threshold, the region is determined to be a landmark position. The predetermined models include commercial signs (e.g., shop signs and billboards), road signs (e.g., guideposts and bus stop signs), and natural landmarks (e.g., large plants). When the matching degree between the visual features of a region and a model reaches the threshold, the region is considered to contain the landmark scene corresponding to that model, and such landmark scenes are easily recognized by passengers.

For the division of the street view image used in both landmark-determination methods, the following approaches may be considered: the street view image may be divided uniformly into a plurality of regions of a preset size, or it may be divided into regions according to the visual features. Dividing the image uniformly by a predetermined size is relatively simple, but a complete landmark may then be split across several regions, so that its visual features are no longer so distinctive. Division according to the visual features requires more processing, but the resulting regions are more meaningful and better support the subsequent determination of the at least one landmark position. The model-matching method in particular benefits from feature-based division, because a region segmented according to visual features is more likely to correspond to a complete landmark and therefore to match a model well.

After the at least one landmark position has been determined, its position information is presented to the user on the display device of the terminal device. According to some embodiments, a scene image of the at least one landmark position is also acquired and transmitted to the terminal device together with the position information. In this way, the user can visually recognize the individual landmark positions in the street view image displayed on the terminal device and select a desired boarding position (end position) from the at least one landmark position on that basis. The user can enter the selected boarding position on the terminal device by means of keys, a touch screen, voice input, or the like, for example by tapping the scene image of a landmark position displayed on the touch screen.
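The following is a purely illustrative sketch of the two determination strategies described above (comparing each region with its neighbors, and matching regions against predetermined models), using a uniform grid division; the feature representation, thresholds, and function names are assumptions rather than the patent's reference implementation.

```python
# Illustrative sketch of the two landmark-determination strategies.
import cv2
import numpy as np

def region_features(region_bgr):
    """Per-region visual feature: a normalized hue histogram."""
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180]).flatten()
    return hist / (hist.sum() + 1e-9)

def split_into_grid(image_bgr, rows=4, cols=6):
    """Uniform division of the street-view image into rows x cols regions."""
    h, w = image_bgr.shape[:2]
    return {(r, c): image_bgr[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            for r in range(rows) for c in range(cols)}

def landmarks_by_neighbor_contrast(image_bgr, threshold=0.5):
    """Method 1: regions whose features differ strongly from their neighbors."""
    grid = {key: region_features(reg) for key, reg in split_into_grid(image_bgr).items()}
    landmark_cells = []
    for (r, c), feat in grid.items():
        diffs = [np.abs(feat - grid[(r+dr, c+dc)]).sum()
                 for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if (r+dr, c+dc) in grid]
        if diffs and max(diffs) > threshold:
            landmark_cells.append((r, c))
    return landmark_cells

def landmarks_by_model_matching(image_bgr, models, threshold=0.8):
    """Method 2: regions whose features match a predetermined landmark model.
    models: {"shop_sign": feature_vector, ...} -- hypothetical representation."""
    grid = {key: region_features(reg) for key, reg in split_into_grid(image_bgr).items()}
    hits = []
    for key, feat in grid.items():
        for name, model_feat in models.items():
            # Histogram intersection as a simple matching degree in [0, 1].
            if np.minimum(feat, model_feat).sum() > threshold:
                hits.append((key, name))
    return hits
```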
According to the method of the present disclosure, the end position is determined according to the user's selection, and the position information and scene image of the end position are acquired. Navigation information is then determined from the position information of the start position and the position information of the end position, and the navigation information and the scene image of the end position are output by the terminal device. To make the end position easier to recognize, the scene image of the end position may be displayed superimposed at the end position within the navigation information. The user can then consult the scene image when approaching the end position and find it more easily. In addition, the user may also locate the end position with the help of Augmented Reality (AR) technology. Specifically, while travelling from the start position toward the end position, and in particular when approaching it, the user captures a live image with the camera of the terminal device. According to some embodiments, the server, upon receiving the live image captured by the terminal device, compares it with the scene image of the end position, determines the end position within the live image, and marks it there. By sending the live image with the end position marked back to the terminal device, the user is prompted on the terminal device that the end position is near. The comparison between the captured live image and the scene image of the end position can be carried out with the methods described above: either by extracting the visual features of the live image and of the scene image of the end position and comparing them, or by using the visual features of the scene image of the end position as a model and matching them against the visual features of the live image, the end position being considered present in the live image and identified there when the matching degree exceeds a threshold. The user can switch freely between the two navigation modes while navigating to the end position with the terminal device. In some embodiments, the user starts navigation in a map navigation mode, follows the route planned from the navigation information until close to the end position, and then switches to the AR mode, in which the end position is determined by comparing the captured live image with the scene image of the end position.
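One common way to realize the comparison between the captured live image and the stored scene image of the end position is local-feature matching. The sketch below uses ORB keypoints with a ratio test; it only illustrates the idea and is not the method specified by the patent, and the match-count threshold is an assumption.

```python
# Illustrative sketch: deciding whether the end position appears in a live
# image by matching it against the stored scene image of the end position.
import cv2

def end_position_visible(live_bgr, scene_bgr, min_good_matches=25):
    orb = cv2.ORB_create(nfeatures=1000)
    gray_live = cv2.cvtColor(live_bgr, cv2.COLOR_BGR2GRAY)
    gray_scene = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2GRAY)
    _, des_scene = orb.detectAndCompute(gray_scene, None)
    _, des_live = orb.detectAndCompute(gray_live, None)
    if des_scene is None or des_live is None:
        return False

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des_scene, des_live, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
    return len(good) >= min_good_matches
```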
Fig. 3 illustrates another embodiment of the navigation method according to the present disclosure. In addition to the steps already described with reference to Fig. 1, it contains a step of modifying the position information and/or the scene image of the end position on the basis of feedback information from another terminal device. Apart from the additional steps S314A and S314B, steps S301-S317 correspond to steps S101-S117 of Fig. 1, and the identical steps are not explained again here. As shown in Fig. 3, after acquiring the position information and the scene image of the end position (step S313), the server transmits them to another terminal device (step S314A). In the ride-hailing scenario described above, for example, the position information and the scene image of the end position may be transmitted to the driverless taxi that accepts the order. The driverless taxi plans a route from the start position to the end position and drives toward the end position. If the end position (the boarding position selected by the passenger) is currently unsuitable for stopping (for example, because all parking spaces are occupied), or if stopping is possible but the actual scene does not match the stored scene image of the end position (for example, because the data in the online server or cloud server have not been updated in time), the driverless taxi sends a feedback signal to the server. The feedback signal may contain new position information and/or a new scene image. Based on the feedback signal sent by the driverless taxi, the server determines that the position information and/or the scene image of the current end position need to be changed and modifies them accordingly (step S314B); it then determines the navigation information based on the position information of the (modified) end position and transmits the navigation information and the scene image of the (modified) end position to the user's terminal device. If the server determines that the position information and scene image of the end position do not need to be modified, it directly determines the navigation information and sends it, together with the scene image of the end position, to the user's terminal device. The steps of determining and modifying may also be performed by the terminal device.
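The additional steps S314A/S314B amount to a conditional update of the stored end-position record based on the feedback of the other terminal device. The following sketch, with hypothetical field names, shows one way the server-side decision could be structured.

```python
# Hypothetical sketch of steps S314A/S314B: updating the end position and/or
# scene image when the other terminal (e.g. the driverless taxi) reports that
# the chosen spot is unusable or that the stored scene image is outdated.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class EndPointFeedback:
    new_position: Optional[Tuple[float, float]] = None    # set if stopping is impossible
    new_scene_image: Optional[bytes] = None                # set if the scene has changed

def apply_endpoint_feedback(position, scene_image, feedback: Optional[EndPointFeedback]):
    """Return the (possibly modified) end position and scene image."""
    if feedback is None:
        return position, scene_image                       # nothing to change
    if feedback.new_position is not None:
        position = feedback.new_position                   # S314B: move the end point
    if feedback.new_scene_image is not None:
        scene_image = feedback.new_scene_image             # S314B: refresh the image
    return position, scene_image
```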
Fig. 4 is a block diagram illustrating a navigation device according to an exemplary embodiment. The navigation device 400 includes:
a first acquisition unit 401 configured to acquire position information of a start position;
a second obtaining unit 402 configured to obtain a street view image within a preset range around the start position;
an extraction unit 403 configured to extract visual features from the street view image;
a first determination unit 404 configured to determine at least one landmark position based on the visual features;
a first transmitting unit 405 configured to transmit position information of the at least one landmark position to a terminal device;
a second determination unit 406 configured to determine an end position from the at least one landmark position in response to a feedback signal of the terminal device;
a third acquisition unit 407 configured to acquire position information of the end position and a scene image;
a third determination unit 408 configured to determine navigation information from the position information of the start position and the position information of the end position; and
a second transmitting unit 409 configured to transmit the navigation information and the scene image of the end position to the terminal device.
Additionally, while particular functionality has been discussed above with reference to particular modular units, it should be noted that the functionality of the various modular units discussed herein may be separated into multiple modular units, and/or at least some of the functionality of multiple modular units may be combined into a single modular unit. Performing an action by a particular modular unit as discussed herein includes the particular modular unit itself performing the action, or alternatively the particular modular unit invoking or otherwise accessing another component or modular unit that performs the action (or performs the action in conjunction with the particular modular unit). Thus, a particular modular unit that performs an action can include the particular modular unit that performs the action itself and/or another modular unit that performs the action that the particular modular unit calls or otherwise accesses. For example, the first acquisition unit 401, the second acquisition unit 402 and the third acquisition unit 407 described above may be combined into a single modular unit in some embodiments.
More generally, various techniques may be described herein in the general context of software and hardware elements or program modules. The various modules described above with respect to Fig. 4 may be implemented in hardware or in hardware combined with software and/or firmware. For example, the modules may be implemented as computer program code/instructions configured to be executed by one or more processors and stored in a computer-readable storage medium. Alternatively, the modules may be implemented as hardware logic/circuitry. For example, in some embodiments, one or more of the extraction unit 403, the first determination unit 404, the second determination unit 406, and the third determination unit 408 may be implemented together in a system on chip (SoC). The SoC may include an integrated circuit chip comprising one or more components of a processor (e.g., a Central Processing Unit (CPU), microcontroller, microprocessor, Digital Signal Processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry, and may optionally execute received program code and/or include embedded firmware to perform its functions.
Fig. 5 is a block diagram illustrating a navigation device according to another exemplary embodiment. The navigation means may be a terminal device or a part of a terminal device.
The navigation device 500 includes:
a third transmitting unit 501 configured to transmit the position information of the start position to the server;
a first receiving unit 502 configured to receive, from the server, position information of at least one landmark position determined based on visual features extracted from a street view image within a preset range around the start position;
a fourth sending unit 503 configured to send a feedback signal to the server, the feedback signal being used for determining an end point position from the at least one landmark position;
a second receiving unit 504 configured to receive the navigation information and the scene image of the end position from the server; and
an output unit 505 configured to output the navigation information and the scene image of the end position.
As described above, although the navigation method according to the present disclosure has been described in connection with a ride-hailing scenario, it is not limited to that application. Likewise, the terminal device need not be a smartphone; it can also be a vehicle-mounted terminal. In that case, the navigation method described above may be used, for example, when a user driving a motor vehicle equipped with a vehicle-mounted terminal needs to meet an unmanned vehicle (e.g., a delivery drone). The method is further explained below with reference to the schematic view of an application scenario of a motor vehicle according to an exemplary embodiment shown in Fig. 6.
Fig. 6 shows a schematic diagram of an application scenario including a motor vehicle 2010 and a communication and control system for the motor vehicle 2010. It should be noted that the structure and functions of the vehicle 2010 shown in Fig. 6 are only one example; depending on the specific implementation, a vehicle of the present disclosure may include one or more of the structures and functions of the vehicle 2010 shown in Fig. 6.
Motor vehicle 2010 may include a sensor 2110 for sensing the surrounding environment. The sensor 2110 may include one or more of the following sensors: ultrasonic sensors, millimeter-wave radar, lidar (LiDAR), visual cameras, and infrared cameras. Different sensors provide different detection accuracies and ranges. Ultrasonic sensors can be arranged around the vehicle and, exploiting the strong directionality of ultrasound, measure the distance between objects outside the vehicle and the vehicle. Millimeter-wave radar may be installed at the front, rear, or other positions of the vehicle and measures the distance of objects from the vehicle using electromagnetic waves. The lidar may be mounted at the front, rear, or other positions of the vehicle and detects object edges and shape information, thereby enabling object identification and tracking. Owing to the Doppler effect, the radar devices can also measure speed changes of the vehicle and of moving objects. Cameras may be mounted at the front, rear, or other positions of the vehicle. A visual camera can capture the conditions inside and outside the vehicle in real time and present them to the driver and/or passengers; in addition, by analyzing the pictures captured by the visual camera, information such as traffic light states, intersection situations, and the driving state of other vehicles can be obtained. An infrared camera can capture objects under night-vision conditions. According to some embodiments, obtaining street view images within a predetermined range around the start position may be accomplished using the user's terminal device, here the cameras of motor vehicle 2010.
Motor vehicle 2010 may also include an output device 2120. The output device 2120 includes, for example, a display, a speaker, and the like to present various outputs or instructions. Furthermore, the display may be implemented as a touch screen, so that input can also be detected in different ways. A graphical user interface may be presented on the touch screen to enable the user to access and operate the corresponding controls. According to some embodiments, the output device 2120 may be used to display the position information of the at least one landmark position to the user, as well as the navigation information and the scene image of the end position. Furthermore, the user's selection of the desired end position among the at least one landmark position may be made via the touch screen.
Motor vehicle 2010 may also include one or more controllers 2130. The controller 2130 may include a processor, such as a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or another special-purpose processor, which communicates with various types of computer-readable storage devices or media. The computer-readable storage device or medium may include any non-transitory storage device that stores data and may include, but is not limited to, a magnetic disk drive, an optical storage device, solid-state memory, a floppy disk, a flexible disk, a hard disk, a magnetic tape or any other magnetic medium, an optical disk or any other optical medium, a read-only memory (ROM), a random access memory (RAM), a cache memory, and/or any other memory chip or cartridge, and/or any other medium from which a computer can read data, instructions, and/or code. Some of the data in the computer-readable storage device or medium represent executable instructions used by the controller 2130 to control the vehicle. The controller 2130 may include an autopilot system for automatically controlling various actuators in the vehicle. The autopilot system is configured to control the powertrain, steering system, braking system, and so on of the motor vehicle 2010, so as to control acceleration, steering, and braking respectively via a plurality of actuators in response to inputs from the plurality of sensors 2110 or other input devices, without human intervention or with limited human intervention. Part of the processing functions of the controller 2130 may be implemented by cloud computing; for example, some processing may be performed by an on-board processor while other processing uses the computing resources of the cloud. According to some embodiments, the visual features in the street view image may be extracted by means of the graphics processing unit, and the navigation information may be determined from the start and end positions by means of the central processing unit.
Motor vehicle 2010 also includes a communication device 2140. The communication device 2140 includes a satellite positioning module capable of receiving satellite positioning signals from satellites 2012 and generating coordinates based on these signals. The communication device 2140 also includes a module for communicating with the mobile communication network 2013, which may implement any suitable communication technology, such as current or evolving wireless communication technologies (e.g., 5G) like GSM/GPRS, CDMA, and LTE. The communication device 2140 may also have a Vehicle-to-Everything (V2X) module configured, for example, for Vehicle-to-Vehicle (V2V) communication with other vehicles 2011 and Vehicle-to-Infrastructure (V2I) communication with the surroundings. In addition, the communication device 2140 may have a module configured to communicate with a user terminal 2014 (including but not limited to a smartphone, a tablet computer, or a wearable device such as a watch), for example via a wireless local area network using the IEEE 802.11 standards or via Bluetooth. Using the communication device 2140, the motor vehicle 2010 can access, via a wireless communication system, an online server 2015 or a cloud server 2016 configured to provide data processing, data storage, and data transmission services for the motor vehicle. In some embodiments, the street view images within a predetermined range around the start position and the scene image of the end position are stored in the online server 2015 or the cloud server 2016 and need to be accessed and acquired wirelessly via the communication device 2140.
In addition, the motor vehicle 2010 includes a powertrain, a steering system, a brake system, and the like, which are not shown in fig. 6, for implementing a motor vehicle driving function.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the invention is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced by equivalents, and the steps may be performed in an order different from that described in the present disclosure. Furthermore, the various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (15)

1. A navigation method, comprising:
acquiring position information of an initial position;
obtaining street view images within a preset range around the initial position;
extracting visual features in the street view image;
determining at least one landmark position based on the visual features;
transmitting position information of the at least one landmark position to a terminal device;
determining an end position from the at least one landmark position in response to a feedback signal of the terminal device;
acquiring position information of the end position and a scene image;
determining navigation information according to the position information of the initial position and the position information of the end position; and
sending the navigation information and the scene image of the end position to the terminal device.
2. The navigation method of claim 1, further comprising:
after the navigation information and the scene image of the end position have been sent to the terminal device, receiving a real-time image captured by the terminal device, determining the end position in the real-time image by comparing the real-time image captured by the terminal device with the scene image of the end position, and sending information related to the end position to the terminal device.
3. The navigation method of claim 1 or 2, further comprising:
acquiring a scene image of the at least one landmark position after the at least one landmark position is determined, and sending the scene image of the at least one landmark position to the terminal device together with the position information of the at least one landmark position.
4. The navigation method of any one of claims 1 to 3,
wherein determining at least one landmark position based on the visual features comprises:
comparing the visual features, wherein the street view image is divided into a plurality of regions and the visual features of each region are compared with the visual features of the regions adjacent to that region; and
determining the region as a landmark position in response to the compared difference being larger than a preset threshold.
5. The navigation method of any one of claims 1 to 3,
wherein determining at least one landmark position based on the visual features comprises:
matching the visual features against a preset model, wherein the street view image is divided into a plurality of regions and the matching degree between the visual features of each region and the preset model is evaluated; and
determining the region as a landmark position in response to the matching degree being greater than a preset threshold.
6. The navigation method of any one of claims 1 to 5, further comprising:
before the navigation information is determined, sending the position information and the scene image of the end position to another terminal device, and modifying the position information of the end position and/or the scene image of the end position in response to feedback information from the other terminal device.
7. The navigation method of any one of claims 1 to 6,
wherein the position information of the initial position is obtained in response to a request by the terminal device, the request including a request to schedule an unmanned mobile device.
8. A navigation device, comprising:
a first acquisition unit configured to acquire position information of a start position;
a second acquisition unit configured to acquire a street view image within a preset range around the start position;
an extraction unit configured to extract visual features from the street view image;
a first determination unit configured to determine at least one landmark position based on the visual features;
a first transmission unit configured to transmit position information of the at least one landmark position to a terminal device;
a second determination unit configured to determine an end position from the at least one landmark position in response to a feedback signal of the terminal device;
a third acquisition unit configured to acquire position information of the end position and a scene image;
a third determination unit configured to determine navigation information from the position information of the start position and the position information of the end position; and
a second transmission unit configured to transmit the navigation information and the scene image of the end position to the terminal device.
9. A navigation device, comprising:
a processor, and
a memory storing a program comprising instructions that, when executed by the processor, cause the processor to perform the navigation method of any one of claims 1 to 7.
10. A navigation method, comprising:
transmitting the position information of the starting position to a server;
receiving position information of at least one landmark position from the server, wherein the at least one landmark position is determined based on visual features extracted from street view images within a preset range around the starting position;
sending a feedback signal to the server, the feedback signal for determining an end position from the at least one landmark position;
receiving navigation information and a scene image of the end position from the server; and
outputting the navigation information and the scene image of the end position.
11. The navigation method of claim 10, further comprising:
superimposing and displaying the scene image of the landmark position on the street view image within the preset range around the starting position.
12. A navigation device, comprising:
a third transmitting unit configured to transmit the position information of the start position to the server;
a first receiving unit configured to receive, from the server, position information of at least one landmark position determined based on visual features extracted from a street view image within a preset range around the start position;
a fourth transmitting unit configured to transmit a feedback signal to the server, the feedback signal being used to determine an end position from the at least one landmark position;
a second receiving unit configured to receive navigation information and a scene image of the end position from the server; and
an output unit configured to output the navigation information and the scene image of the end position.
13. A navigation device, comprising:
a processor, and
a memory storing a program comprising instructions that, when executed by the processor, cause the processor to perform the navigation method of claim 10 or 11.
14. A vehicle, comprising:
a navigation device as claimed in claim 12 or 13.
15. A non-transitory computer-readable storage medium storing a program, the program comprising instructions that when executed by one or more processors cause the one or more processors to perform the navigation method of any one of claims 1-7 or 10-11.
Application CN202010680666.1A (priority and filing date 2020-07-15), Navigation method and device, published as CN113945220A, status: Pending

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010680666.1A CN113945220A (en) 2020-07-15 2020-07-15 Navigation method and device

Publications (1)

Publication Number Publication Date
CN113945220A 2022-01-18

Family

ID=79326193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010680666.1A Pending CN113945220A (en) 2020-07-15 2020-07-15 Navigation method and device

Country Status (1)

Country Link
CN (1) CN113945220A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007240193A (en) * 2006-03-06 2007-09-20 Denso Corp Landmark notification system, vehicle-mounted navigation apparatus, and vehicle-mounted navigation system
CN102788588A (en) * 2011-05-19 2012-11-21 昆达电脑科技(昆山)有限公司 Navigation system and navigation method therefor
CN105300392A (en) * 2014-05-27 2016-02-03 中国电信股份有限公司 Method, device and system for displaying planned routes in street view map
CN105910613A (en) * 2016-03-30 2016-08-31 宁波元鼎电子科技有限公司 Self-adaptive navigation method and system for walking based on virtual reality
CN108827307A (en) * 2018-06-05 2018-11-16 Oppo(重庆)智能科技有限公司 Air navigation aid, device, terminal and computer readable storage medium
CN109073404A (en) * 2016-05-02 2018-12-21 谷歌有限责任公司 For the system and method based on terrestrial reference and real time image generation navigation direction
CN110686694A (en) * 2019-10-25 2020-01-14 深圳市联谛信息无障碍有限责任公司 Navigation method, navigation device, wearable electronic equipment and computer readable storage medium
CN110809706A (en) * 2017-12-15 2020-02-18 谷歌有限责任公司 Providing street level images related to ride services in a navigation application


Similar Documents

Publication Publication Date Title
US11721098B2 (en) Augmented reality interface for facilitating identification of arriving vehicle
JP6418266B2 (en) Three-dimensional head-up display device that displays visual context corresponding to voice commands
US10068377B2 (en) Three dimensional graphical overlays for a three dimensional heads-up display unit of a vehicle
CN109949439B (en) Driving live-action information labeling method and device, electronic equipment and medium
CN109817022B (en) Method, terminal, automobile and system for acquiring position of target object
CN109064763A (en) Test method, device, test equipment and the storage medium of automatic driving vehicle
JP2005268847A (en) Image generating apparatus, image generating method, and image generating program
CN108139202A (en) Image processing apparatus, image processing method and program
CN106164931B (en) Method and device for displaying objects on a vehicle display device
JP7205204B2 (en) Vehicle control device and automatic driving system
CN110431378B (en) Position signaling relative to autonomous vehicles and passengers
CN108028883A (en) Image processing apparatus, image processing method and program
KR20240019763A (en) Object detection using image and message information
CN114935334B (en) Construction method and device of lane topological relation, vehicle, medium and chip
CN114096996A (en) Method and apparatus for using augmented reality in traffic
KR102406490B1 (en) Electronic apparatus, control method of electronic apparatus, computer program and computer readable recording medium
US11956693B2 (en) Apparatus and method for providing location
CN110770540B (en) Method and device for constructing environment model
CN114677848B (en) Perception early warning system, method, device and computer program product
CN113945220A (en) Navigation method and device
CN111661054B (en) Vehicle control method, device, electronic device and storage medium
CN114755670A (en) System and method for assisting a driver and a passenger in positioning each other
WO2020100540A1 (en) Information processing device, information processing system, information processing method, and program
JP2022056153A (en) Temporary stop detection device, temporary stop detection system, and temporary stop detection program
CN106681000B (en) Augmented reality registration device and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination