CN113063421A - Navigation method and related device, mobile terminal and computer readable storage medium - Google Patents

Navigation method and related device, mobile terminal and computer readable storage medium Download PDF

Info

Publication number
CN113063421A
Authority
CN
China
Prior art keywords
current
image
destination
preset
information
Prior art date
Legal status
Pending
Application number
CN202110297597.0A
Other languages
Chinese (zh)
Inventor
符修源
李宇飞
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202110297597.0A priority Critical patent/CN113063421A/en
Publication of CN113063421A publication Critical patent/CN113063421A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/20 - Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The application discloses a navigation method, a related device, a mobile terminal and a computer-readable storage medium. The navigation method includes: acquiring path planning information from a current position to a destination, where the path planning information is obtained using a current image captured at the current position and a destination image of the destination; and displaying an AR indicator on the current image based on the path planning information. This scheme can improve navigation reliability.

Description

Navigation method and related device, mobile terminal and computer readable storage medium
Technical Field
The present application relates to the field of machine vision technologies, and in particular, to a navigation method, a related apparatus, a mobile terminal, and a computer-readable storage medium.
Background
In daily work and life, people often need navigation. For example, in places with complex terrain and winding paths, such as scenic spots and shopping malls, navigation is often required to reach a destination smoothly.
At present, existing navigation methods generally depend heavily on satellite positioning. When used indoors and in similar places, the satellite positioning signal is weak, positioning cannot be performed accurately, and reliability drops. In view of this, how to improve navigation reliability is an urgent problem to be solved.
Disclosure of Invention
The application provides a navigation method, a related device, a mobile terminal and a computer readable storage medium.
A first aspect of the present application provides a navigation method, including: acquiring path planning information from a current position to a destination, where the path planning information is obtained using a current image captured at the current position and a destination image of the destination; and displaying an AR indicator on the current image based on the path planning information.
Because the path planning information is obtained using only the current image captured at the current position and the destination image of the destination, and the AR indicator is displayed on the current image based on this information, navigation can be realized without relying on satellite positioning, which improves navigation reliability.
Before acquiring the path planning information from the current position to the destination, the method further includes: acquiring the destination image, and sending the current image and the destination image to a server. Acquiring the path planning information then includes: receiving path planning information obtained by the server by performing positioning analysis on the current image and the destination image using a preset visual map. The current position and the destination are both within a preset scene, and the preset visual map is created from video data captured of the preset scene.
Because the destination image is acquired before the path planning information, the current image and the destination image are sent to the server together, and the server performs the positioning analysis using the preset visual map, the computational load of navigation on the terminal can be reduced.
Acquiring the destination image includes: receiving an image captured by a mobile terminal at the destination as the destination image; or, if the destination image is not received within a preset waiting time, the first mobile terminal downloads the destination image from the internet.
Receiving an image captured by a mobile terminal at the destination improves the convenience of navigation between different users, and downloading the destination image from the internet when it is not received within the preset waiting time improves navigation robustness.
Displaying the AR indicator on the current image based on the path planning information includes: determining position information of the current position using the current image; and updating the AR indicator based on the path planning information and the position information, and displaying the updated AR indicator on the current image.
Because the AR indicator is updated according to the position information during navigation, navigation accuracy can be improved.
Determining the position information of the current position using the current image includes: performing positioning analysis on the current image using a preset local positioning method to obtain first position information of the current position; if the current moment meets a preset condition, sending the current image to the server and receiving second position information obtained by the server through positioning analysis of the current image; and correcting the first position information using the second position information to obtain the position information of the current position.
Because the current image is only sent to the server when the preset condition is met, and the server's result is used to correct the locally obtained first position information, positioning accuracy and navigation accuracy can be improved while keeping the communication cost with the server low.
The preset condition includes: the time elapsed since the last position correction reaching a preset correction duration; and/or the preset local positioning method includes at least one of: simultaneous localization and mapping (SLAM), and inertial navigation.
Setting the preset condition to be that the time since the last position correction reaches the preset correction duration allows the frequency of sending the current image to the server for positioning to be adjusted as needed, maintaining navigation accuracy while reducing the load on the mobile terminal; setting the preset local positioning method to include SLAM and/or inertial navigation helps reduce the software and hardware cost of the mobile terminal.
Correcting the first position information using the second position information to obtain the position information of the current position at least includes: using the second position information as the position information of the current position.
Directly using the second position information as the position information of the current position reduces the complexity of the correction process.
After determining the position information of the current position using the current image, the method further includes: sending the position information to a first target object; and/or, if the position information is within a preset range of the destination, detecting whether a second target object exists in the current image, and if so, outputting a target area of the second target object in the current image.
Sending the position information to the first target object notifies it of the current position in time, which improves navigation safety and user experience; detecting the second target object and outputting its target area in the current image when the user is near the destination saves the user from having to search for the second target object laboriously, which also improves user experience.
Acquiring the path planning information from the current position to the destination includes: acquiring the destination image; and performing positioning analysis on the current image and the destination image using a preset visual map to obtain the path planning information.
Performing the positioning analysis locally with the preset visual map, directly on the current image and the destination image, reduces the communication cost with the server.
The angle between the horizontal plane and the shooting view angle of each of the current image and the destination image is within a preset angle range.
Constraining the shooting view angles of the current image and the destination image to the preset angle range greatly reduces image content such as the ground and the sky that is useless for positioning, which improves the success rate and accuracy of positioning.
A second aspect of the present application provides a navigation device comprising: the route planning module is used for acquiring route planning information from a current position to a destination; the route planning information is obtained by using a current image shot at the current position and a destination image of a destination; and the path indication module is used for displaying the AR indication mark on the current image based on the path planning information.
A third aspect of the present application provides a mobile terminal, which includes a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the navigation method in the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the navigation method of the first aspect described above.
According to the above scheme, the path planning information from the current position to the destination is obtained using only the current image captured at the current position and the destination image of the destination, and the AR indicator is displayed on the current image based on this information, so navigation can be realized without relying on satellite positioning, and navigation reliability can be improved.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a navigation method of the present application;
FIG. 2 is a flowchart illustrating an embodiment of step S13 in FIG. 1;
FIG. 3 is a flow chart illustrating another embodiment of a navigation method of the present application;
FIG. 4 is a schematic diagram of one embodiment of a visual navigation interface;
FIG. 5 is a block diagram of an embodiment of a navigation device according to the present application;
FIG. 6 is a block diagram of a mobile terminal according to an embodiment of the present application;
FIG. 7 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a navigation method of the present application. Specifically, the method may include the steps of:
step S11: acquiring path planning information from a current position to a destination; wherein the path planning information is obtained by using a current image taken at the current position and a destination image of the destination.
The steps in this embodiment and the following embodiments of the present disclosure may be executed by a mobile terminal; that is, a user may hold (or wear) the mobile terminal to navigate with it. The mobile terminal may include, but is not limited to: a mobile phone, a tablet computer, smart glasses, etc., without limitation.
In one implementation scenario, the current image may be obtained by the mobile terminal capturing the current position. For example, when a user at the current position needs to navigate to a destination, the current position may be photographed with the mobile terminal and the captured image taken as the current image. Capturing the current position may include: shooting a building (such as a landmark building) at the current position, shooting a guideboard at the current position, shooting a street at the current position, and so on, which may be set according to actual application needs and is not limited here.
In a specific implementation scenario, the mobile terminal may run a preset navigation program, the mobile terminal may display an image input interface, the image input interface may specifically include a current position image input box, and the mobile terminal may display a current position image import interface in response to a preset operation instruction (e.g., a click operation) of the user on the current position image input box, so that the user may input a current image of a current position.
In one implementation scenario, the mobile terminal may receive an image captured by a mobile terminal at the destination as the destination image. For ease of distinction, the mobile terminal at the current position may be referred to as the first mobile terminal, and the mobile terminal at the destination as the second mobile terminal. When the user of the first mobile terminal and the user of the second mobile terminal are in different places, navigation from the current position to the destination can be realized by capturing a current image of the current position and a destination image of the destination respectively, having the second mobile terminal send the destination image to the first mobile terminal, and then performing visual positioning analysis. The second mobile terminal may include, but is not limited to: a mobile phone, a tablet computer, smart glasses, etc., without limitation.
In another implementation scenario, if the destination image is not received within the preset waiting time, the first mobile terminal may further download the destination image from the internet, so that navigation robustness may be improved.
In a specific implementation scenario, the preset waiting time period may be set according to an actual application requirement, for example, may be set to 30 seconds, 1 minute and 30 seconds, and is not limited herein.
In another specific implementation scenario, downloading the destination image from the internet may include, but is not limited to: downloading the destination image with an internet search engine, or downloading it, according to the user's operation instruction, from the destination user's social posts (e.g., Moments, microblog), and so on, which may be set according to actual application needs. With this arrangement, the destination image can still be acquired when it is inconvenient for the second mobile terminal to communicate with the first mobile terminal, which helps improve navigation robustness.
In one implementation scenario, to improve the success rate and accuracy of positioning, the angle between the horizontal plane and the shooting view angle of each of the current image and the destination image is within a preset angle range. The preset angle range can be set according to the actual application. For example, when the requirements on positioning success rate and accuracy are high, the preset angle range may be set slightly smaller, for example 0 to 15 degrees; when the requirements are relatively loose, it may be set slightly larger, for example 0 to 30 degrees, and is not limited here. In this way, image content such as the ground and the sky that is useless for positioning is greatly reduced, which improves the success rate and accuracy of positioning.
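As a minimal sketch of this constraint (not from the patent; the pitch source and function names are illustrative assumptions), a capture can be accepted only when the camera's pitch relative to the horizontal plane, as reported by the terminal's IMU, falls inside the preset angle range:

```python
# Illustrative sketch: reject a capture whose pitch angle falls outside the
# preset range, so the frame contains mostly facades and signage rather than
# ground or sky. The pitch value is assumed to come from the device IMU.

PRESET_ANGLE_RANGE_DEG = (0.0, 30.0)  # e.g. the looser range mentioned above

def capture_is_usable(pitch_deg: float,
                      angle_range=PRESET_ANGLE_RANGE_DEG) -> bool:
    """True if the camera pitch relative to the horizontal plane lies
    within the preset angle range."""
    low, high = angle_range
    return low <= abs(pitch_deg) <= high

# A frame shot 10 degrees above the horizon passes; one pointing 60 degrees
# down at the floor does not.
assert capture_is_usable(10.0)
assert not capture_is_usable(-60.0)
```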
In an implementation scenario, in a case that a mobile terminal at a current location (i.e., the aforementioned first mobile terminal) has hardware resources (e.g., storage resources, computing resources) meeting requirements, after the destination image is acquired, the current image and the destination image may be directly subjected to positioning analysis by using a preset visual map, so as to obtain path planning information from the current location to the destination. Compared with the mode that the current image and the destination image are uploaded to the server and the server is used for carrying out visual positioning to obtain the path planning information, the mode is favorable for greatly reducing the communication cost with the server.
In a specific implementation scenario, the preset visual map is created from video data captured of the preset scene, and the current position and the destination are both within the preset scene. For example, for mall navigation the preset scene may be the indoor map of the mall (e.g., the indoor map of each above-ground floor and each underground floor), and for scenic-spot navigation the preset scene may be the indoor/outdoor map of the scenic spot; other scenes can be deduced by analogy and are not illustrated here. Specifically, feature points and their descriptors may be extracted from the video frames using a feature extraction method such as ORB (Oriented FAST and Rotated BRIEF) or SIFT (Scale-Invariant Feature Transform), and several sets of matched feature point pairs obtained based on the similarity between descriptors. On this basis, the pose parameters of the video frames may be computed from the feature point pairs, and the feature points mapped into three-dimensional space using the pose parameters, thereby constructing the preset visual map.
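The following is a greatly simplified sketch of that map-building step, assuming OpenCV is available and the camera intrinsics K are known; it only chains consecutive frame pairs and omits loop closure, scale handling and global optimization that a real mapping pipeline would need:

```python
# Extract ORB features per frame, match descriptors between consecutive
# frames, recover the relative pose, and triangulate matched points into
# 3D map points. This is an illustrative sketch, not the patented pipeline.
import cv2
import numpy as np

def build_visual_map(frames, K):
    orb = cv2.ORB_create(nfeatures=2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    map_points = []
    prev_kp, prev_des = None, None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp, des = orb.detectAndCompute(gray, None)
        if prev_des is not None and des is not None:
            matches = matcher.match(prev_des, des)
            if len(matches) >= 8:
                pts1 = np.float32([prev_kp[m.queryIdx].pt for m in matches])
                pts2 = np.float32([kp[m.trainIdx].pt for m in matches])
                # Relative pose between the two frames via the essential matrix.
                E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC)
                _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
                # Triangulate the matched feature points into 3D map points.
                P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
                P2 = K @ np.hstack([R, t])
                pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
                map_points.append((pts4d[:3] / pts4d[3]).T)
        prev_kp, prev_des = kp, des
    return np.vstack(map_points) if map_points else np.empty((0, 3))
```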
In another specific implementation scenario, the position information of the current position and the position information of the destination can be obtained through positioning by using a preset visual map, and then the path planning information from the current position to the destination can be obtained according to the position information of the current position and the position information of the destination.
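The patent does not name a specific planner; a common choice, shown here as an assumption, is a shortest-path search over a walkable waypoint graph derived from the visual map, with the localized current position and destination as start and goal:

```python
# Minimal Dijkstra sketch over a navigation graph. Nodes are illustrative
# map waypoints; edges carry walking distances in metres.
import heapq

def plan_path(graph, start, goal):
    """graph: {node: [(neighbor, distance_m), ...]}. Returns a waypoint list."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, dist in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + dist, nxt, path + [nxt]))
    return None  # no route found

# Usage: start and goal come from localizing the current image and the
# destination image against the preset visual map.
```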
In another implementation scenario, in order to reduce the computational load of the mobile terminal at the current location (i.e., the aforementioned first mobile terminal), after the destination image is obtained, the current image and the destination image may be sent to the server, so as to receive the path planning information obtained by the server performing positioning analysis on the current image and the destination image by using the preset visual map. Specifically, the creation manner of the preset visual map may refer to the foregoing description, and is not described herein again.
In yet another implementation scenario, the mobile terminal at the current position (i.e., the first mobile terminal) may also directly send the current image, labeled with first identification information, to the server. The first identification information may include the location attribute of the current image (e.g., destination or current position) and the identities of the peer user and the local user (e.g., user name, account number). Likewise, the mobile terminal at the destination (i.e., the second mobile terminal) may directly send the destination image, labeled with second identification information, to the server; the second identification information may similarly include the location attribute of the destination image and the identities of the local and peer users. If the local-user identity labeled on the current image matches the peer-user identity labeled on the destination image, and the peer-user identity labeled on the current image matches the local-user identity labeled on the destination image, the server can bind the current image and the destination image into one group of images requiring visual positioning, and then determine the path planning information from the current position to the destination based on the location attributes labeled on the two images.
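A hypothetical server-side pairing check for that binding rule could look as follows; the field names are illustrative, not taken from the patent:

```python
# Two uploads are bound when each image's local-user identity matches the
# other image's peer-user identity and their location attributes differ.
from dataclasses import dataclass

@dataclass
class Upload:
    image_id: str
    location_attr: str   # "current" or "destination"
    local_user: str
    peer_user: str

def can_bind(a: Upload, b: Upload) -> bool:
    return (a.location_attr != b.location_attr
            and a.local_user == b.peer_user
            and a.peer_user == b.local_user)
```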
In another implementation scenario, as described above, the mobile terminal at the current position (i.e., the first mobile terminal) may run a preset navigation program (see the foregoing description). On this basis, the image input interface may further include a destination image input box; after the destination image is obtained, a destination image import interface may be displayed in response to a preset operation instruction (e.g., a click) on the destination image input box, so that the user can input the destination image. A start-navigation button may further be provided on the image input interface; after a preset operation instruction (e.g., a click) on this button is received, the current image and destination image input by the user are uploaded to the server so as to receive the path planning information obtained by the server through positioning analysis of the two images with the preset visual map, or the path planning information is obtained by positioning analysis with a preset visual map stored on the mobile terminal at the current position (i.e., the first mobile terminal).
In one implementation scenario, the path planning information may include turn information and distance information. For example, the path planning information from the current position to the destination may be: go straight 500 meters, turn left and continue straight 200 meters, then turn right and continue straight 300 meters. In other implementation scenarios it can be set according to the actual situation; the examples are not enumerated one by one. The above example is only one possible case in an actual implementation and does not limit other cases.
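One possible representation of such turn/distance instructions is sketched below; this data format is an assumption for illustration, the patent does not fix one:

```python
# Route steps carrying an action and the distance of that leg, matching the
# worked example above.
from dataclasses import dataclass
from typing import List

@dataclass
class RouteStep:
    action: str        # "straight", "turn_left", "turn_right"
    distance_m: float  # distance to travel on this leg

route: List[RouteStep] = [
    RouteStep("straight", 500),
    RouteStep("turn_left", 0),
    RouteStep("straight", 200),
    RouteStep("turn_right", 0),
    RouteStep("straight", 300),
]
```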
Step S12: and displaying the AR indication mark on the current image based on the path planning information.
In this embodiment of the present disclosure, the AR indicator may be set according to actual application needs; for example, a left turn may be represented by a left-turn arrow, a right turn by a right-turn arrow, and going straight by a straight arrow, and other cases can be deduced by analogy and are not illustrated one by one here. Displaying the AR indicator on the current image improves the display effect of the guidance and improves user experience.
In one implementation scenario, while the mobile terminal at the current position (i.e., the first mobile terminal) is moving, the current image may be used to obtain the position information of the current position, so that the AR indicator can be updated based on the path planning information and the position information, and the updated AR indicator displayed on the current image. Taking the foregoing path planning information as an example, when the position information indicates that the user has gone straight 500 meters from the starting position, a left-turn indicator (e.g., a left-turn arrow) may be displayed on the current image; when the position information indicates that the user has turned left and gone straight another 200 meters, a right-turn indicator (e.g., a right-turn arrow) may be displayed on the current image, and so on; no further examples are given here. In this way, the current image is used for positioning to obtain the position information of the current position, and the AR indicator is updated during navigation by combining the path planning information with that position information, which improves navigation accuracy.
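A greatly simplified sketch of that update logic follows, reusing the illustrative RouteStep structure from the sketch above; it only reports the current leg's action and remaining distance, whereas a real implementation would also anticipate upcoming turns:

```python
# Pick the AR indicator to show from the planned route and the distance
# already traveled (derived from the visually localized positions).
def current_indicator(route, traveled_m: float) -> str:
    remaining = traveled_m
    for step in route:
        if remaining < step.distance_m:
            # Still inside this leg: show its action and the metres left on it.
            return f"{step.action} ({step.distance_m - remaining:.0f} m remaining)"
        remaining -= step.distance_m
    return "arrived"

# e.g. with the route above, traveled_m=600 yields "straight (100 m remaining)".
```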
In another implementation scenario, during the moving process of the mobile terminal at the current location (i.e., the first mobile terminal), the moving speed, the moving direction, and the like of the first mobile terminal may also be obtained according to an accelerometer and the like of the first mobile terminal, so that the location information of the current location may be determined according to the time length from the current time to the start of the first mobile terminal, and then the AR indicator is updated according to the path planning information and the location information of the current location, and the updated AR indicator is displayed on the current image.
In yet another implementation scenario, the mobile terminal at the current position (i.e., the first mobile terminal) may run a preset navigation program as described above. In addition, the preset navigation program in this embodiment may be integrated into a platform such as social software, so that the first mobile terminal can directly receive, through that platform, a destination image captured by the mobile terminal at the destination (i.e., the second mobile terminal). Combined with the current image captured at the current position, the preset navigation program may then automatically (or after receiving the user's start-navigation instruction) perform positioning analysis on the current image and the destination image to obtain the path planning information from the current position to the destination, automatically (or after receiving the user's authorization) keep the camera on to display the current image captured at the current position, and display the AR indicator on the current image.
In other implementation scenarios, except that the AR indicator may be displayed on the current image, navigation may be performed in a manner of voice broadcast, and the like, and the navigation may be specifically set according to the actual application needs, which is not limited herein.
In an implementation scenario, after obtaining the location information of the current location, the location information of the current location may be further sent to the first target object. By the method, the first target object can be timely informed of the position information of the current position, so that the navigation safety is favorably improved, and the user experience is improved.
In a specific implementation scenario, the first target object may include a preset emergency contact, which may include but is not limited to: parents, spouse, teachers, friends, etc., without limitation. For example, while a minor navigates by holding (or wearing) the mobile terminal, the position information obtained during movement can be sent to the parents, so that the parents can learn the child's whereabouts outside the home promptly and in real time.
In another specific implementation scenario, the first target object may also include a user at the destination. For example, during the navigation process of the user at the current position by holding (or wearing) the mobile terminal, the position information obtained by positioning during the moving process can be sent to the user at the destination, so that the user at the destination can know the action track of the user at the current position in time and in real time.
In an implementation scenario, in a case that it is detected that the location information of the current location is within the preset range of the destination, it may be further detected whether a second target object exists in the current image of the current location, and in a case that the second target object exists in the current image, a target area of the second target object in the current image may be output. By the method, the user can be prevented from laboriously searching for the second target object after arriving near the destination, and the user experience is favorably improved.
In a specific implementation scenario, the preset range may be a circular area centered on the destination with a preset value as its radius. The preset value may be set according to the actual situation. For example, to improve the accuracy and success rate of detecting the second target object in the current image when the destination is a place with dense pedestrian traffic, such as a commercial street or shopping mall, the preset value may be set slightly smaller, for example 5 meters; when the destination is a place with sparse pedestrian traffic, it may be set slightly larger, for example 10 meters, and is not limited here.
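As a minimal illustration (an assumption consistent with the description above, using planar map coordinates), the "within the preset range of the destination" test reduces to a distance check against that radius:

```python
# Circle-membership check around the destination in map coordinates.
import math

def within_destination_range(pos, dest, radius_m=5.0) -> bool:
    dx, dy = pos[0] - dest[0], pos[1] - dest[1]
    return math.hypot(dx, dy) <= radius_m
```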
In another specific implementation scenario, the second target object may specifically include a user at a destination, and the like, which is not limited herein. That is, in the case that the current location is located near the destination, the second target object in the current image may be detected, and the target area of the second target object in the current image may be output, so that searching for the second target object may be facilitated, and the user experience may be improved.
In yet another specific implementation scenario, a neural network such as a CNN (convolutional neural network) may be used to detect the second target object in the current image; the specific detection process is not described here.
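The patent only says a CNN may be used; as one concrete illustration (not the patented detector), an off-the-shelf torchvision detection model can return the target area (bounding box) of a person in the current image:

```python
# Sketch: detect a person in the current image and return its bounding box.
# Assumes a torchvision install with downloadable pretrained weights.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_person_box(image):
    """image: RGB PIL image or HxWx3 array; returns [x1, y1, x2, y2] or None."""
    with torch.no_grad():
        pred = model([to_tensor(image)])[0]
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if label.item() == 1 and score.item() > 0.7:  # COCO class 1 = person
            return box.tolist()  # target area of the second target object
    return None
```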
According to the above scheme, the path planning information from the current position to the destination is obtained using only the current image captured at the current position and the destination image of the destination, and the AR indicator is displayed on the current image based on this information, so navigation can be realized without relying on satellite positioning, and navigation reliability can be improved.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating an embodiment of step S13 in fig. 1. Specifically, the method may include the steps of:
step S131: position information of the current position is determined using the current image.
In one implementation scenario, to improve navigation efficiency and reduce the communication cost with the server, a preset local positioning method may be used to perform positioning analysis on the current image and obtain the position information of the current position. The preset local positioning method may include at least one of: simultaneous localization and mapping (SLAM) and inertial navigation. Inertial navigation determines the position of the device using a gyroscope and an accelerometer installed on it: from their measurement data, the motion of the device in an inertial reference frame can be determined and its position in that frame calculated. Simultaneous localization and mapping means that the device localizes itself in an unknown environment through its internal sensors (e.g., gyroscope, accelerometer) and external sensors (e.g., laser sensor, visual sensor); the specific positioning process is not described here.
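A toy dead-reckoning sketch of the inertial-navigation idea follows; real SLAM or inertial pipelines are far more involved, and this only shows the integration principle under the assumption that the acceleration has already been rotated into the reference frame and gravity-compensated:

```python
# One dead-reckoning integration step: accumulate velocity and position from
# gravity-compensated acceleration expressed in the reference frame.
import numpy as np

def dead_reckon(position, velocity, accel_world, dt):
    velocity = velocity + accel_world * dt
    position = position + velocity * dt + 0.5 * accel_world * dt ** 2
    return position, velocity

pos, vel = np.zeros(3), np.zeros(3)
pos, vel = dead_reckon(pos, vel, np.array([0.2, 0.0, 0.0]), dt=0.02)
```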
In another implementation scenario, in order to improve the positioning accuracy, the current image may also be sent to a server, and the receiving server performs positioning analysis on the current image to obtain the position information of the current position. Specifically, the server may perform positioning analysis on the current image by using a preset visual map to obtain the position information of the current position. The preset visual map may specifically refer to the related description in the foregoing disclosed embodiments, and is not described herein again.
In yet another implementation scenario, to improve navigation efficiency, reduce the communication cost with the server, and still improve positioning accuracy, a preset local positioning method may be used to perform positioning analysis on the current image and obtain first position information of the current position; when the current moment meets a preset condition, the current image is sent to the server, second position information obtained by the server through positioning analysis is received, and the first position information is corrected using the second position information to obtain the position information of the current position. With this arrangement, positioning accuracy and navigation accuracy can be improved while keeping the communication cost with the server low.
In a specific implementation scenario, the preset condition may include: the time elapsed since the last position correction reaching a preset correction duration. The preset correction duration may be set according to the actual application requirement; for example, when the requirement on positioning accuracy is high it may be set smaller, such as 30 seconds or 1 minute, and when the requirement is lower it may be set larger, such as 3 minutes or 5 minutes, without limitation here. When the time since the last position correction has not yet reached the preset correction duration, the accumulated error of the first position information obtained with the preset local positioning method can be considered to still be within an acceptable range (e.g., within 5 or 10 meters), so no correction is needed and the first position information can be used as the position information of the current position; when the time since the last position correction reaches the preset correction duration, the accumulated error can be considered to exceed that range and correction is required. Therefore, when the elapsed time has not reached the preset correction duration, the current image does not need to be sent to the server and the first position information is used directly as the position information of the current position; when it has, the current image is sent to the server and the second position information is used directly as the position information of the current position.
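A sketch of that correction schedule is shown below; the function and parameter names, and the way the server query is passed in, are illustrative assumptions rather than the patent's interface:

```python
# Keep the locally estimated first position unless the preset correction
# interval has elapsed; otherwise send the current image to the server and
# use the returned second position as the corrected position.
import time

CORRECTION_INTERVAL_S = 60.0  # "preset correction duration", e.g. one minute

def resolve_position(first_pos, current_image, last_correction_ts, query_server):
    now = time.time()
    if now - last_correction_ts < CORRECTION_INTERVAL_S:
        return first_pos, last_correction_ts      # local estimate is sufficient
    second_pos = query_server(current_image)      # server-side visual positioning
    return second_pos, now                        # use it as the corrected position
```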
In another implementation scenario, when the mobile terminal at the current location (i.e., the first mobile terminal) has certain hardware resources (e.g., storage resources, and computing resources), the first mobile terminal may also directly use a preset visual map stored therein to locate the current image, so as to obtain the location information of the current location. Reference may be made to the related description in the foregoing embodiments, which are not repeated herein.
Step S132: and updating the AR indication mark based on the path planning information and the position information, and displaying the updated AR indication mark on the current image.
After the position information of the current position is obtained, a remaining path from the current position to the destination may be obtained based on the path planning information and the position information, so that the AR indicator may be updated based on the remaining path, and the updated AR indicator may be displayed on the current image. Still taking the path planning information in the foregoing disclosed embodiment as an example, if the location information indicates that the mobile terminal at the current location (i.e., the foregoing first mobile terminal) has traveled 200 meters straight, the initial AR indicator "500 meters straight" may be updated to "300 meters straight", and so on, which is not illustrated here. The above example is only one possible case in the actual implementation process, and does not limit other cases in the actual implementation process.
Different from the embodiment, the position information of the current position is determined by using the current image, the AR indication mark is updated based on the path planning information and the position information, and the updated AR indication mark is displayed on the current image, so that the AR indication mark can be updated according to the position information in the navigation process, and the navigation accuracy can be improved.
Referring to fig. 3, fig. 3 is a schematic flow chart diagram illustrating a navigation method according to another embodiment of the present application. Specifically, the method may include the steps of:
step S31: a current image of a current position is acquired, and an image photographed by a mobile terminal at a destination is received as a destination image.
In an implementation scenario, the image captured by the mobile terminal at the destination may specifically include an image of a destination captured by the mobile terminal, for example, a building (e.g., a landmark building) of the destination, a guideboard of the destination, a street of the destination, and the like, and may specifically be set according to actual application needs, which is not limited herein.
In one implementation scenario, the current image may be captured at the current position. For example, when a user at the current position needs to navigate to a destination, the current position may be photographed and the captured image used as the current image. Capturing the current position may include: shooting a building (such as a landmark building) at the current position, shooting a guideboard at the current position, shooting a street at the current position, and so on, which may be set according to actual application needs and is not limited here.
In an implementation scenario, the destination image may be received through social software, multimedia message, and the like, and may be set according to an actual application situation, which is not limited herein.
Step S32: and sending the current image and the destination image to a server, and receiving path planning information obtained by positioning and analyzing the current image and the destination image by the server by using a preset visual map.
Specifically, reference may be made to the relevant steps in the foregoing embodiments, which are not described herein again.
Step S33: and displaying the AR indication mark on the current image based on the path planning information.
Specifically, reference may be made to the relevant steps in the foregoing embodiments, which are not described herein again.
In a specific implementation scenario, the mobile terminal at the current position (i.e., the first mobile terminal) may run a preset navigation program and display an image input interface that includes a current-position image input box and a destination image input box. In response to a preset operation instruction (e.g., a click) on the current-position image input box, the first mobile terminal may display a current-position image import interface for the user to input the current image; after the destination image is obtained, it may, in response to a preset operation instruction (e.g., a click) on the destination image input box, display a destination image import interface for the user to input the destination image. A start-navigation button may further be provided on the image input interface; after a preset operation instruction (e.g., a click) on this button is received, the current image and the destination image input by the user are uploaded to the server, so as to receive the path planning information obtained by the server through positioning analysis of the two images with the preset visual map. The preset navigation program may further display a visual navigation interface; after the path planning information is obtained, the visual navigation interface may display the current image with the AR indicator on it. Referring to fig. 4, fig. 4 is a schematic diagram of an embodiment of a visual navigation interface. As shown in fig. 4, the current image may be displayed on the visual navigation interface and the AR indicator displayed on the current image, realizing immersive visual navigation and improving user experience.
Different from the foregoing embodiments, a current image of the current position is acquired and an image captured by the mobile terminal at the destination is received as the destination image; the current image and the destination image are sent to the server, the server performs positioning analysis on them with the preset visual map to obtain the path planning information, and the AR indicator is then displayed on the current image based on the path planning information, so satellite positioning is not needed and navigation reliability can be improved.
Referring to fig. 5, fig. 5 is a schematic diagram of a navigation device 50 according to an embodiment of the present application. The navigation device 50 comprises a path planning module 51 and a path indicating module 52, wherein the path planning module 51 is used for acquiring path planning information from a current position to a destination; the route planning information is obtained by using a current image shot at the current position and a destination image of a destination; the path indication module 52 is configured to display the AR indicator on the current image based on the path planning information.
According to the scheme, the path planning information from the current position to the destination is obtained, the path planning information is obtained by using the current image shot at the current position and the destination image of the destination, and the AR indication mark is displayed on the current image based on the path planning information, so that navigation can be realized only by using the current image of the current position and the destination image of the destination without depending on satellite positioning, and the navigation reliability can be improved.
In some disclosed embodiments, the navigation device 50 further includes an image obtaining module for obtaining a destination image, the navigation device 50 further includes an image sending module for sending the current image and the destination image to a server, and the path planning module 51 is specifically configured to receive path planning information obtained by the server performing positioning analysis on the current image and the destination image by using a preset visual map; the current position and the destination are both in a preset scene, and the preset visual map is created by using video data shot for the preset scene.
Different from the embodiment, before the path planning information is obtained, the destination image is obtained, so that the current image and the destination image are sent to the server together, then the path planning information obtained by the server through positioning analysis of the current image and the destination image by using the preset visual map is received, the current position and the destination are both in the preset scene, the preset visual map is created by using video data shot for the preset scene, and then the positioning analysis is executed by the server, so that the load of realizing navigation at the terminal can be reduced.
In some disclosed embodiments, the image obtaining module is specifically configured to receive an image captured by a mobile terminal at a destination as the destination image, or, in a case that the destination image is not received within a preset waiting time period, download the destination image from the internet.
Unlike the foregoing embodiment, by receiving an image photographed by a mobile terminal at a destination to thereby obtain a destination image, it is possible to contribute to improvement of navigation convenience between different users; and under the condition that the destination image is not received within the preset waiting time, the destination image is downloaded from the Internet, so that the navigation robustness can be improved.
In some disclosed embodiments, path indication module 52 includes a location sub-module for determining location information for a current location using a current image; the path indicating module 52 further includes an updating sub-module, configured to update the AR indicator based on the path planning information and the location information, and display the updated AR indicator on the current image.
Different from the embodiment, the position information of the current position is determined by using the current image, the AR indication mark is updated based on the path planning information and the position information, and the updated AR indication mark is displayed on the current image, so that the AR indication mark can be updated according to the position information in the navigation process, and the navigation accuracy can be improved.
In some disclosed embodiments, the positioning sub-module includes a local positioning unit configured to perform positioning analysis on a current image by using a preset local positioning manner to obtain first position information of a current position, the positioning sub-module includes an online positioning unit configured to send the current image to the server and receive second position information obtained by performing positioning analysis on the current image by the server when a current time meets a preset condition, and the positioning sub-module includes a position correction unit configured to perform position correction on the first position information by using the second position information to obtain position information of the current position.
Different from the embodiment, the current image is positioned and analyzed by using a preset local positioning mode to obtain first position information of the current position, and the current image is sent to the server under the condition that the current moment meets the preset condition, so that second position information obtained by the server through positioning and analyzing the current image is received, the first position information is subjected to position correction by using the second position information to obtain the position information of the current position, and the method is favorable for improving positioning accuracy and navigation accuracy on the basis of reducing communication cost with the server.
In some disclosed embodiments, the preset condition includes: the time elapsed since the last position correction reaching a preset correction duration; and/or the preset local positioning method includes at least one of: simultaneous localization and mapping (SLAM), and inertial navigation.
Different from the foregoing embodiment, setting the preset condition to be that the time since the last position correction reaches the preset correction duration allows the frequency of sending the current image to the server for positioning to be adjusted as needed, maintaining navigation accuracy while reducing the load on the mobile terminal; setting the preset local positioning method to include SLAM and/or inertial navigation helps reduce the software and hardware cost of the mobile terminal.
In some disclosed embodiments, the location correction unit is specifically configured to use the second location information as the location information of the current location.
Different from the foregoing embodiment, directly using the second position information as the position information of the current position when performing the position correction reduces the complexity of the correction process.
In some disclosed embodiments, the navigation device 50 further comprises a location information sending module for sending location information to the first target object.
Different from the embodiment, after the position information of the current position is determined by using the current image, the position information is further sent to the first target object, so that the position information of the current position can be timely notified to the first target object, the navigation safety is favorably improved, and the user experience is improved.
In some disclosed embodiments, the navigation device 50 further includes a location range detection module for detecting whether the location information is within a destination preset range, and the navigation device 50 further includes a target object detection module for detecting whether a second target object exists in the current image if the location information is within the destination preset range, and for outputting a target area of the second target object in the current image if the second target object exists in the current image.
Different from the foregoing embodiment, in the case that the location information is within the preset range of the destination, whether the second target object exists in the current image is detected, and in the case that the second target object exists in the current image, the target area of the second target object in the current image is output, so that the user is prevented from laboriously searching for the second target object after reaching the vicinity of the destination, and the user experience is favorably improved.
In some disclosed embodiments, the path planning module 51 includes a destination image obtaining sub-module for obtaining the destination image, and a local path planning sub-module for performing positioning analysis on the current image and the destination image by using a preset visual map to obtain the path planning information.
Different from the foregoing embodiment, the destination image is obtained and the positioning analysis of the current image and the destination image is performed locally and directly using the preset visual map to obtain the path planning information, which is beneficial to reducing the communication cost with the server.
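For example, in one possible implementation scenario, the local path planning against the preset visual map could be sketched as follows. The visual_map.localize(...) and visual_map.graph interfaces are assumptions, and Dijkstra's algorithm is used here only as one possible shortest-path search over the map's walkable graph.

```python
import heapq
from typing import Dict, List, Tuple


def plan_path_locally(visual_map, current_image, destination_image) -> List[str]:
    """Sketch of local path planning against a preset visual map.

    Assumed (not prescribed) interfaces: ``visual_map.localize(image)`` returns
    the id of the nearest map node, and ``visual_map.graph`` is an adjacency
    dict of node -> {neighbor: edge cost} describing walkable connections.
    A path from the current position to the destination is assumed to exist.
    """
    start = visual_map.localize(current_image)     # node nearest the current position
    goal = visual_map.localize(destination_image)  # node nearest the destination

    # Dijkstra's shortest path over the walkable graph yields the path planning information.
    dist: Dict[str, float] = {start: 0.0}
    prev: Dict[str, str] = {}
    queue: List[Tuple[float, str]] = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, cost in visual_map.graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))

    # Reconstruct the node sequence from the destination back to the current position.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return list(reversed(path))
```

The resulting node sequence can then drive the display of the AR indication mark on the current image.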
In some disclosed embodiments, the angle between the shooting view direction of each of the current image and the destination image and the horizontal plane is within a preset angle range.
Different from the foregoing embodiment, constraining the shooting view angles of the current image and the destination image to be within the preset angle range greatly reduces image content such as the ground and the sky that is useless for positioning, which is beneficial to improving the success rate and accuracy of positioning.
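For example, the constraint on the shooting view angle could be enforced at capture time as in the following sketch. The ±15° bound is an illustrative assumption, since the embodiments only require some preset angle range that limits ground and sky content.

```python
def view_angle_acceptable(pitch_deg: float, max_abs_pitch_deg: float = 15.0) -> bool:
    """Return True when the angle between the shooting direction and the
    horizontal plane is within the preset range (assumed here to be +/-15 degrees)."""
    return abs(pitch_deg) <= max_abs_pitch_deg


# Usage: prompt the user to re-shoot when the camera points too far towards the
# sky or the ground, since such images contain little content useful for
# visual positioning.
if not view_angle_acceptable(pitch_deg=40.0):
    print("Please hold the device level and capture the image again.")
```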
Referring to fig. 6, fig. 6 is a schematic diagram of a framework of a mobile terminal 60 according to an embodiment of the present application. The mobile terminal 60 includes a memory 61 and a processor 62 coupled to each other, and the processor 62 is configured to execute program instructions stored in the memory 61 to implement the steps of any of the navigation method embodiments described above. In one specific implementation scenario, the mobile terminal 60 may include, but is not limited to, a mobile phone, a tablet computer, smart glasses, and the like.
Specifically, the processor 62 is configured to control itself and the memory 61 to implement the steps of any of the navigation method embodiments described above. The processor 62 may also be referred to as a CPU (Central Processing Unit). The processor 62 may be an integrated circuit chip having signal processing capabilities. The processor 62 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. In addition, the processor 62 may be jointly implemented by a plurality of integrated circuit chips.
By the aid of the scheme, navigation reliability can be improved.
Referring to fig. 7, fig. 7 is a block diagram illustrating an embodiment of a computer-readable storage medium 70 according to the present application. The computer-readable storage medium 70 stores program instructions 701 executable by a processor, and the program instructions 701 are used to implement the steps of any of the navigation method embodiments described above.
By the aid of the scheme, navigation reliability can be improved.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (13)

1. A navigation method, comprising:
acquiring path planning information from a current position to a destination; wherein the path planning information is obtained by using a current image taken at the current position and a destination image of the destination;
and displaying an AR indication mark on the current image based on the path planning information.
2. The method of claim 1, wherein prior to obtaining path planning information for a current location to a destination, the method further comprises:
acquiring the destination image, and sending the current image and the destination image to a server;
the acquiring of the path planning information from the current position to the destination includes:
receiving the path planning information obtained by the server performing positioning analysis on the current image and the destination image by using a preset visual map;
the current position and the destination are both in a preset scene, and the preset visual map is created by using video data shot for the preset scene.
3. The method of claim 2, wherein the obtaining the destination image comprises:
receiving an image shot by a mobile terminal at the destination as the destination image; or,
and downloading the destination image from the Internet under the condition that the destination image is not received within a preset waiting time.
4. The method according to any one of claims 1 to 3, wherein the displaying an AR indicator on the current image based on the path planning information comprises:
determining position information of the current position by using the current image;
and updating the AR indication mark based on the path planning information and the position information, and displaying the updated AR indication mark on the current image.
5. The method of claim 4, wherein determining the location information of the current location using the current image comprises:
positioning and analyzing the current image by using a preset local positioning mode to obtain first position information of the current position;
and under the condition that the current moment meets a preset condition, sending the current image to a server, receiving second position information obtained by the server through positioning analysis on the current image, and performing position correction on the first position information by using the second position information to obtain the position information of the current position.
6. The method according to claim 5, wherein the preset condition comprises: the time length from the last position correction to the current moment reaches a preset correction time length;
and/or the preset local positioning mode comprises at least one of the following: simultaneous localization and mapping (SLAM), and inertial navigation.
7. The method according to claim 5, wherein the performing the location correction on the first location information by using the second location information to obtain the location information of the current location at least comprises:
and using the second position information as the position information of the current position.
8. The method of claim 4, wherein after determining the location information of the current location using the current image, the method further comprises:
sending the position information to a first target object;
and/or detecting whether a second target object exists in the current image or not under the condition that the position information is located in the preset destination range, and outputting a target area of the second target object in the current image under the condition that the second target object exists in the current image.
9. The method of claim 1, wherein the obtaining path planning information from a current location to a destination comprises:
acquiring the destination image;
and carrying out positioning analysis on the current image and the destination image by using a preset visual map to obtain the path planning information.
10. The method according to any one of claims 1 to 9, wherein an angle between a photographing angle of view of the current image and the destination image and a horizontal plane is within a preset angle range.
11. A navigation device, comprising:
the route planning module is used for acquiring route planning information from a current position to a destination; wherein the path planning information is obtained by using a current image taken at the current position and a destination image of the destination;
and the path indicating module is used for displaying an AR indicating identifier on the current image based on the path planning information.
12. A mobile terminal, characterized in that it comprises a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the navigation method according to any one of claims 1 to 10.
13. A computer-readable storage medium having stored thereon program instructions, which when executed by a processor implement the navigation method of any one of claims 1 to 10.
CN202110297597.0A 2021-03-19 2021-03-19 Navigation method and related device, mobile terminal and computer readable storage medium Pending CN113063421A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110297597.0A CN113063421A (en) 2021-03-19 2021-03-19 Navigation method and related device, mobile terminal and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113063421A true CN113063421A (en) 2021-07-02

Family

ID=76562623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110297597.0A Pending CN113063421A (en) 2021-03-19 2021-03-19 Navigation method and related device, mobile terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113063421A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060134593A (en) * 2005-06-23 2006-12-28 엘지전자 주식회사 Method for guiding road using camera image
CN103529468A (en) * 2013-10-08 2014-01-22 百度在线网络技术(北京)有限公司 Positioning method, positioning system and mobile terminal for wearable devices and wearable device
CN105318881A (en) * 2014-07-07 2016-02-10 腾讯科技(深圳)有限公司 Map navigation method, and apparatus and system thereof
CN106646539A (en) * 2016-12-02 2017-05-10 上海华测导航技术股份有限公司 Method and system for testing GNSS (Global Navigation Satellite System) receiver heading angle
CN107084736A (en) * 2017-04-27 2017-08-22 维沃移动通信有限公司 A kind of air navigation aid and mobile terminal
US20180266840A1 (en) * 2017-03-20 2018-09-20 International Business Machines Corporation Short-distance navigation provision
CN109521768A (en) * 2018-11-16 2019-03-26 楚天智能机器人(长沙)有限公司 A kind of path method for correcting error of the AGV trolley based on double PID controls
CN109542120A (en) * 2018-09-27 2019-03-29 易瓦特科技股份公司 The method and device that target object is tracked by unmanned plane
CN111076738A (en) * 2019-12-26 2020-04-28 上海擎感智能科技有限公司 Navigation path planning method, planning device, storage medium and electronic equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114167985A (en) * 2021-11-29 2022-03-11 中国科学院计算机网络信息中心 Emergency task augmented reality application method and system based on 5G
CN114167985B (en) * 2021-11-29 2022-08-12 中国科学院计算机网络信息中心 Emergency task augmented reality application method and system based on 5G
WO2024099238A1 (en) * 2022-11-11 2024-05-16 北京字跳网络技术有限公司 Assistive voice navigation method and apparatus, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN108413975B (en) Map acquisition method and system, cloud processor and vehicle
WO2022036980A1 (en) Pose determination method and apparatus, electronic device, storage medium, and program
US9699375B2 (en) Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
KR101972374B1 (en) Apparatus and method for identifying point of interest in contents sharing system
CN110246182B (en) Vision-based global map positioning method and device, storage medium and equipment
EP2455713B1 (en) Building directory aided navigation
US20210097103A1 (en) Method and system for automatically collecting and updating information about point of interest in real space
KR20110032765A (en) Apparatus and method for providing service using a sensor and image recognition in portable terminal
JP2017198799A (en) Information collection system
US20160307370A1 (en) Three dimensional navigation among photos
CN109916408A (en) Robot indoor positioning and air navigation aid, device, equipment and storage medium
CN113063421A (en) Navigation method and related device, mobile terminal and computer readable storage medium
CN112560769B (en) Method for detecting obstacle, electronic device, road side device and cloud control platform
CN112595728B (en) Road problem determination method and related device
WO2021088497A1 (en) Virtual object display method, global map update method, and device
US10846940B1 (en) Multi-modality localization of users
CN112788443B (en) Interaction method and system based on optical communication device
CN112422653A (en) Scene information pushing method, system, storage medium and equipment based on location service
CN116295406A (en) Indoor three-dimensional positioning method and system
CN114608591B (en) Vehicle positioning method and device, storage medium, electronic equipment, vehicle and chip
JP2016133701A (en) Information providing system and information providing method
WO2019127320A1 (en) Information processing method and apparatus, cloud processing device, and computer program product
WO2017003825A1 (en) Hypotheses line mapping and verification for 3d maps
WO2019188429A1 (en) Moving body management device, moving body management system, moving body management method, and computer program
JP7075090B1 (en) Information processing system and information processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination